# v4.10.0: LayoutLM-v2, LayoutXLM, BEiT

## LayoutLM-v2 and LayoutXLM

Four new models are released as part of the LayoutLM-v2 implementation: `LayoutLMv2Model`, `LayoutLMv2ForSequenceClassification`, `LayoutLMv2ForTokenClassification` and `LayoutLMv2ForQuestionAnswering`, in PyTorch.

The LayoutLMv2 model was proposed in *LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding* by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMv2 improves LayoutLM to obtain state-of-the-art results across several document image understanding benchmarks.

- Add LayoutLMv2 + LayoutXLM #12604 (@NielsRogge)

Compatible checkpoints can be found on the Hub: https://huggingface.co/models?filter=layoutlmv2

## BEiT

Three new models are released as part of the BEiT implementation: `BeitModel`, `BeitForMaskedImageModeling`, and `BeitForImageClassification`, in PyTorch.

The BEiT model was proposed in *BEiT: BERT Pre-Training of Image Transformers* by Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class of an image (as done in the original ViT paper), BEiT models are pre-trained to predict visual tokens from the codebook of OpenAI's DALL-E model given masked patches.

- Add BEiT #12994 (@NielsRogge)

Compatible checkpoints can be found on the Hub: https://huggingface.co/models?filter=beit

## Speech improvements

The Wav2Vec2 and HuBERT models now have a sequence classification head available.

- Add Wav2Vec2 & Hubert ForSequenceClassification #13153 (@anton-l)

## DeBERTa in TensorFlow (@kamalkraj)

The DeBERTa and DeBERTa-v2 models have been converted from PyTorch to TensorFlow.

- Deberta tf #12972 (@kamalkraj)
- Deberta_v2 tf #13120 (@kamalkraj)

## Flax model additions

EncoderDecoder, DistilBERT, and ALBERT now have support in Flax!
- FlaxEncoderDecoder, allowing Bert2Bert and Bert2GPT2 in Flax #13008 (@ydshieh)
- FlaxDistilBERT #13324 (@kamalkraj)
- FlaxAlBERT #13294 (@kamalkraj)

## TensorFlow examples

A new example has been added in TensorFlow: multiple choice! Data collators have become framework-agnostic and can now work for both TensorFlow and NumPy on top of PyTorch.

- Add TF multiple choice example #12865 (@Rocketknight1)
- TF/NumPy variants for all DataCollator classes #13105 (@Rocketknight1)

## Auto API refactor

The Auto APIs have been disentangled from all the other model modules of the Transformers library, so you can now safely import the Auto classes without importing all the models (and possibly getting errors if your setup is not compatible with one specific model). The actual model classes are only imported when needed.

- Disentangle auto modules from other modeling files #13023 (@sgugger)
- Fix AutoTokenizer when no fast tokenizer is available #13336 (@sgugger)

## Slight breaking change

When loading some kinds of corrupted state dictionaries, the `PreTrainedModel.from_pretrained` method would sometimes silently ignore weights. This now raises a real error.

- Fix from_pretrained with corrupted state_dict #12939 (@sgugger)

## General improvements and bugfixes

- Improving pipeline tests #12784 (@Narsil)
- Pin git python
- Add classifier_dropout to classification heads #12794 (@PhilipMay)
- Fix barrier for SM distributed #12853 (@sgugger)
- Add possibility to ignore imports in test_fetcher #12801 (@sgugger)
- Add accelerate to examples requirements #12888 (@sgugger)
- Fix documentation of BigBird tokenizer #12889 (@sgugger)
- Better heuristic for token-classification pipeline #12611 (@Narsil)
- Fix push_to_hub for TPUs #12895 (@sgugger)
- Seq2SeqTrainer: set max_length and num_beams only when non-None #12899 (@cchen-dialpad)
- [FLAX] Minor fixes in CLM example #12914 (@stefan-it)
- Correct validation_split_percentage argument from int (ex: 5) to float (0.05) #12897 (@Elysium1436)
- Fix typo in the example of MobileBertForPreTraining #12919 (@buddhics)
- Add option to set max_len in run_ner #12929 (@sgugger)
- Fix QA examples for roberta tokenizer #12928 (@sgugger)
- Print defaults when using --help for scripts #12930 (@sgugger)
- Fix StoppingCriteria ABC signature #12918 (@willfrey)
- Add missing @classmethod decorators #12927 (@willfrey)
- Fix distiller.py #12910 (@chutaklee)
- Update generation_logits_process.py #12901 (@willfrey)
- Update generation_logits_process.py #12900 (@willfrey)
- Update tokenization_auto.py #12896 (@willfrey)
- Fix docstring typo in tokenization_auto.py #12891 (@willfrey)
- [Flax] Correctly Add MT5 #12988 (@patrickvonplaten)
- ONNX v2 raises an Exception when using PyTorch
- Fix Trainer.evaluate() crash when using only tensorboardX #12963 (@aphedges)
- Fix typo in example of DPRReader #12954 (@tadejsv)
- Place BigBirdTokenizer in sentencepiece-only objects #12975 (@sgugger)
- Fix typo in example/text-classification README #12974 (@fullyz)
- Fix template for inputs docstrings #12976 (@sgugger)
- Fix Trainer.train(resume_from_checkpoint=False) is causing an exception #12981 (@PhilipMay)
- Cast logits from bf16 to fp32 at the end of TFT5 #12332 (@szutenberg)
- Update CANINE test #12453 (@NielsRogge)
- pad_to_multiple_of added to DataCollatorForWholeWordMask #12999 (@Aktsvigun)
- [Flax] Align jax flax device name #12987 (@patrickvonplaten)
- [Flax] Correct flax docs #12782 (@patrickvonplaten)
- T5: Create position related tensors directly on device instead of CPU #12846 (@armancohan)
- Skip ProphetNet test #12462 (@LysandreJik)
- Create perplexity.rst #13004 (@sashavor)
- GPT-Neo ONNX export #12911 (@michaelbenayoun)
- Update generate method - Fix floor_divide warning #13013 (@nreimers)
- [Flax] Correct pt to flax conversion if from base to head #13006 (@patrickvonplaten)
- [Flax T5] Speed up t5 training #13012 (@patrickvonplaten)
- FX submodule naming fix #13016 (@michaelbenayoun)
- T5 with past ONNX export #13014 (@michaelbenayoun)
- Fix ONNX test: Put smaller ALBERT model #13028 (@LysandreJik)
- Tpu tie weights #13030 (@sgugger)
- Use min version for huggingface-hub dependency #12961 (@lewtun)
- tfhub.de -> tfhub.dev #12565 (@abhishekkrthakur)
- [Flax] Refactor gpt2 & bert example docs #13024 (@patrickvonplaten)
- Add MBART to models exportable with ONNX #13049 (@LysandreJik)
- Add to ONNX docs #13048 (@LysandreJik)
- Fix small typo in M2M100 doc #13061 (@SaulLu)
- Add try-except for torch_scatter #13040 (@JetRunner)
- docs: add HuggingArtists to community notebooks #13050 (@AlekseyKorshuk)
- Fix ModelOutput instantiation from dictionaries #13067 (@sgugger)
- Roll out the test fetcher on push tests #13055 (@sgugger)
- Fix fallback of test_fetcher #13071 (@sgugger)
- Revert to all tests while we debug what's wrong #13072 (@sgugger)
- Use original key for label in DataCollatorForTokenClassification #13057 (@ibraheem-moosa)
- [Doctest] Setup, quicktour and task_summary #13078 (@sgugger)
- Add VisualBERT demo notebook #12263 (@gchhablani)
- Install git #13091 (@LysandreJik)
- Fix classifier dropout in AlbertForMultipleChoice #13087 (@ibraheem-moosa)
- Doctests job #13088 (@LysandreJik)
- Fix VisualBert Embeddings #13017 (@gchhablani)
- Proper import for unittest.mock.patch #13085 (@sgugger)
- Reactivate test fetchers on scheduled test with proper git install #13097 (@sgugger)
- Change a parameter name in FlaxBartForConditionalGeneration.decode() #13074 (@ydshieh)
- [Flax/JAX] Run jitted tests at every commit #13090 (@patrickvonplaten)
- Rely on huggingface_hub for common tools #13100 (@sgugger)
- [FlaxCLIP] allow passing params to image and text feature methods #13099 (@patil-suraj)
- Ci last fix #13103 (@sgugger)
- Improve type checker performance #13094 (@bschnurr)
- Fix VisualBERT docs #13106 (@gchhablani)
- Fix CircleCI nightly tests #13113 (@sgugger)
- Create py.typed #12893 (@willfrey)
- Fix flax gpt2 hidden states #13109 (@ydshieh)
- Moving fill-mask pipeline to new testing scheme #12943 (@Narsil)
- Fix omitted lazy import for xlm-prophetnet #13052 (@minwhoo)
- Fix classifier dropout in BertForMultipleChoice #13129 (@mandelbrot-walker)
- Fix frameworks table so it's alphabetical #13118 (@osanseviero)
- [Feature Processing Sequence] Remove duplicated code #13051 (@patrickvonplaten)
- Ci continue through smi failure #13140 (@LysandreJik)
- Fix missing seq_len in electra model when inputs_embeds is used #13128 (@sararb)
- Optimizes ByT5 tokenizer #13119 (@Narsil)
- Add splinter #12955 (@oriram)
- [AutoFeatureExtractor] Fix loading of local folders if config.json exists #13166 (@patrickvonplaten)
- Fix generation docstrings regarding input_ids=None #12823 (@jvamvas)
- Update namespaces inside torch.utils.data to the latest #13167 (@qqaatw)
- Fix the loss calculation of ProphetNet #13132 (@StevenTang1998)
- Fix LUKE tests #13183 (@NielsRogge)
- Add min and max question length options to TapasTokenizer #12803 (@NielsRogge)
- SageMaker: Fix sagemaker DDP & metric logs #13181 (@philschmid)
- Correcting group beam search function output score bug #13211 (@sourabh112)
- Change how the "additional_special_tokens" argument in the ".from_pretrained" method of the tokenizer is taken into account #13056 (@SaulLu)
- Remove unwanted control-flow code from DeBERTa-V2 #13145 (@kamalkraj)
- Fix load_tf_weights alias #13159 (@qqa
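The Auto API refactor rests on a simple idea: keep a registry of names, and only import a model's module at the moment a class is requested. Here is a minimal, framework-free sketch of that lazy-import pattern. `LazyAutoFactory` and `_LAZY_REGISTRY` are hypothetical names, and stdlib classes stand in for model classes; this is not the actual Transformers implementation.

```python
import importlib

# Hypothetical registry mapping a short name to a (module, class) pair.
# Nothing is imported at registration time.
_LAZY_REGISTRY = {
    "json-decoder": ("json", "JSONDecoder"),
    "html-parser": ("html.parser", "HTMLParser"),
}

class LazyAutoFactory:
    """Auto-style factory that defers importing the backing module."""

    @staticmethod
    def for_type(name):
        module_name, class_name = _LAZY_REGISTRY[name]
        # The import happens only here, on first request, so an
        # unrelated broken backend cannot break this import path.
        module = importlib.import_module(module_name)
        return getattr(module, class_name)

decoder_cls = LazyAutoFactory.for_type("json-decoder")
print(decoder_cls.__name__)  # JSONDecoder
```

Importing the file that defines the registry stays cheap: the cost of each backing module is paid only by callers that actually request a class from it.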
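The "slight breaking change" above turns a silent weight-loading mismatch into a hard failure. A minimal sketch of that behavior, with dicts standing in for real tensors (the helper name is hypothetical, not the Transformers API):

```python
def load_state_dict_strict(model_state, loaded_state):
    """Fail loudly on a mismatched state dict instead of silently
    ignoring weights (sketch of the new from_pretrained behavior)."""
    missing = sorted(set(model_state) - set(loaded_state))
    unexpected = sorted(set(loaded_state) - set(model_state))
    if missing or unexpected:
        raise RuntimeError(
            f"Error(s) in loading state_dict: missing keys {missing}, "
            f"unexpected keys {unexpected}"
        )
    model_state.update(loaded_state)  # keys match: copy the weights over
    return model_state

# A matching checkpoint loads fine...
load_state_dict_strict({"encoder.weight": None}, {"encoder.weight": [1.0]})

# ...while a corrupted one now raises instead of being ignored.
try:
    load_state_dict_strict({"encoder.weight": None}, {"decoder.weight": [1.0]})
except RuntimeError as err:
    print("raised:", err)
```

Code that previously relied on the silent behavior should catch the error and decide explicitly whether a partial load is acceptable.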
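The framework-agnostic data collators mentioned above work because collation itself (grouping and padding) needs no tensor library. A toy sketch of the idea, using plain Python lists (the function name is illustrative, not the `DataCollator` API, which is far more featureful):

```python
def collate_with_padding(features, pad_token_id=0):
    """Pad a batch of token-id lists to a common length.
    Plain lists in, plain lists out, so the result can be wrapped in
    PyTorch, TensorFlow, or NumPy tensors as a final step."""
    max_len = max(len(ids) for ids in features)
    return {
        "input_ids": [
            ids + [pad_token_id] * (max_len - len(ids)) for ids in features
        ],
        "attention_mask": [
            [1] * len(ids) + [0] * (max_len - len(ids)) for ids in features
        ],
    }

batch = collate_with_padding([[101, 2054, 102], [101, 102]])
print(batch["input_ids"])       # [[101, 2054, 102], [101, 102, 0]]
print(batch["attention_mask"])  # [[1, 1, 1], [1, 1, 0]]
```

Keeping the padding logic framework-free is what lets one collator implementation serve PyTorch, TensorFlow, and NumPy callers alike.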