XLNet: Generalized Autoregressive Pretraining for Language Understanding
Abstract:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
Authors: Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
https://arxiv.org/abs/1906.08237
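The objective named in the abstract, maximizing the expected log-likelihood over permutations of the factorization order, can be conveyed with a short Monte Carlo sketch: sample one permutation z of the positions, then score each token autoregressively given only the positions that precede it in z. The following is a minimal, hypothetical Python/PyTorch illustration, not the paper's implementation; the scorer score_position and its signature are assumptions introduced here.

import torch

def permutation_lm_loss(score_position, tokens):
    # tokens: 1-D LongTensor of token ids, length T.
    # score_position(tokens, visible, pos) -> vocab-size logits for position
    # `pos`, attending only to the positions in `visible` (hypothetical API).
    T = tokens.size(0)
    z = torch.randperm(T)                      # sample one factorization order
    loss = torch.tensor(0.0)
    for t in range(1, T):                      # z[0] has no context to condition on
        visible = z[:t]                        # positions preceding z[t] in the order
        logits = score_position(tokens, visible, z[t])
        log_probs = torch.log_softmax(logits, dim=-1)
        loss = loss - log_probs[tokens[z[t]]]  # NLL of the true token at z[t]
    return loss / (T - 1)

# Example with a dummy scorer (uniform logits over a toy vocabulary of 10 ids):
# dummy = lambda toks, visible, pos: torch.zeros(10)
# loss = permutation_lm_loss(dummy, torch.randint(0, 10, (8,)))

In the paper this objective is realized with two-stream self-attention, so the Transformer can condition on a target position without seeing its content; the loop above only conveys the objective, not that mechanism.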
Video: XLNet: Generalized Autoregressive Pretraining for Language Understanding, from the Yannic Kilcher channel