BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper Explained)
This video explains a legendary paper, BERT. BERT leverages the Transformer encoder and introduces an innovative way to pre-train language models: masked language modeling. It has had a significant influence on how people approach NLP problems and has inspired many follow-up studies and BERT variants.
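As a quick, hands-on illustration of masked language modeling (my own sketch, not code from the video), the Hugging Face transformers library linked under Code below can be used like this:

# Minimal masked-language-modeling demo (illustrative sketch only).
# BERT predicts the [MASK] token using context from both the left and the right.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))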
0:00 - Intro
1:32 - Transformers vs. LSTMs
3:34 - Pre-BERT times
8:22 - Model architecture
9:46 - WordPiece embeddings
14:25 - Special tokens
16:42 - Input representations
18:15 - Masked language modeling
20:03 - Mismatch between pre-training and fine-tuning
23:21 - Next sentence prediction
26:28 - Pre-training data
30:57 - End-to-end fine-tuning
34:45 - SQuAD
36:57 - Ablation over pre-training tasks
41:37 - Ablation over model size
43:17 - Feature-based approach with BERT
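To make the WordPiece embeddings, special tokens, and input representations sections above concrete, here is a small sketch with the Hugging Face BERT tokenizer (one of the code links below; the example sentences are my own):

# Illustrative sketch of WordPiece tokenization and BERT's input packing.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Words missing from the vocabulary are split into subword pieces marked with "##".
print(tokenizer.tokenize("BERT uses WordPiece embeddings"))

# A sentence pair is packed as [CLS] A [SEP] B [SEP]; token_type_ids (segment
# embeddings) tell the model which tokens belong to sentence A and which to B.
encoded = tokenizer("How old are you?", "I am six years old.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded["token_type_ids"])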
Connect
LinkedIn https://www.linkedin.com/in/xue-yong-fu-955723a6/
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com
Related Videos:
Transformer explained
https://youtu.be/ELTGIye424E
Introduction of GPT-3: The Most Powerful Language Model Ever
https://youtu.be/Rv5SeM7LxLQ
Paper
https://arxiv.org/abs/1810.04805
Code
https://github.com/google-research/bert (TensorFlow)
https://github.com/huggingface/transformers (PyTorch)
Abstract
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
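The "one additional output layer" recipe from the abstract can be sketched with the PyTorch implementation linked above; the toy batch, labels, and hyperparameters here are placeholders, not the paper's setup:

# Fine-tuning sketch: a fresh classification head on top of the pre-trained
# encoder, with all parameters updated end to end.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # toy sentiment labels

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss  # cross-entropy over the [CLS]-based pooled output
loss.backward()
optimizer.step()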
A video from the channel Deep Learning Explainer.