What Is A Language Model? GPT-3: Language Models Are Few-Shot Learners #GPT3 (part 2)
This video explains what a language model is and how you can leverage one to boost the performance of your NLP system. It also walks through a brief history of neural language models and how they are used in modern NLP systems. Since GPT-3 is itself a language model, understanding language models in general makes it much easier to interpret GPT-3.
Connect
Linkedin https://www.linkedin.com/in/xue-yong-fu-955723a6/
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com
0:00 - Intro
1:13 - What is a language model
3:32 - N-gram vs. context-aware
7:54 - Autoregressive vs. bidirectional
10:16 - History of neural language models
15:10 - How language models are used
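For reference, every flavor of language model listed above estimates the probability of a word sequence; autoregressive models such as GPT-3 factorize it left to right, predicting each token from the ones before it:

$$P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1})$$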
GPT-3 Explained Series:
Introduction of GPT-3: The Most Powerful Language Model Ever (part 1)
https://youtu.be/Rv5SeM7LxLQ
Language Models are Few-Shot Learners
https://arxiv.org/abs/2005.14165
Abstract
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
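To make the "tasks and few-shot demonstrations specified purely via text interaction" part concrete, here is a minimal sketch of what such a prompt can look like. The translation pairs follow the style of the paper's examples; how the completion is actually generated by a real model is assumed, not shown.

```python
# Minimal sketch of a few-shot prompt: the task description and demonstrations
# are plain text, and the model simply continues the pattern.
# No gradient updates or fine-tuning are involved.
prompt = (
    "Translate English to French:\n"   # task description
    "sea otter => loutre de mer\n"     # demonstration 1
    "peppermint => menthe poivrée\n"   # demonstration 2
    "cheese =>"                        # new example to complete
)
print(prompt)
# A GPT-3-style autoregressive model would be asked to generate the most
# likely continuation of this text, e.g. " fromage".
```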
Video: What Is A Language Model? GPT-3: Language Models Are Few-Shot Learners #GPT3 (part 2), from the Deep Learning Explainer channel.
Video information
Published: August 10, 2020
Duration: 00:19:53