GPT-2: Language Models are Unsupervised Multitask Learners

A look at OpenAI's new GPT-2 model and the surrounding controversy.

https://blog.openai.com/better-language-models/

Abstract:
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
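
The zero-shot setup the abstract describes (condition the model on a document plus a question, then read the greedy continuation after an "A:" cue off as the answer) can be sketched with the HuggingFace transformers port of GPT-2. That library postdates the paper and is an assumption here, as are the example passage and question; this is a minimal illustration of the prompting pattern, not OpenAI's evaluation code.

# Minimal sketch of zero-shot QA with GPT-2, using the HuggingFace
# `transformers` port of the released checkpoints (assumed tooling;
# not the code used for the paper's CoQA evaluation).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # smallest public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Condition on a document plus a question; the trailing "A:" cues the
# model to produce an answer. Passage and question are made-up examples.
prompt = (
    "The Transformer architecture replaced recurrence with self-attention, "
    "allowing far more parallel training.\n"
    "Q: What did the Transformer replace recurrence with?\n"
    "A:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,                      # greedy decoding
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
# Strip the prompt and print only the generated answer.
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer.strip())

Scaling the same pattern from the 124M "gpt2" checkpoint to the 1.5B model is what the abstract's log-linear performance claim is about: the prompt format stays fixed while capacity grows.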

Authors:
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever

Video "GPT-2: Language Models are Unsupervised Multitask Learners" from the Yannic Kilcher channel.
Video information
Published: February 18, 2019, 21:11:42
Duration: 00:27:33
Other videos from the channel:
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution (Paper Explained)
Retentive Network: A Successor to Transformer for Large Language Models (Paper Explained)
Reinforced Self-Training (ReST) for Language Modeling (Paper Explained)
[ML News] GPT-4 solves MIT Exam with 100% ACCURACY | OpenLLaMA 13B released
Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust (Explained)
RWKV: Reinventing RNNs for the Transformer Era (Paper Explained)
Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Full Paper Review)
OpenAI suggests AI licenses (US Senate hearing on AI regulation w/ Sam Altman)
[ML News] Geoff Hinton leaves Google | Google has NO MOAT | OpenAI down half a billion
Scaling Transformer to 1M tokens and beyond with RMT (Paper Explained)
AI Alignment Livestream (aka OpenAssistant "Just Chatting")
OpenAssistant First Models are here! (Open-Source ChatGPT)
The biggest week in AI (GPT-4, Office Copilot, Google PaLM, Anthropic Claude & more)
GPT-4 is here! What we know so far (Full Analysis)
This ChatGPT Skill will earn you $10B (also, AI reads your mind!) | ML News
LLaMA: Open and Efficient Foundation Language Models (Paper Explained)
Open Assistant Inference Backend Development (Hands-On Coding)
OpenAssistant - ChatGPT's Open Alternative (We need your help!)
Open Assistant Live Coding (Open-Source ChatGPT Replication)
AI Essay Competition (lab42)
Open Assistant Live Coding (Open-Source ChatGPT Replication)