[BERT] Pretrained Deep Bidirectional Transformers for Language Understanding (algorithm) | TDLS
Toronto Deep Learning Series
Host: Ada + @ML Explained - Aggregate Intellect - AI.SCIENCE
Date: Nov 6th, 2018
Aggregate Intellect is a Global Marketplace where ML Developers Connect, Collaborate, and Build.
-Connect with peers & experts at https://ai.science
-Join our Slack Community: https://join.slack.com/t/aisc-to/shared_invite/zt-f5zq5l35-PSIJTFk4v60FML177PgsPg
-Check out the user-generated Recipes that provide step-by-step, bite-sized guides on how to do various tasks: https://ai.science/recipes
For details including slides, visit https://aisc.ai.science/events/2018-11-06
Paper: https://arxiv.org/abs/1810.04805
Speaker: Danny Luo (Dessa)
https://dluo.me/
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement), and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
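To make the fine-tuning recipe described in the abstract concrete, below is a minimal sketch of adding a single task-specific output layer on top of a pre-trained BERT encoder. It is not from the talk or the paper: it assumes the Hugging Face transformers library and PyTorch, the bert-base-uncased checkpoint, and a hypothetical two-class task purely for illustration.

```python
# Sketch (assumed setup, not the authors' implementation): fine-tuning BERT
# means reusing the pre-trained encoder and adding just one output layer.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

# The "one additional output layer": a linear classifier over the pooled
# [CLS] representation. num_labels=2 is a hypothetical binary task.
num_labels = 2
classifier = torch.nn.Linear(encoder.config.hidden_size, num_labels)

# Tokenize a sample sentence and run it through the bidirectional encoder,
# which attends to both left and right context in every layer.
inputs = tokenizer("BERT conditions on both left and right context.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

logits = classifier(outputs.pooler_output)  # shape: (1, num_labels)

# During fine-tuning, the encoder and this new layer are trained end-to-end
# on the downstream task (e.g. with a cross-entropy loss on these logits).
```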