
Question and Answer Test-Train Overlap in Open Domain Question Answering Datasets

What happens when your test and training datasets overlap too much? How badly does it hurt model generalization? This paper provides an empirical measurement of the impact.

Open-domain question answering is a popular research area, with many strong models that reach human-level performance. However, there is a problem with the frequently used research datasets. This paper explores test-train data overlap in three of the most popular open-domain QA datasets and measures how much it inflates model performance.
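A minimal sketch (in Python, not the authors' released code) of how such answer overlap between splits can be estimated; the `{"question": ..., "answers": [...]}` record format and the simple normalization are assumptions made for illustration:

```python
# Hypothetical sketch: estimate how many test answers also appear
# somewhere in the training set. Assumes each split is a list of
# {"question": str, "answers": [str, ...]} dicts.
import string


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before comparison."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def answer_overlap(train, test) -> float:
    """Fraction of test items whose normalized answer occurs among training answers."""
    train_answers = {normalize(a) for item in train for a in item["answers"]}
    hits = sum(
        any(normalize(a) in train_answers for a in item["answers"]) for item in test
    )
    return hits / len(test)


if __name__ == "__main__":
    train = [{"question": "who wrote hamlet", "answers": ["William Shakespeare"]}]
    test = [{"question": "who is the author of hamlet", "answers": ["william shakespeare"]}]
    print(f"Answer overlap: {answer_overlap(train, test):.0%}")  # -> 100%
```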
Connect
Linkedin https://www.linkedin.com/in/xue-yong-fu-955723a6/
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com

0:00 - Intro
2:10 - Open domain question answering
3:39 - Datasets
5:27 - Random splitting
10:10 - Question overlap
12:48 - Statistics of overlap
13:52 - Implications for modelling
15:39 - Question memorization
17:23 - Simple duplicates
17:41 - Sophisticated duplicates
19:31 - Experiments
23:57 - Nearest neighbor models
28:51 - Summary

Learn more about open-domain question answering
https://youtu.be/JQ-bxQT5Qsw

Paper:
Question and Answer Test-Train Overlap in Open Domain Question Answering Datasets
https://arxiv.org/abs/2008.02637

Code:
https://github.com/facebookresearch/qa-overlap

Abstract
Ideally Open-Domain Question Answering models should exhibit a number of competencies, ranging from simply memorizing questions seen at training time, to answering novel question formulations with answers seen during training, to generalizing to completely novel questions with novel answers. However, single aggregated test set scores do not show the full picture of what capabilities models truly have. In this work, we perform a detailed study of the test sets of three popular open-domain benchmark datasets with respect to these competencies. We find that 60-70% of test-time answers are also present somewhere in the training sets. We also find that 30% of test-set questions have a near-duplicate paraphrase in their corresponding training sets. Using these findings, we evaluate a variety of popular open-domain models to obtain greater insight into what extent they can actually generalize, and what drives their overall performance. We find that all models perform substantially worse on questions that cannot be memorized from training sets, with a mean absolute performance difference of 63% between repeated and non-repeated data. Finally we show that simple nearest-neighbor models outperform a BART closed-book QA model, further highlighting the limitations of current open-domain QA models.
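The nearest-neighbor baseline mentioned in the abstract can be approximated in a few lines; TF-IDF cosine similarity and the toy data below are illustrative assumptions, not necessarily the exact retriever used in the paper:

```python
# Hedged sketch of a question nearest-neighbor baseline: answer a test question
# by copying the answer of the most similar training question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_questions = ["who wrote hamlet", "what is the capital of france"]
train_answers = ["William Shakespeare", "Paris"]
test_questions = ["who is the author of hamlet"]

# Fit TF-IDF on all questions so train and test share one vocabulary.
vectorizer = TfidfVectorizer().fit(train_questions + test_questions)
train_vecs = vectorizer.transform(train_questions)
test_vecs = vectorizer.transform(test_questions)

# For each test question, copy the answer of its nearest training question.
sims = cosine_similarity(test_vecs, train_vecs)
for question, row in zip(test_questions, sims):
    print(question, "->", train_answers[row.argmax()])
```

If a large share of test questions are paraphrases of training questions, even this trivial copy-the-neighbor strategy scores well, which is the paper's point about inflated benchmark numbers.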

Video "Question and Answer Test-Train Overlap in Open Domain Question Answering Datasets" from the Deep Learning Explainer channel
Video information
Published: August 31, 2020, 4:02:30
Duration: 00:30:44