Too many papers to read? Try TLDR - Extreme Summarization of Scientific Documents

Over 300 deep learning papers are published every day, and I find it very hard to keep up. This paper introduces a cool way to compress papers into extremely short summaries (TLDRs). More interestingly, Semantic Scholar uses the proposed method to power its TLDR feature, which is now available in beta for nearly 10 million papers!

0:00 - Too many papers
1:33 - What's special about this paper
2:25 - SciTLDR
5:16 - Controlled abstraction for TLDRs with title scaffolding
9:04 - During training
9:51 - Extractive summarization baselines
10:46 - Abstractive summarization baselines
11:14 - Input space
12:59 - Oracle
13:50 - ROUGE metrics (see the scoring sketch after these timestamps)
14:35 - Experiment results
17:25 - Model generated examples
17:59 - Demo - real-world application
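
The video walks through ROUGE scoring at 13:50. As a quick reference, here is a minimal sketch of computing ROUGE-1/2/L with Google's rouge_score package (pip install rouge-score); the example strings are made up, and the paper's exact multi-target setup (e.g., taking the max over several reference TLDRs) may differ.

from rouge_score import rouge_scorer

# Score a hypothetical system TLDR against a hypothetical reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "We introduce TLDR generation for scientific papers."
prediction = "The paper introduces TLDR generation of scientific papers."

scores = scorer.score(reference, prediction)  # dict of Score named tuples
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")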

Code and data:
TLDR: Extreme Summarization of Scientific Documents
https://github.com/allenai/scitldr
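
To explore the data directly, here is a minimal sketch that loads SciTLDR through the Hugging Face datasets library. It assumes the dataset is mirrored on the Hub as allenai/scitldr with an "Abstract" configuration and source/target fields; check the repo above if those names differ.

from datasets import load_dataset

# Assumes a Hub mirror named "allenai/scitldr"; the "Abstract" config
# pairs each paper's abstract sentences with one or more gold TLDRs.
dataset = load_dataset("allenai/scitldr", "Abstract")

example = dataset["train"][0]
print(example["source"])  # abstract as a list of sentences (assumed field name)
print(example["target"])  # reference TLDR(s) (assumed field name)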

TLDR feature in Semantic Scholar: https://tldr.semanticscholar.org/
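
The TLDRs also surface programmatically through the Semantic Scholar Graph API. Below is a minimal sketch, assuming the tldr field is exposed as documented (not every paper has one); the paper ID is just an illustrative example.

import requests

# Request the machine-generated TLDR for one paper from the Graph API.
# PAPER_ID is an illustrative example ID, not a specific recommendation.
PAPER_ID = "649def34f8be52c8b66281af98ae884c09aef38b"

url = f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}"
resp = requests.get(url, params={"fields": "title,tldr"})
resp.raise_for_status()

data = resp.json()
print(data.get("title"))
tldr = data.get("tldr")
print(tldr["text"] if tldr else "No TLDR available for this paper.")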

Abstract
We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations.
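
To make the CATTS idea concrete: as the abstract says, titles serve as an auxiliary training signal. One way to realize that (and, per the video, roughly what the paper does) is to shuffle title-generation examples into the TLDR training data and append a control code to each source so a single model learns both tasks. The sketch below is illustrative; the control-token strings and record fields are assumptions, not the paper's exact choices.

import random

TLDR_CODE = "<|TLDR|>"    # assumed control-token spelling
TITLE_CODE = "<|TITLE|>"  # assumed control-token spelling

def build_catts_examples(papers):
    """Mix TLDR and title generation into one training set."""
    examples = []
    for p in papers:
        # Main task: abstract -> TLDR, tagged with the TLDR control code.
        examples.append((p["abstract"] + " " + TLDR_CODE, p["tldr"]))
        # Auxiliary task: abstract -> title, tagged with the title code.
        examples.append((p["abstract"] + " " + TITLE_CODE, p["title"]))
    random.shuffle(examples)  # interleave the two tasks during training
    return examples

papers = [{
    "abstract": "We study extreme summarization of scientific papers ...",
    "title": "TLDR: Extreme Summarization of Scientific Documents",
    "tldr": "A dataset and training strategy for one-sentence paper summaries.",
}]
for source, target in build_catts_examples(papers):
    print(source, "=>", target)

At inference time, appending the TLDR control code to a new abstract asks the fine-tuned model for a TLDR rather than a title.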

Connect
Linkedin https://www.linkedin.com/in/xue-yong-fu-955723a6/
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com

Video "Too many papers to read? Try TLDR - Extreme Summarization of Scientific Documents" from the channel Deep Learning Explainer
Video information
Uploaded: November 23, 2020, 4:04:39
Duration: 00:21:15