Too many papers to read? Try TLDR - Extreme Summarization of Scientific Documents
There are over 300 deep learning papers published every day, which makes it very hard to keep up. This paper introduces a neat way to compress papers into extremely short summaries (TLDRs). Even better, Semantic Scholar uses the proposed method to power its TLDR feature, now available in beta for nearly 10 million papers!
0:00 - Too many papers
1:33 - What's special about this paper
2:25 - SciTLDR
5:16 - Controlled abstraction for TLDRs with title scaffolding
9:04 - During training
9:51 - Extractive summarization baselines
10:46 - Abstractive summarization baselines
11:14 - Input space
12:59 - Oracle
13:50 - ROUGE metrics
14:35 - Experiment results
17:25 - Model generated examples
17:59 - Demo - real-world application
Code and data:
TLDR: Extreme Summarization of Scientific Documents
https://github.com/allenai/scitldr
TLDR feature in Semantic Scholar: https://tldr.semanticscholar.org/
Abstract
We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations.
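The abstract describes CATTS as using title generation as an auxiliary training signal. A minimal sketch of how such multitask training data might be assembled, mixing title-generation and TLDR-generation examples distinguished by a control code appended to the source (the field names and control tokens here are illustrative assumptions, not the paper's exact implementation):

```python
def make_catts_examples(papers):
    """Build a multitask training set in the spirit of CATTS:
    each paper yields a title-generation example (the auxiliary
    'scaffold' task) plus one example per available TLDR.
    A control code appended to the source tells the seq2seq
    model which output to produce."""
    examples = []
    for p in papers:
        # Auxiliary task: generate the title from the abstract.
        examples.append({"source": p["abstract"] + " <|TITLE|>",
                         "target": p["title"]})
        # Main task: generate each gold TLDR (SciTLDR is multi-target).
        for tldr in p.get("tldrs", []):
            examples.append({"source": p["abstract"] + " <|TLDR|>",
                             "target": tldr})
    return examples

# Example usage with a toy paper record:
papers = [{"abstract": "We study extreme summarization.",
           "title": "Extreme Summarization",
           "tldrs": ["A method for one-sentence paper summaries."]}]
train_set = make_catts_examples(papers)
```

At inference time, one would append the TLDR control code to get the short summary rather than a title.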
Connect
LinkedIn https://www.linkedin.com/in/xue-yong-fu-955723a6/
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com
Video: Too many papers to read? Try TLDR - Extreme Summarization of Scientific Documents, from the channel Deep Learning Explainer