Pitfalls of Static Language Modelling | NLP Journal Club
Link: https://arxiv.org/abs/2102.01951
Abstract: Our world is open-ended, non-stationary and constantly evolving; thus what we talk about and how we talk about it changes over time. This inherent dynamic nature of language comes in stark contrast to the current static language modelling paradigm, which constructs training and evaluation sets from overlapping time periods. Despite recent progress, we demonstrate that state-of-the-art Transformer models perform worse in the realistic setup of predicting future utterances from beyond their training period -- a consistent pattern across three datasets from two domains. We find that, while increasing model size alone -- a key driver behind recent progress -- does not provide a solution for the temporal generalization problem, having models that continually update their knowledge with new information can indeed slow down the degradation over time. Hence, given the compilation of ever-larger language modelling training datasets, combined with the growing list of language-model-based NLP applications that require up-to-date knowledge about the world, we argue that now is the right time to rethink our static language modelling evaluation protocol, and develop adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world.
Video "Pitfalls of Static Language Modelling | NLP Journal Club" from the channel The NLP Lab.
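To make the contrast the abstract draws concrete, below is a minimal Python sketch of the two evaluation protocols: the standard static setup, where train and test are drawn from overlapping time periods, versus the temporal setup, where the model is evaluated on utterances strictly beyond its training period and optionally updated as new data arrives. The corpus, `update`, and `perplexity` here are hypothetical placeholders, not the paper's actual datasets or implementation.

```python
from datetime import date

# Toy timestamped corpus; the paper uses news and scientific-text corpora,
# and this data exists only to make the split logic concrete.
corpus = [
    (date(2017, 5, 1), "pre-cutoff document ..."),
    (date(2018, 3, 9), "another pre-cutoff document ..."),
    (date(2019, 11, 2), "document from the first future slice ..."),
    (date(2020, 6, 15), "document from a later future slice ..."),
]

def slice_by_year(docs, year):
    """All documents whose timestamp falls in the given calendar year."""
    return [text for ts, text in docs if ts.year == year]

def update(model, docs):
    """Hypothetical continual-update step (e.g. a few fine-tuning steps)."""
    return model  # placeholder

def perplexity(model, docs):
    """Hypothetical held-out perplexity; lower is better."""
    return 0.0  # placeholder

model = object()  # stand-in for a Transformer LM trained on pre-2019 text

# Static protocol: train and test sampled from the same, overlapping period.
# Temporal protocol (sketched here): evaluate only on slices that postdate
# the training period, optionally updating the model after each slice.
for year in (2019, 2020):
    future_docs = slice_by_year(corpus, year)
    print(year, perplexity(model, future_docs))  # degradation grows with the gap
    model = update(model, future_docs)           # continual updates slow it down
```

Under this protocol, the abstract's finding is that perplexity on future slices worsens as the gap from the training cutoff grows, that scaling the model alone does not close the gap, and that the continual-update step is what slows the degradation.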
Other videos from the channel
- CTRLsum: Towards Generic Controllable Text Summarization
- Jukebox: A Generative Model for Music | NLP Journal Club
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks | NLP Journal Club
- Memorizing Transformers
- Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval | NLP Journal Club
- How Close is ChatGPT to Human Experts?
- Best General-purpose NLP Libraries to Use in 2021
- Who will Win the Large Language Model App Race? with @Slator
- Learning to Reason and Memorize with Self-Notes | Paper summary
- Falcon LLM: the Best Open-source LLM Available at the Moment
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Paper summary
- When to use a large language model? 4 points to consider in 2023.
- QURIOUS: Question Generation Pretraining for Text Generation | NLP Journal Club
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering
- A Distributional Approach to Controlled Text Generation | NLP Journal Club
- REALM: Retrieval-Augmented Language Model Pre-Training | NLP Journal Club
- Multi-scale Transformer Language Models
- *Paper summary* ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models
- Mirror-Generative Neural Machine Translation | NLP Journal Club
- Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT | NLP Journal Club