GAN BERT: Generative Adversarial Learning for Robust Text Classification (Paper Explained) #GANBERT
GANs are a powerful way to generate pseudo data for training models. This paper shows how to integrate a semi-supervised GAN with BERT, the most popular pre-trained NLP model.
Connect
Linkedin https://www.linkedin.com/in/xue-yong-fu-955723a6/
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com
0:00 - Intro
2:19 - Semi-supervised GANs
4:00 - Discriminator Loss Function
6:24 - Generator Loss Function
9:06 - GAN-BERT
11:59 - Unlabeled Real Examples
13:36 - Experiments
17:32 - Takeaways
GAN-BERT: Generative Adversarial Learning for Robust Text Classification with a Bunch of Labeled Examples
Paper: https://www.aclweb.org/anthology/2020.acl-main.191/
Abstract
Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks. However, most of the adopted benchmarks are made of (sometimes hundreds of) thousands of examples. In many real scenarios, obtaining high-quality annotated data is expensive and time-consuming; in contrast, unlabeled examples characterizing the target task can be, in general, easily collected. One promising method to enable semi-supervised learning has been proposed in image processing, based on Semi-Supervised Generative Adversarial Networks. In this paper, we propose GAN-BERT that extends the fine-tuning of BERT-like architectures with unlabeled data in a generative adversarial setting. Experimental results show that the requirement for annotated examples can be drastically reduced (up to only 50-100 annotated examples), still obtaining good performances in several sentence classification tasks.
Code: https://github.com/crux82/ganbert
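The core idea from the abstract, a discriminator that labels examples into the k real task classes plus one extra "fake" class, can be sketched with plain NumPy. This is a hedged illustration of the SS-GAN discriminator objective, not the authors' code (their release at the GitHub link above is TensorFlow-based); the logits here stand in for what BERT's [CLS] representation would produce after a classification head, and all shapes and names are illustrative.

```python
import numpy as np

K = 2            # number of real task classes (illustrative)
FAKE = K         # index of the extra "generated" class, so logits have K+1 columns

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_loss(logits_lab, labels, logits_unlab, logits_fake):
    """Three-part SS-GAN discriminator loss: supervised + real/fake terms."""
    # 1) standard cross-entropy on the handful of labeled examples
    p_lab = softmax(logits_lab)
    l_sup = -np.log(p_lab[np.arange(len(labels)), labels] + 1e-8).mean()
    # 2) unlabeled real examples: penalize probability mass on the fake class
    p_unlab = softmax(logits_unlab)
    l_real = -np.log(1.0 - p_unlab[:, FAKE] + 1e-8).mean()
    # 3) generated examples: the discriminator should assign them to FAKE
    p_fake = softmax(logits_fake)
    l_fake = -np.log(p_fake[:, FAKE] + 1e-8).mean()
    return l_sup + l_real + l_fake

# smoke test with random "features" in place of BERT outputs
rng = np.random.default_rng(0)
loss = discriminator_loss(rng.normal(size=(4, K + 1)),
                          np.array([0, 1, 0, 1]),
                          rng.normal(size=(8, K + 1)),
                          rng.normal(size=(8, K + 1)))
```

The second term is what lets unlabeled data help: real-but-unlabeled sentences only need to be pushed away from the fake class, so no task label is required for them.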
Video "GAN BERT: Generative Adversarial Learning for Robust Text Classification (Paper Explained) #GANBERT" from the channel Deep Learning Explainer
Video information
August 6, 2020, 22:20:40
Duration: 00:18:43
Other videos from the channel
- Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
- Mixing BERT with Categorical and Numerical Features
- 247 - Conditional GANs and their applications
- Text Classification | Sentiment Analysis with BERT using huggingface, PyTorch and Python Tutorial
- I built the same model with TensorFlow and PyTorch | Which Framework is better?
- Efficient One Pass End to End Entity Linking for Questions (Paper Explained)
- 258 - Semi-supervised learning with GANs
- XLNet: Generalized Autoregressive Pretraining for Language Understanding
- A Friendly Introduction to Generative Adversarial Networks (GANs)
- Simple Deep Neural Networks for Text Classification
- Sandwich Transformer: Improving Transformer Models by Reordering their Sublayers
- NLP - Text Preprocessing and Text Classification (using Python)
- What is BERT? | Deep Learning Tutorial 46 (Tensorflow, Keras & Python)
- Distilling Task Specific Knowledge from BERT into Simple Neural Networks (paper explained)
- TransGAN: Two Transformers Can Make One Strong GAN (Machine Learning Research Paper Explained)
- L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
- The Math Behind Generative Adversarial Networks Clearly Explained!
- WGAN implementation from scratch (with gradient penalty)
- Understand the Math and Theory of GANs in ~ 10 minutes
- Multilabel text classification using only few labeled examples | GAN-BERT Explained