
Data BAD | What Will it Take to Fix Benchmarking for NLU?

Ms. Coffee Bean explains and comments on the sobering take of the paper "What Will it Take to Fix Benchmarking in Natural Language Understanding?"

See more videos from Ms. Coffee Bean about natural language understanding:
📺 The road to NLU: https://youtube.com/playlist?list=PLpZBeKTZRGPMjF-Ob-NYjaTtewbMNXKcU

► Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
donor, Dres. Trost GbR, Yannik Schneider
➡️ AI Coffee Break Merch! 🛍️ https://aicoffeebreak.creator-spring.com/

Paper:
📜 Bowman, Samuel R., and George E. Dahl. "What Will it Take to Fix Benchmarking in Natural Language Understanding?" arXiv preprint arXiv:2104.02145 (2021). https://arxiv.org/abs/2104.02145

🔗 SuperGLUE: https://super.gluebenchmark.com/tasks
🔗 WiC: The Word-in-Context Dataset (English): https://pilehvar.github.io/wic/

Outline:
00:00 NLU Benchmarking – Motivation
01:04 How to measure NLU advances?
02:31 Why is NLU benchmarking broken?
04:43 What are the fixes?
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, buy us a coffee to help with our Coffee Bean production! ☕
Patreon: https://www.patreon.com/AICoffeeBreak
Ko-fi: https://ko-fi.com/aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

🔗 Links:
AICoffeeBreakQuiz: https://www.youtube.com/c/AICoffeeBreak/community
Twitter: https://twitter.com/AICoffeeBreak
Reddit: https://www.reddit.com/r/AICoffeeBreak/
YouTube: https://www.youtube.com/AICoffeeBreak

#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research

Video "Data BAD | What Will it Take to Fix Benchmarking for NLU?" from the channel AI Coffee Break with Letitia
Video information
Published: October 10, 2021, 17:00:12
Duration: 00:12:56