Machine Translation for a 1000 languages – Paper explained

We explain the new polyglot model from Google Research that can translate between 1,000 languages! No need to read the long research paper yourself: we explain and summarize it at a high level.

SPONSOR: Weights & Biases 👉 https://wandb.me/ai-coffee-break

Check out our daily #MachineLearning Quiz Questions: https://www.youtube.com/c/AICoffeeBreak/community
➡️ AI Coffee Break Merch! 🛍️ https://aicoffeebreak.creator-spring.com/

Paper 📜: Bapna, Ankur, Isaac Caswell, Julia Kreutzer, Orhan Firat, Daan van Esch, Aditya Siddhant, Mengmeng Niu et al. "Building Machine Translation Systems for the Next Thousand Languages." arXiv preprint arXiv:2205.03983 (2022). https://arxiv.org/abs/2205.03983
🔗 Facebook’s response: 200-language machine translation, and it’s open source: https://arxiv.org/abs/2207.04672

Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
Don Rosenthal, Dres. Trost GbR, banana.dev -- Kyle Morris, Julián Salazar, Edvard Grødem, Vignesh Valliappan, Kevin Tsai, Mutual Information, Mike Ton

Outline:
00:00 Machine translation for a 1000 languages
00:42 Weights & Biases (Sponsor)
02:00 Problems with many languages
04:15 Collecting data for 1k languages
11:46 Building MT models
14:13 Results on a thousand languages

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, pay us a coffee to help with our Coffee Bean production! ☕
Patreon: https://www.patreon.com/AICoffeeBreak
Ko-fi: https://ko-fi.com/aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

🔗 Links:
AICoffeeBreakQuiz: https://www.youtube.com/c/AICoffeeBreak/community
Twitter: https://twitter.com/AICoffeeBreak
Reddit: https://www.reddit.com/r/AICoffeeBreak/
YouTube: https://www.youtube.com/AICoffeeBreak

#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research
Video editing: Nils Trost

Video “Machine Translation for a 1000 languages – Paper explained” from the channel AI Coffee Break with Letitia
Video information:
Published: July 18, 2022, 16:48:40
Duration: 00:17:39