[Own work] MM-SHAP to measure modality contributions
Today we present our own work: MM-SHAP, a metric that measures how much a multimodal encoder uses each modality. Ah, what is multimodality again? 👉 https://youtu.be/jReaoJWdO78
📜 Parcalabescu, Letitia, and Anette Frank. "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks." arXiv preprint arXiv:2212.08158 (2022). https://arxiv.org/abs/2212.08158
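For the curious: a minimal sketch of how the MM-SHAP score is computed. Given per-token Shapley values for one multimodal input (obtaining them, e.g. by masking token subsets, is the expensive part and is described in the paper), the textual degree T-SHAP and visual degree V-SHAP are simply each modality's share of the total absolute contribution. Function and variable names below are illustrative, not taken from our released code.

import numpy as np

def mm_shap(phi_text, phi_image):
    # phi_text / phi_image: Shapley values of the text / image tokens.
    # Each modality's MM-SHAP score is its share of the absolute Shapley mass.
    text_mass = np.abs(phi_text).sum()
    image_mass = np.abs(phi_image).sum()
    total = text_mass + image_mass
    return text_mass / total, image_mass / total  # (T-SHAP, V-SHAP), sums to 1

# Toy example: text tokens dominate, so a T-SHAP near 1 would signal unimodal collapse.
t_shap, v_shap = mm_shap(np.array([0.4, -0.3, 0.2]), np.array([0.05, -0.05]))
print(f"T-SHAP = {t_shap:.2f}, V-SHAP = {v_shap:.2f}")  # T-SHAP = 0.90, V-SHAP = 0.10

Because both scores are proportions rather than accuracy numbers, the metric stays performance-agnostic: it tells you how the model distributes its reliance across modalities, regardless of whether the prediction is right.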
📺 VeLO trained optimizers: https://youtu.be/9a6PQJxzUpM
📺 Watermarking Large Language Models: https://youtu.be/-vToUx5SDW4
📺 Paella text-to-image diffusion model: https://youtu.be/6zeLSANd41k
❓Check out our #MachineLearning Quiz Questions: https://www.youtube.com/c/AICoffeeBreak/community
➡️ AI Coffee Break Merch! 🛍️ https://aicoffeebreak.creator-spring.com/
Outline:
00:00 Paper for ACL 2023 Toronto
00:24 Vision and Language Transformers
01:05 Unimodal collapse
02:46 MM-SHAP
04:21 Not all models use modalities to the same extent
06:02 Outro and Final words
Thanks to our Patrons who support us in Tiers 2, 3, and 4: 🙏
Dres. Trost GbR, Siltax, Edvard Grødem, Vignesh Valliappan, Mutual Information, Mike Ton
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, pay us a coffee to help with our Coffee Bean production! ☕
Patreon: https://www.patreon.com/AICoffeeBreak
Ko-fi: https://ko-fi.com/aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔗 Links:
AICoffeeBreakQuiz: https://www.youtube.com/c/AICoffeeBreak/community
Twitter: https://twitter.com/AICoffeeBreak
Reddit: https://www.reddit.com/r/AICoffeeBreak/
YouTube: https://www.youtube.com/AICoffeeBreak
#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research
Video editing: Nils Trost