Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 14 – Transformers and Self-Attention
Professor Christopher Manning, Stanford University, Ashish Vaswani & Anna Huang, Google
http://onlinehub.stanford.edu/
Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
To follow along with the course schedule and syllabus, visit: http://web.stanford.edu/class/cs224n/index.html#schedule
To get the latest news on Stanford’s upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html
To view all online courses and programs offered by Stanford, visit: http://online.stanford.edu
Video from the stanfordonline channel.
Other videos from the channel:
Lecture: NLP Architectures, Transformer, BERT (10.04.2020)
Vision Transformer (ViT) - An Image is Worth 16x16 Words | Paper Explained
Transformer Neural Networks - EXPLAINED! (Attention is all you need)
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 15 – Natural Language Generation
CS480/680 Lecture 19: Attention and Transformer Networks
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 13 – Contextual Word Embeddings
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 8 – Translation, Seq2Seq, Attention
Colin Raffel: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 7 – Vanishing Gradients, Fancy RNNs
Attention is all you need; Attentional Neural Network Models | Łukasz Kaiser | Masterclass
Attention Is All You Need
Lecture 12.1 Self-attention
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
Language Learning with BERT - TensorFlow and Deep Learning Singapore
The Narrated Transformer Language Model
AI Language Models & Transformers - Computerphile
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 6 – Language Models and RNNs
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 12 – Subword Models