Lecture 10 | Recurrent Neural Networks
In Lecture 10 we discuss the use of recurrent neural networks for modeling sequence data. We show how recurrent neural networks can be used for language modeling and image captioning, and how soft spatial attention can be incorporated into image captioning models. We discuss different architectures for recurrent neural networks, including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks.
Keywords: Recurrent neural networks, RNN, language modeling, image captioning, soft attention, LSTM, GRU
Slides: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture10.pdf
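The core idea of the lecture is the vanilla RNN recurrence, which reuses one set of weights at every timestep: h_t = tanh(W_hh h_{t-1} + W_xh x_t + b). Below is a minimal NumPy sketch of that recurrence (an illustration, not the lecture's own code; the names `rnn_step`, `W_xh`, `W_hh` and the dimensions are arbitrary choices for the example):

```python
import numpy as np

def rnn_step(x, h_prev, W_xh, W_hh, b):
    # One timestep of a vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)
    return np.tanh(W_xh @ x + W_hh @ h_prev + b)

rng = np.random.default_rng(0)
D, H = 4, 3  # input dimension and hidden-state dimension (arbitrary here)
W_xh = 0.01 * rng.standard_normal((H, D))  # input-to-hidden weights
W_hh = 0.01 * rng.standard_normal((H, H))  # hidden-to-hidden weights
b = np.zeros(H)

# Unroll over a short sequence; the same weights are applied at every step
h = np.zeros(H)
for x in rng.standard_normal((5, D)):
    h = rnn_step(x, h, W_xh, W_hh, b)

print(h.shape)  # the final hidden state has shape (H,)
```

LSTM and GRU cells, covered later in the lecture, replace this single tanh update with gated updates that make gradients easier to propagate through long sequences.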
--------------------------------------------------------------------------------------
Convolutional Neural Networks for Visual Recognition
Instructors:
Fei-Fei Li: http://vision.stanford.edu/feifeili/
Justin Johnson: http://cs.stanford.edu/people/jcjohns/
Serena Yeung: http://ai.stanford.edu/~syyeung/
Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This lecture collection is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. From this lecture collection, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.
Website:
http://cs231n.stanford.edu/
For additional learning opportunities please visit:
http://online.stanford.edu/
Video: Lecture 10 | Recurrent Neural Networks, from the Stanford University School of Engineering channel
Video information
Published: August 11, 2017, 22:02:58
Duration: 01:13:09