Week 14 – Lecture: Structured prediction with energy based models
Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Yann LeCun
Week 14: http://bit.ly/pDL-en-14
0:00:00 – Week 14 – Lecture
LECTURE Part A: http://bit.ly/pDL-en-14-1
In this section, we discussed structured prediction. We first introduced the energy-based factor graph and efficient inference over it, then gave some examples of simple energy-based factor graphs with “shallow” factors. Finally, we discussed the Graph Transformer Net.
0:00:25 – Structured Prediction, Energy based factor graphs, Sequence Labeling
0:18:06 – Efficient Inference for Energy-Based Factor Graph and Some Simple Energy-Based Factor Graphs
0:43:30 – Graph Transformer Net
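The efficient-inference segment above centers on finding the lowest-energy label sequence in a chain-structured energy-based factor graph. As a minimal sketch (not the lecture's own code), min-sum Viterbi inference on a toy chain with unary and pairwise energy tables might look like this; the array shapes and function name are illustrative assumptions:

```python
import numpy as np

def viterbi_min_energy(unary, pairwise):
    """Min-sum (Viterbi) inference on a chain energy-based factor graph.

    unary:    (T, K) array, unary[t, k] = energy of label k at position t
    pairwise: (K, K) array, pairwise[i, j] = energy of transition i -> j
    Returns the minimum total energy and the arg-min label sequence.
    """
    T, K = unary.shape
    cost = unary[0].copy()               # best energy ending in each label
    back = np.zeros((T, K), dtype=int)   # back-pointers for decoding
    for t in range(1, T):
        # total[i, j] = best energy ending at t-1 in label i,
        # plus the transition energy i -> j and the unary energy of j at t
        total = cost[:, None] + pairwise + unary[t][None, :]
        back[t] = total.argmin(axis=0)
        cost = total.min(axis=0)
    # Backtrack from the best final label to recover the full sequence.
    labels = [int(cost.argmin())]
    for t in range(T - 1, 0, -1):
        labels.append(int(back[t, labels[-1]]))
    labels.reverse()
    return float(cost.min()), labels
```

Because the energy decomposes over chain factors, this dynamic program finds the exact minimum in O(T·K²) instead of enumerating all Kᵀ sequences.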
LECTURE Part B: http://bit.ly/pDL-en-14-2
The second part of the lecture further discusses the application of graphical-model methods to energy-based models. After spending some time comparing different loss functions, we discuss the application of the Viterbi algorithm and the forward algorithm to graph transformer networks. We then turn to the Lagrangian formulation of backpropagation, and finally to variational inference for energy-based models.
1:00:22 – Comparing Losses and the start of language models as graphs
1:15:18 – Forward algorithm in Graph Transformer Networks
1:32:53 – Lagrangian formulation of back prop and neural ODE
1:48:42 – Variational Inference in terms of Energy
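The forward-algorithm segment replaces Viterbi's hard minimum with a "soft minimum": instead of the single best path's energy, it computes the free energy, the negative log of the summed exponentiated negative energies over all paths. A minimal sketch under the same toy chain setup as above (names and shapes are illustrative assumptions, not the lecture's code):

```python
import numpy as np

def forward_free_energy(unary, pairwise):
    """Forward algorithm on a chain energy-based factor graph.

    Computes the free energy F = -log sum_paths exp(-E(path)),
    i.e. the log-sum-exp 'soft minimum' over all label sequences.
    unary:    (T, K) array of per-position label energies
    pairwise: (K, K) array of transition energies
    """
    T, K = unary.shape
    alpha = -unary[0]  # log of unnormalized forward scores per label
    for t in range(1, T):
        # Accumulate over the previous label with a stabilized log-sum-exp.
        scores = alpha[:, None] - pairwise - unary[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return float(-(m + np.log(np.exp(alpha - m).sum())))
```

The free energy lower-bounds the Viterbi minimum (summing over paths can only increase the total score), and as the energies are scaled up the soft minimum converges to the hard one.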
Video “Week 14 – Lecture: Structured prediction with energy based models” from the channel Alfredo Canziani