Lecture 6 part 1: ADMM (basic definitions and properties)
This is Lecture 6, part 1, of the KTH-EP3260 course Fundamentals of Machine Learning over Networks (MLoNs), lectured by Euhanna Ghadimi. This lecture reviews the basics and recent advances of the alternating direction method of multipliers (ADMM) for large-scale machine learning problems. In particular, it covers the fundamentals of dual ascent, dual decomposition, proximal methods, the augmented Lagrangian, ADMM, convergence results, and hyperparameter tuning.
Slides are available at the course website:
https://sites.google.com/view/mlons/course-materials
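The lecture applies the topics above (augmented Lagrangian, proximal operators, ADMM updates) to large-scale problems. As a concrete illustration, here is a minimal sketch of ADMM for the lasso problem min_x 0.5·||Ax − b||² + λ·||x||₁, using the standard splitting x = z; the function name, step parameter ρ, and iteration count are illustrative choices, not taken from the lecture slides:

```python
import numpy as np

def soft_threshold(v, k):
    # Proximal operator of k*||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM on the
    split  min_{x,z} 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Cache the Cholesky factor used by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: minimize the augmented Lagrangian in x (ridge-type solve).
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step on the l1 term.
        z = soft_threshold(x + u, lam / rho)
        # Dual ascent on the consensus constraint x = z.
        u = u + x - z
    return z
```

The three updates mirror the lecture's outline: the x-step is a smooth augmented-Lagrangian minimization, the z-step is a proximal (soft-thresholding) operator, and the u-step is dual ascent on the constraint.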
Video: Lecture 6 part 1: ADMM (basic definitions and properties), from the MLRG KTH channel.
Other videos from this channel:

- Lecture 1 part 2: Introduction
- Wireless for Machine Learning - Tutorial
- Panel Discussion
- Lecture 3 part 2: Centralized Convex ML (part 2: stochastic algorithms)
- Lecture 4 part 1: Centralized Nonconvex ML (basic definitions and special structures)
- Seminar 13: Regret analysis for sequential decision making and online learning, part 1
- Lecture 8 part 2: Deep Neural Networks
- Sindri Magnússon: On the convergence of limited communication gradient methods
- Lecture 7 part 2: Communication efficiency (general graph)
- Seminar 6: Deep scattering transforms
- Pascal Bianchi: A dynamical system viewpoint on stochastic approximation methods
- Seminar 9: Coordinate descent optimization methods
- Lecture 8 part 1: Deep Neural Networks
- Lecture 2 part 2: Centralized Convex ML (part 1: deterministic algorithms)
- Lecture 7 part 1: Communication efficiency (master-worker architecture)
- Seminar 10: Fundamentals of deep neural networks
- Lecture 3 part 1: Centralized Convex ML (part 2: stochastic algorithms)
- Seminar 1: Overview of ML techniques
- Hadi Ghauch: Large-scale training for deep neural networks
- Seminar 2: PAC learnability in finite and infinite hypothesis spaces
- Seminar 3: On the approximation capabilities of neural networks