Lecture 1.2: Regression, classification and loss functions
slides: dlvu.github.io
In the first lecture, we start by reviewing the basics. We expect you to know these already, but it helps to review them and to establish the names and notation we use.
The second video of the lecture shows how we do classification and regression with neural networks, and which loss functions we use. Although neural networks aren't that popular for classical machine learning tasks, these loss functions are very important to understand.
We also touch on the maximum likelihood principle, the principle from which many of our loss functions are derived.
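As a quick illustration of the kinds of losses the lecture covers, here is a minimal sketch of the two standard choices: mean-squared error for regression and cross-entropy for classification. (This sketch assumes the common maximum-likelihood view: MSE corresponds to Gaussian noise on the targets, and cross-entropy to a categorical output distribution; the function names are ours, not from the lecture.)

```python
import math

def mse_loss(predictions, targets):
    # Mean-squared error: the maximum-likelihood loss for regression
    # under a Gaussian noise assumption on the targets.
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

def cross_entropy_loss(probs, target_class):
    # Negative log-probability of the true class: the maximum-likelihood
    # loss for classification with a categorical output distribution.
    return -math.log(probs[target_class])

# Regression: predictions [2.5, 0.0] against targets [3.0, -0.5]
print(mse_loss([2.5, 0.0], [3.0, -0.5]))        # 0.25
# Classification: the model assigns probability 0.7 to the true class
print(cross_entropy_loss([0.1, 0.7, 0.2], 1))   # -log(0.7) ≈ 0.357
```

In practice a deep learning framework computes these for you, but writing them out makes the maximum-likelihood connection concrete: both are negative log-likelihoods under a particular model of the output.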
lecturer: Peter Bloem
Video "Lecture 1.2: Regression, classification and loss functions" from the DLVU channel