Neural Networks Explained from Scratch using Python

When I started learning about neural networks from scratch a few years ago, it didn't occur to me to just look at some Python code. I found it quite hard to understand all the concepts behind neural networks (e.g. bias, backpropagation, ...). Now I know that it all looks much more complicated written mathematically than it does in code. In this video, I try to give you an intuitive understanding through Python code and detailed animations. I hope it helps you :)

Code:
https://github.com/Bot-Academy/NeuralNetworkFromScratch

Find me on:
Patreon: https://www.patreon.com/botacademy
Discord: https://discord.gg/6fRE4DE
Twitter: https://twitter.com/bot_academy
Instagram: https://www.instagram.com/therealbotacademy/

Citation:
[1] https://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A489568

Additional Notes:
1. You might have noticed that we never used the variable e. There are two reasons for this. First, we would normally use it to calculate 'delta_o', but thanks to a few tricks it is not needed here. Second, it is sometimes helpful to print the average error during training to see whether it decreases.
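As a sketch of what that could look like: the snippet below computes the per-sample mean squared error the way the video's code does (the names `o` for the network output and `l` for the one-hot label follow the linked repo; the toy values here are made up for illustration).

```python
import numpy as np

# Toy stand-ins for one sample: network output `o` and one-hot label `l`.
o = np.array([[0.1], [0.7], [0.2]])
l = np.array([[0.0], [1.0], [0.0]])

# Mean squared error for this sample; averaged over an epoch,
# this value should shrink if training is working.
e = 1 / len(o) * np.sum((o - l) ** 2, axis=0)
print(e.item())
```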

2. To see how the network performs on images not seen during training, you could use only the first 50000 images for training and then evaluate on the remaining 10000 samples. I haven't done that in this video for simplicity. The accuracy, however, shouldn't change much.
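A minimal sketch of that split, assuming the data is loaded as in the video (60000 flattened 28x28 images plus one-hot labels; the zero-filled placeholder arrays here just stand in for the real dataset):

```python
import numpy as np

# Placeholder arrays with the MNIST shapes used in the video:
# 60000 flattened 28x28 images and their one-hot labels.
images = np.zeros((60000, 784), dtype=np.uint8)
labels = np.zeros((60000, 10), dtype=np.uint8)

# Hold out the last 10000 samples; train only on the first 50000.
train_images, test_images = images[:50000], images[50000:]
train_labels, test_labels = labels[:50000], labels[50000:]

print(train_images.shape, test_images.shape)  # (50000, 784) (10000, 784)
```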

3. It seems some people have a hard time understanding the shape lines (e.g. x.shape += (1,)), so let me try to explain:

To create a 1-tuple in Python, we need to write x = (1,). If we just wrote x = (1), the parentheses would only group, and x would be the integer 1.
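You can check this directly in the interpreter:

```python
x = (1)   # parentheses only group here; x is the integer 1
y = (1,)  # the trailing comma is what makes a 1-tuple
print(type(x).__name__, type(y).__name__)  # int tuple
```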

NumPy exposes the shape attribute on arrays. Because the shape of a matrix has to be a tuple like (2, 5) or (2, 4, 7), it is consistent to represent a vector's shape as the 1-tuple (X,) rather than the integer X.

If we use this (X,) vector in matrix multiplications, NumPy treats it as one-dimensional, and later steps such as the outer products in backpropagation need an explicit second dimension. So we add this 'invisible' second dimension of size 1. The line appends (1,) to the (X,) shape, which turns the vector into a matrix of size (X, 1). That's also why it doesn't work with (2,): that would require more values. For example, (5,) and (5, 1) both contain 5 values, while (5, 2) would contain 10 values.

I should have shown the shapes in the shape-information box as (X,) instead of just X; I think that made it more confusing.
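The whole note condenses into a few lines of NumPy (the example values are made up; only the shape manipulation matters):

```python
import numpy as np

x = np.array([1., 2., 3.])
print(x.shape)   # (3,)

# Append the 'invisible' second dimension of size 1:
x.shape += (1,)
print(x.shape)   # (3, 1) -- a column matrix holding the same 3 values

# With an explicit (3, 1) shape, the outer product used in
# backpropagation comes out as a proper matrix:
delta = np.array([[0.5], [0.25]])   # shape (2, 1)
print((delta @ x.T).shape)          # (2, 3)
```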

Credits:
17:08 - End
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Music: Ansia Orchestra - Hack The Planet
Link: https://youtu.be/fthcBrJY5eg
Music provided by: MFY - No Copyright
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

The animations are created with a python library called manim. Manim was first created by Grant Sanderson also known as 3blue1brown (YouTube) and is now actively developed by the manim community. Special thanks to everyone involved in developing the library!
Github: https://github.com/manimcommunity/manim

Contact: smarter.code.yt@gmail.com

Chapters:
00:00 Basics
02:55 Bias
04:00 Dataset
05:25 One-Hot Label Encoding
06:57 Training Loops
08:15 Forward Propagation
10:22 Cost/Error Calculation
12:00 Backpropagation
15:30 Running the Neural Network
16:55 Where to find What
17:17 Outro

Video: Neural Networks Explained from Scratch using Python, from the channel Bot Academy
Video information
Published: 30 January 2021, 15:22:46
Duration: 00:17:38