
Training Neural Networks with Activation Functions

🔍 Understanding Activation Functions in Neural Networks | Sigmoid, ReLU, Tanh & More
👋 Hi everyone, I’m Nika! In this session, we’re diving deep into one of the core building blocks of neural networks — activation functions.

🌟 Whether you’re a beginner or brushing up on your deep learning fundamentals, this video will help you understand:

✅ Why activation functions are essential in neural networks
✅ How they introduce non-linearity and control neuron output
✅ Pros and cons of common activation functions (see the sketch after this list):
    • Sigmoid
    • ReLU (Rectified Linear Unit)
    • Leaky ReLU
    • Tanh
✅ The vanishing gradient problem (with a practical example using Fashion MNIST)
✅ How to split a dataset into mini-batches and train a model on small chunks of data at a time
✅ What happens when you use sigmoid in deep networks
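
💻 As a quick reference, here is a minimal NumPy sketch of the activations listed above (the Colab notebook may define them differently, e.g. via a framework's built-ins):

```python
import numpy as np

def sigmoid(x):
    # Squashes input to (0, 1); saturates for large |x|, which contributes to vanishing gradients
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes input to (-1, 1); zero-centered, but still saturates at the extremes
    return np.tanh(x)

def relu(x):
    # Passes positives through, zeros out negatives; cheap and non-saturating for x > 0
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope for negative inputs to avoid "dead" neurons
    return np.where(x > 0, x, alpha * x)

x = np.linspace(-5, 5, 11)
print(sigmoid(x), tanh(x), relu(x), leaky_relu(x), sep="\n")
```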

🧠 We’ll also discuss how gradients behave during training, and why choosing the right activation function can make or break your model’s learning ability.
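
💻 A small illustration of that point, sketched here in PyTorch (the depth, layer sizes, and dummy loss are illustrative assumptions, not taken from the video): stacking many sigmoid layers shrinks the gradients that reach the earliest layers.

```python
import torch
import torch.nn as nn

# A deliberately deep stack of sigmoid layers to make the effect visible.
layers = []
for _ in range(10):
    layers += [nn.Linear(64, 64), nn.Sigmoid()]
model = nn.Sequential(*layers)

x = torch.randn(32, 64)            # a dummy batch of inputs
loss = model(x).pow(2).mean()      # any scalar loss works for this demo
loss.backward()

# Gradient magnitude per linear layer: earlier layers receive much smaller gradients,
# because each sigmoid scales the backward signal by its derivative (at most 0.25).
for i, layer in enumerate(model):
    if isinstance(layer, nn.Linear):
        print(f"layer {i:2d}: grad norm = {layer.weight.grad.norm():.2e}")
```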

👟 Dataset used: Fashion MNIST – a collection of 28x28 grayscale images of clothing items like t-shirts, sneakers, and trousers.
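
💻 One way to load it in mini-batches, assuming a PyTorch/torchvision setup (the notebook may use Keras or another loader instead):

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download Fashion MNIST and feed it to the model in small batches.
transform = transforms.ToTensor()   # 28x28 grayscale images -> tensors in [0, 1]
train_set = datasets.FashionMNIST(root="data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([64, 1, 28, 28])
print(labels[:10])    # class indices 0-9 (t-shirt/top, trouser, ..., ankle boot)
```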

📅 Next time, we’ll explore ReLU, Leaky ReLU, and custom activation functions in more depth!

Colab: https://colab.research.google.com/drive/1jHhz4JHelhMNlfH5qowDdv-NE_pQHGZU?usp=sharing

Video "Training Neural Networks with Activation Functions" from the AI_INFO channel
