Few-Shot Learning (2/3): Siamese Networks
Next Video: https://youtu.be/U6uFOIURcD0
This lecture introduces the Siamese network. It measures similarities or distances in the feature space and thereby solves few-shot learning.
Slides: https://github.com/wangshusen/DeepLearning
Lectures on few-shot learning:
1. Basic concepts: https://youtu.be/hE7eGew4eeg
2. Siamese networks: https://youtu.be/4S-XDefSjTM
3. Pretraining and fine-tuning: https://youtu.be/U6uFOIURcD0
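The core idea described above, a shared embedding plus a distance in feature space, can be sketched in a few lines. This is a minimal illustration, not the lecture's actual model: the linear-layer embedding, the random weights, and the `"cat"`/`"dog"` support examples are all assumptions standing in for a trained CNN feature extractor and real images.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))  # shared weights, used for BOTH inputs

def embed(x):
    """Shared embedding f(x): one linear layer + ReLU, a stand-in
    for the CNN feature extractor a real Siamese network would use."""
    return np.maximum(W @ x, 0.0)

def similarity(x1, x2):
    """Negative Euclidean distance in feature space; larger = more similar."""
    return -np.linalg.norm(embed(x1) - embed(x2))

# One-shot classification: assign the query to the support example
# whose embedding lies closest in the feature space.
support = {"cat": rng.standard_normal(32), "dog": rng.standard_normal(32)}
query = support["cat"] + 0.01 * rng.standard_normal(32)  # a slightly perturbed "cat"
pred = max(support, key=lambda label: similarity(query, support[label]))
print(pred)
```

Because both inputs pass through the same `embed` function, a small perturbation of the "cat" support example stays close to it in feature space, so the query is classified as "cat" even though only one example per class is available.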
Video "Few-Shot Learning (2/3): Siamese Networks" from the channel Shusen Wang