Few-Shot Learning (1/3): Basic Concepts
Next video: https://youtu.be/4S-XDefSjTM
This lecture introduces the basic concepts of few-shot learning and meta-learning, the definition of "way" and "shot", and two commonly used datasets (Omniglot and Mini-ImageNet).
Slides: https://github.com/wangshusen/DeepLearning
Lectures on few-shot learning:
1. Basic concepts: https://youtu.be/hE7eGew4eeg
2. Siamese networks: https://youtu.be/4S-XDefSjTM
3. Pretraining and fine-tuning: https://youtu.be/U6uFOIURcD0
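To make the "way" and "shot" terminology concrete: an N-way K-shot task asks a model to classify queries among N classes, given only K labeled support examples per class. The sketch below samples one such episode from a labeled dataset; the function name, the toy data, and the query-set size are illustrative assumptions, not code from the lecture.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5):
    """Sample one N-way K-shot episode from {class_name: [examples]}.

    Returns a support set (K labeled examples per class) and a
    query set (q_queries examples per class to classify).
    """
    classes = random.sample(sorted(dataset), n_way)  # the N "ways" (classes)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(dataset[cls], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]  # the K "shots"
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy stand-in for Omniglot/Mini-ImageNet: 10 classes, 20 examples each.
data = {f"class{i}": [f"img{i}_{j}" for j in range(20)] for i in range(10)}
support, query = sample_episode(data, n_way=5, k_shot=1, q_queries=5)
print(len(support), len(query))  # 5-way 1-shot: 5 support examples, 25 queries
```

Meta-learning trains over many such randomly sampled episodes so the model learns to adapt from the small support set rather than memorizing fixed classes.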
Video "Few-Shot Learning (1/3): Basic Concepts" from the channel Shusen Wang
Other videos from this channel:
- RL-1A: Random Variables, Observations, Random Sampling
- RL-1B: State, Action, Reward, Policy, State Transition
- RL-1C: Randomness in MDP, Agent-Environment Interaction
- RL-1D: Rewards and Returns
- RL-1G: Summary
- 2-1: Array, Vector, and List: Comparisons
- 2-2: Binary Search
- 3-1: Insertion Sort
- 5-1: Matrix basics: additions, multiplications, time complexity analysis
- 5-2: Dense Matrices: row-major order, column-major order
- 6-1: Binary Tree Basics
- 17-1: Monte Carlo Algorithms
- Self-Attention for RNN (1.25x speed recommended)
- Attention for RNN Seq2Seq Models (1.25x speed recommended)
- Transformer Model (1/2): Attention Layers
- Transformer Model (2/2): Build a Deep Neural Network (1.25x speed recommended)
- BERT for pretraining Transformers
- Vision Transformer for Image Classification
- Few-Shot Learning (2/3): Siamese Networks
- Few-Shot Learning (3/3): Pretraining + Fine-tuning