17-1: Monte Carlo Algorithms
Next video: https://youtu.be/xaSBvljOQkc
Monte Carlo refers to a class of algorithms that rely on repeated random sampling to obtain numerical results. This lecture teaches Monte Carlo methods using five examples:
0:15 Uniform sampling for estimating π
7:37 Buffon's needle problem
13:20 Area of a region
19:39 Monte Carlo integration
29:29 Estimating expectations
Slides: https://github.com/wangshusen/AdvancedAlgorithms
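The first example in the lecture, estimating π by uniform sampling, can be sketched as follows. This is a minimal illustration of the idea, not code from the lecture or slides; the function name and parameters are chosen here for clarity:

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by uniformly sampling points in the unit square.

    A point (x, y) with x, y ~ Uniform(0, 1) falls inside the quarter
    unit circle with probability pi/4, so 4 * (hit fraction) -> pi
    as n_samples grows.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

print(estimate_pi(1_000_000))  # approaches 3.14159... for large n
```

The same pattern, sampling uniformly and averaging an indicator or integrand, underlies the other examples in the lecture (area of a region, Monte Carlo integration, estimating expectations); the estimate's standard error shrinks at the rate 1/√n.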
Video 17-1: Monte Carlo Algorithms, from the channel Shusen Wang