IML17: Grid vs random search for hyperparameter tuning?
In this video I review the benefits of random search over grid search when tuning the hyperparameters of a machine learning model.
I specifically discuss the paper by Bergstra and Bengio, called Random Search for Hyper-Parameter Optimization.
J. Bergstra and Y. Bengio, "Random search for hyper-parameter optimization", Journal of Machine Learning Research 13 (2012), pp. 281-305.
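The paper's central observation is that when only a few hyperparameters really matter, random search tries more distinct values along each important dimension than a grid of the same size. A minimal sketch of the comparison using scikit-learn follows; the dataset, the Ridge model, and the alpha range are illustrative assumptions, not taken from the video.

```python
# Grid search vs random search under the same budget of 9 trials.
# Model, data, and parameter ranges are illustrative assumptions.
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

# Grid search: 9 evenly log-spaced candidate values for alpha.
grid = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 3, 9)}, cv=5)
grid.fit(X, y)

# Random search: the same budget of 9 trials, but alpha is drawn from a
# log-uniform distribution, so every trial probes a fresh value.
rand = RandomizedSearchCV(
    Ridge(), {"alpha": loguniform(1e-3, 1e3)}, n_iter=9, cv=5, random_state=0
)
rand.fit(X, y)

print("grid best alpha:  ", grid.best_params_["alpha"])
print("random best alpha:", rand.best_params_["alpha"])
```

With several hyperparameters, the contrast sharpens: a 3-value-per-axis grid over four hyperparameters costs 81 trials yet sees only 3 values of each, while 81 random trials see up to 81 distinct values of each.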
Video "IML17: Grid vs random search for hyperparameter tuning?" from the channel Bevan Smith 2.
Other videos from this channel
![IML14: Understanding parameters and hyperparameters in ML using regularization](https://i.ytimg.com/vi/HGVEV292-7Y/default.jpg)
![Introduction to Reinforcement Learning (3): What is epsilon-greedy?](https://i.ytimg.com/vi/z6-Cz-pElGA/default.jpg)
![IML7: How to evaluate a machine learning model using training/testing and mean squared error.](https://i.ytimg.com/vi/fssBHCVNnNk/default.jpg)
![IML31: Logistic regression (part 4): Maximum likelihood (b)](https://i.ytimg.com/vi/NR9xYRTI_oI/default.jpg)
![IML30: Logistic regression (part 3): Maximum likelihood (a)](https://i.ytimg.com/vi/McmWMv-pufc/default.jpg)
![IML27: Performance metrics in machine learning (part 2): Regression error metrics EXAMPLE](https://i.ytimg.com/vi/AAQulNnOLD0/default.jpg)
![IML26: Performance metrics in machine learning (part 1): Regression error metrics](https://i.ytimg.com/vi/7HtI1D1aafo/default.jpg)
![Introduction to Reinforcement Learning (4): How to create a reinforcement learning environment](https://i.ytimg.com/vi/Mq6AMGczt70/default.jpg)
![IML12: Why do we need k-fold cross-validation in machine learning? (part 2)](https://i.ytimg.com/vi/g31z6HyyoI0/default.jpg)
![IML20: Linear regression (part 1): Every machine learning student should begin here](https://i.ytimg.com/vi/soE96K27wyo/default.jpg)
![Trends in machine learning: Artificial intelligence in higher education](https://i.ytimg.com/vi/GxTTE9MWuQQ/default.jpg)
![IML32: Logistic regression (part 5): Maximum likelihood (c)](https://i.ytimg.com/vi/PnnmrtGhd2I/default.jpg)
![Introduction to Machine Learning IML1: What is learning?](https://i.ytimg.com/vi/S7_xqXdh7UE/default.jpg)
![IML23: Linear Regression (Part 4): How to solve least squares analytically, by hand!!](https://i.ytimg.com/vi/Jot-sYbzqIQ/default.jpg)
![Introduction to Machine Learning IML6: Measuring model performance,(generalizability and validation)](https://i.ytimg.com/vi/ZCNY0lgmoEg/default.jpg)
![IML24: Linear regression (part 5): How to solve least squares using gradient descent](https://i.ytimg.com/vi/snKw00089Vs/default.jpg)
![IML16: After obtaining optimal hyperparameters, what do I do now?](https://i.ytimg.com/vi/3y9fyV4lGTg/default.jpg)
![IML28: Logistic regression (part 1): Introduction](https://i.ytimg.com/vi/vOl_pKenYTY/default.jpg)
![IML13: K-fold cross-validation is better than train test split](https://i.ytimg.com/vi/mEiIShH6SNc/default.jpg)
![Introduction to Machine Learning IML4: Supervised vs Unsupervised Learning (part 2)](https://i.ytimg.com/vi/WlLVDxor-aY/default.jpg)
![How I got started in machine learning and data science](https://i.ytimg.com/vi/meUYyprLgT8/default.jpg)