
IML17: Grid vs random search for hyperparameter tuning?

In this video I review the benefits of random search over grid search when tuning the hyperparameters of a machine learning model.

I specifically discuss the paper by Bergstra and Bengio, called Random Search for Hyper-Parameter Optimization.
J. Bergstra and Y. Bengio. "Random search for hyper-parameter optimization". Journal of Machine Learning Research 13 (2012), pp. 281-305.
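The paper's core argument is easy to see in code: for a fixed evaluation budget, random search covers each individual hyperparameter's range more densely than a grid, which matters when only a few hyperparameters drive performance. The sketch below is not from the video; it is a minimal comparison assuming scikit-learn, and the estimator (an SVC), the parameter ranges, and the budget of 16 configurations are illustrative choices, not taken from the talk.

from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Grid search: evaluates every combination on a fixed grid (4 x 4 = 16 fits
# per CV fold), so the budget grows exponentially with the number of
# hyperparameters being tuned.
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]},
    cv=5,
)
grid.fit(X, y)

# Random search: samples the same number of configurations, but from
# continuous distributions, so each hyperparameter takes 16 distinct values
# instead of 4. This is Bergstra and Bengio's main point.
rand = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-1, 1e2), "gamma": loguniform(1e-4, 1e-1)},
    n_iter=16,
    cv=5,
    random_state=0,
)
rand.fit(X, y)

print("grid search best CV score:  ", grid.best_score_)
print("random search best CV score:", rand.best_score_)

With the same number of model fits, the random search explores many more distinct values per hyperparameter, which is why it tends to match or beat grid search as the number of hyperparameters grows.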

Video "IML17: Grid vs random search for hyperparameter tuning?" from the channel Bevan Smith 2.
Video information
Published 10 November 2020, 19:37:56
Duration: 00:21:07
Other videos on this channel
IML14: Understanding parameters and hyperparameters in ML using regularization
Introduction to Reinforcement Learning (3): What is epsilon-greedy?
IML7: How to evaluate a machine learning model using training/testing and mean squared error.
IML31: Logistic regression (part 4): Maximum likelihood (b)
IML30: Logistic regression (part 3): Maximum likelihood (a)
IML27: Performance metrics in machine learning (part 2): Regression error metrics EXAMPLE
IML26: Performance metrics in machine learning (part 1): Regression error metrics
Introduction to Reinforcement Learning (4): How to create a reinforcement learning environment
IML12: Why do we need k-fold cross-validation in machine learning? (part 2)
IML20: Linear regression (part 1): Every machine learning student should begin here
Trends in machine learning: Artificial intelligence in higher education
IML32: Logistic regression (part 5): Maximum likelihood (c)
Introduction to Machine Learning IML1: What is learning?
IML23: Linear Regression (Part 4): How to solve least squares analytically, by hand!!
Introduction to Machine Learning IML6: Measuring model performance (generalizability and validation)
IML24: Linear regression (part 5): How to solve least squares using gradient descent
IML16: After obtaining optimal hyperparameters, what do I do now?
IML28: Logistic regression (part 1): Introduction
IML13: K-fold cross-validation is better than train test split
Introduction to Machine Learning IML4: Supervised vs Unsupervised Learning (part 2)
How I got started in machine learning and data science