The Pitfall Of Using Cross-Validation For Model Selection In Machine Learning
Most explanations of cross-validation stop too early.
They tell you that cross-validation gives a reliable estimate of test error, and for a fixed model that's true.
But in practice, we don’t evaluate just one model. We evaluate many. We tune hyperparameters. We compare models. And then we pick the one with the lowest cross-validation error.
That’s where the problem begins. In this video, I explain:
What cross-validation actually estimates
Why it works for a fixed model
What changes when we perform model selection
Why taking the minimum of many estimates introduces bias
The difference between training optimism and selection bias
Key idea
Cross-validation removes the optimism that comes from evaluating a model on the same data it was trained on,
BUT it does not remove the bias introduced by model selection.
We are not overfitting the data. We are overfitting the estimates.
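The key idea above can be illustrated with a minimal simulation (a sketch, not taken from the video): give many candidate models the *same* true error, add independent noise to each cross-validation estimate, and compare the average estimate for a fixed model against the average of the minimum over all candidates. The constants (`TRUE_ERROR`, `NOISE`, `N_MODELS`) are arbitrary illustrative choices.

```python
import random

random.seed(0)

TRUE_ERROR = 0.5   # every candidate model has the same true test error
NOISE = 0.05       # standard deviation of the CV estimate around the true error
N_MODELS = 20      # number of candidate models compared
N_TRIALS = 10_000  # repetitions to average out the noise

single, selected = [], []
for _ in range(N_TRIALS):
    # one noisy CV estimate per candidate model
    estimates = [random.gauss(TRUE_ERROR, NOISE) for _ in range(N_MODELS)]
    single.append(estimates[0])      # CV estimate of one fixed model
    selected.append(min(estimates))  # CV estimate of the selected (best-looking) model

avg_single = sum(single) / N_TRIALS
avg_selected = sum(selected) / N_TRIALS
print(f"fixed model:    mean CV estimate = {avg_single:.3f}")   # close to TRUE_ERROR
print(f"selected model: mean CV estimate = {avg_selected:.3f}") # below TRUE_ERROR
```

For a fixed model the estimate is unbiased, but the minimum over 20 equally good models is systematically optimistic, even though no model actually improved: the selection step overfits the noisy estimates, not the data.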
This video is part of a deeper series on the mathematical foundations of machine learning, including:
empirical risk minimisation
generalisation
model selection
statistical learning theory
If you want to understand what machine learning is really doing under the hood, this channel is for you.
Video "The Pitfall Of Using Cross-Validation For Model Selection In Machine Learning" from the channel ML & AI: Foundations & Methods
Video information
April 6, 2026, 13:06:27
Duration: 00:09:50