
6. L1 & L2 Regularization

We introduce "regularization", our main defense against overfitting. We discuss the equivalence of the penalization and constraint forms of regularization (see Hwk 4 Problem 8 for a precise statement). We compare the regularization paths of L1- and L2-regularized linear least squares regression (i.e. "lasso" and "ridge" regression, respectively), and give a geometric argument for why the lasso often gives "sparse" solutions. Finally, we present "coordinate descent", our second major approach to optimization. When applied to the lasso objective function, coordinate descent takes a particularly clean form and is known as the "shooting algorithm".
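As a rough illustration of the coordinate descent idea described above, here is a minimal sketch of the shooting algorithm for the lasso objective (1/2)||y - Xw||² + λ||w||₁. This is not code from the lecture; the function names and the unscaled form of the objective are assumptions for the sake of the example. Each coordinate update has a closed form via the soft-thresholding operator, which is what makes the lasso case so clean:

```python
import numpy as np

def soft_threshold(rho, lam):
    # Closed-form minimizer of the one-dimensional lasso subproblem:
    # shrink rho toward zero by lam, clipping to exactly zero in between.
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_shooting(X, y, lam, n_iters=100):
    """Cyclic coordinate descent ("shooting algorithm") for
    (1/2) * ||y - X w||^2 + lam * ||w||_1  (unscaled objective assumed)."""
    n, d = X.shape
    w = np.zeros(d)
    z = (X ** 2).sum(axis=0)  # per-coordinate curvature x_j^T x_j
    for _ in range(n_iters):
        for j in range(d):
            # Partial residual with coordinate j's contribution added back
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, lam) / z[j]
    return w
```

Note how sparsity arises directly: whenever the correlation ρⱼ of feature j with the partial residual falls inside [-λ, λ], the update sets wⱼ exactly to zero, in line with the geometric argument above.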

Access the full course at https://bloom.bg/2ui2T4q

Video 6. L1 & L2 Regularization, from the Inside Bloomberg channel
Video information
July 11, 2018, 17:55:27
Duration: 01:26:42