
23. Gradient Boosting

Gradient boosting is an approach to "adaptive basis function modeling", in which we learn a linear combination of M basis functions that are themselves learned from a base hypothesis space H. Gradient boosting can do ERM with any subdifferentiable loss function over any base hypothesis space on which we can do regression. Regression trees are the most commonly used base hypothesis space. It is important to note that the "regression" in "gradient boosted regression trees" (GBRTs) refers to how we fit the basis functions, not to the overall loss function: GBRTs can be used for classification and conditional probability modeling. GBRTs are among the most dominant methods in competitive machine learning (e.g. Kaggle competitions).
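As a concrete illustration of the loop, here is a minimal sketch of gradient boosted regression trees under squared-error loss, using scikit-learn's DecisionTreeRegressor as the base regressor. The helper names (fit_gbrt, predict_gbrt) and hyperparameter values are illustrative assumptions, not code from the lecture:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbrt(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    """Fit f(x) = f0 + sum_m nu * h_m(x) by functional gradient descent (squared loss)."""
    f0 = np.mean(y)                       # constant initial prediction (argmin of squared loss)
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        # Pseudo-residuals: negative gradient of 0.5*(y - f)^2 w.r.t. f at the current
        # predictions; for squared loss this is simply y - pred.
        residuals = y - pred
        # Fit a regression tree to the pseudo-residuals (the "regression" in GBRT).
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # Take a small step in the direction of the fitted basis function.
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict_gbrt(f0, trees, X, learning_rate=0.1):
    return f0 + learning_rate * np.sum([t.predict(X) for t in trees], axis=0)

# Toy usage on a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)
f0, trees = fit_gbrt(X, y)
print(np.mean((predict_gbrt(f0, trees, X) - y) ** 2))  # training MSE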

If the base hypothesis space H has a nice parameterization (say, differentiable in a certain sense), then we may be able to use standard gradient-based optimization methods directly; in fact, neural networks may be considered in this category. However, if the base hypothesis space H consists of trees, then no such parameterization exists. This is where gradient boosting is really needed.
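The point is that the gradient is taken with respect to the function values (the current predictions), never with respect to the tree's split parameters, so swapping in a different subdifferentiable loss only changes the pseudo-residual computation. A hedged variation of the sketch above for binary classification with logistic loss (labels y in {0, 1}; helper names are again illustrative):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbrt_logistic(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    """The model outputs a score f(x), with P(y=1|x) = sigmoid(f(x))."""
    score = np.zeros(len(y))              # start from f = 0, i.e. P(y=1|x) = 0.5
    trees = []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-score))  # current probabilities
        # Negative gradient of the logistic loss w.r.t. the score is y - p.
        pseudo_residuals = y - p
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, pseudo_residuals)     # regression on pseudo-residuals, exactly as before
        score += learning_rate * tree.predict(X)
        trees.append(tree)
    return trees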

For practical applications, it would be worth checking out the GBRT implementations in XGBoost and LightGBM.
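If those libraries are installed, their scikit-learn-style wrappers are the quickest way to try GBRTs on a dataset. A quick-start sketch (assumes pip install xgboost lightgbm scikit-learn; hyperparameter values are illustrative, not tuned recommendations):

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (XGBRegressor(n_estimators=200, learning_rate=0.1, max_depth=4),
              LGBMRegressor(n_estimators=200, learning_rate=0.1)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))  # R^2 on held-out data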

Video: 23. Gradient Boosting, from the Inside Bloomberg channel
Video information: posted 11 July 2018, 18:10:26; duration 01:24:35