Nadia Drenska - A PDE Interpretation of Prediction with Expert Advice

Presentation given by Nadia Drenska on December 9th, 2020, at the One World Seminar on the Mathematics of Machine Learning, on the topic "A PDE Interpretation of Prediction with Expert Advice".

Abstract: We study the problem of prediction of binary sequences with expert advice in the online setting, a classic problem in online machine learning. We interpret the binary sequence as the price history of a stock and view the predictor as an investor, which converts the task into a stock prediction problem. In this framework, an investor, who predicts the daily movements of a stock, and an adversarial market, who controls the stock, play against each other over N turns. The investor combines the predictions of n ≥ 2 experts in order to decide how much to invest at each turn, and aims to minimize their regret with respect to the best-performing expert at the end of the game. We consider the problem with history-dependent experts, in which each expert uses the previous d days of market history to make their predictions. The prediction problem is then played (in part) over a discrete graph called the d-dimensional de Bruijn graph.
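To make the setup concrete, here is a minimal Python sketch of the game under simplifying assumptions: the adversarial market is replaced by a fair coin flip, and the investor naively averages the experts' bets (the talk derives asymptotically optimal strategies; this averaging rule is only a placeholder). The function simulate_game and all its parameters are illustrative, not from the talk.

import itertools
import random

def simulate_game(N=10_000, d=2, n=4, seed=0):
    """Toy simulation of prediction with history-dependent expert advice.
    The state (the last d market moves) walks along the edges of the
    d-dimensional de Bruijn graph on binary strings."""
    rng = random.Random(seed)
    histories = list(itertools.product((0, 1), repeat=d))
    # Each expert is a fixed map from the last d moves to a bet in {-1, +1}.
    experts = [{h: rng.choice((-1, 1)) for h in histories} for _ in range(n)]

    history = tuple(rng.choice((0, 1)) for _ in range(d))
    expert_gains = [0.0] * n
    investor_gain = 0.0

    for _ in range(N):
        preds = [e[history] for e in experts]
        # Placeholder investor: average the experts' bets.
        bet = sum(preds) / n
        move = rng.choice((-1, 1))  # i.i.d. market stands in for the adversary
        investor_gain += bet * move
        for i, p in enumerate(preds):
            expert_gains[i] += p * move
        # De Bruijn step: drop the oldest bit, append the new market move.
        history = history[1:] + ((1 if move == 1 else 0),)

    # Regret relative to the best-performing expert after N rounds.
    return max(expert_gains) - investor_gain

print(simulate_game())

Against a random market, the returned regret grows on the order of the square root of N, which is the scaling behind the continuum limit discussed next.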

We focus on an appropriate continuum limit and, using methods from optimal control, graph theory, and partial differential equations, discuss strategies for the investor and the adversarial market. We prove that the value function for this game, rescaled appropriately, converges as N → ∞ at a rate of O(N^{-1/2}) (for C^4 payoff functions) to the viscosity solution of a nonlinear degenerate parabolic PDE, which can be understood as the Hamilton-Jacobi-Isaacs equation of the two-person game. As a result, we are able to deduce asymptotically optimal strategies for the investor.
This is joint work with Robert Kohn and Jeff Calder.
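For orientation, here is a schematic LaTeX sketch of the scaling behind the convergence statement. The O(N^{-1/2}) rate and the C^4 assumption are taken from the abstract; the particular rescaling u_N, the terminal payoff g, the nonlinearity F, and the suppressed dependence on the de Bruijn-graph state are generic placeholders, not the paper's exact notation:

% Schematic only: F is the game's Hamiltonian, specified in the paper.
\[
  u_N(x,t) \;=\; \frac{1}{\sqrt{N}}\, V_{\lfloor Nt \rfloor}\!\big(\sqrt{N}\,x\big),
  \qquad
  \| u_N - u \|_{\infty} \;=\; O\!\big(N^{-1/2}\big) \quad (N \to \infty),
\]
where $x$ records the rescaled regrets against the $n$ experts and $u$ is the
viscosity solution of a degenerate parabolic Hamilton--Jacobi--Isaacs equation
of the schematic form
\[
  \partial_t u + F\big(x, Du, D^2 u\big) = 0,
  \qquad u(x,1) = g(x),
\]
with terminal data $g$ given by the payoff function.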

Video: Nadia Drenska - A PDE Interpretation of Prediction with Expert Advice, from the channel One world theoretical machine learning.
Video information
Uploaded: December 10, 2020, 23:56:09
Duration: 00:51:23
Other videos from the channel
Lukasz Szpruch - Mean-Field Neural ODEs, Relaxed Control and Generalization Errors
Matthew Colbrook - Smale’s 18th Problem and the Barriers of Deep Learning
Yu Bai - How Important is the Train-Validation Split in Meta-Learning?
Anna Korba - Kernel Stein Discrepancy Descent
Anirbit Mukherjee - Provable Training of Neural Nets With One Layer of Activation
Kevin Miller - Ensuring Exploration and Exploitation in Graph-Based Active Learning
Theo Bourdais - Computational Hypergraph Discovery, a Gaussian Process framework
Yaoqing Yang - Predicting & improving generalization by measuring loss landscapes & weight matrices
Konstantinos Spiliopoulos - Mean field limits of neural networks: typical behavior and fluctuations
Matthias Ehrhardt - Bilevel Learning for Inverse Problems
Peter Richtarik - The Resolution of a Question Related to Local Training in Federated Learning
Marcus Hutter - Testing Independence of Exchangeable Random Variables
Yury Korolev - Approximation properties of two-layer neural networks with values in a Banach space
Sophie Langer - Circumventing the curse of dimensionality with deep neural networks
Stephan Mandt - Compressing Variational Bayes: From neural data compression to video prediction
Derek Driggs - Barriers to Deploying Deep Learning Models During the COVID-19 Pandemic
Gal Vardi - Implications of the implicit bias in neural networks
Ziwei Ji - The dual of the margin: improved analyses and rates for gradient descent’s implicit bias
Qi Lei - Predicting What You Already Know Helps: Provable Self-Supervised Learning
Alessandro Scagliotti - Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems