Yury Korolev - Approximation properties of two-layer neural networks with values in a Banach space

Presentation given by Yury Korolev on 6 October 2021 at the One World Seminar on the Mathematics of Machine Learning, on the topic "Approximation properties of two-layer neural networks with values in a Banach space".

Abstract: Approximation properties of infinitely wide neural networks have been studied by several authors in the last few years. New function spaces have been introduced that consist of functions that can be efficiently (i.e., with dimension-independent rates) approximated by neural networks of finite width. Typically, these functions are assumed to act between Euclidean spaces, with a high-dimensional input space and a lower-dimensional output space. As neural networks gain popularity in inherently infinite-dimensional settings such as inverse problems and imaging, it becomes necessary to analyse the properties of neural networks as nonlinear operators acting between infinite-dimensional spaces. In this talk, I will present dimension-independent Monte-Carlo rates for neural networks acting between Banach spaces with a partial order (vector lattices), where the ReLU nonlinearity will be interpreted as the lattice operation of taking the positive part.
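As a rough sketch of the objects the abstract refers to (the notation is ours, not taken from the talk): an infinitely wide two-layer network can be written as an expectation over random affine features, a width-n network as its n-term Monte-Carlo estimate, and, for networks with values in a vector lattice Y, the ReLU as the lattice operation of taking the positive part. Here X and Y are assumed to be Banach spaces, Y additionally a vector lattice, and mu a probability measure over pairs (A, b) of bounded linear maps A : X -> Y and biases b in Y.

% Hypothetical notation; a sketch of the setting, not the talk's exact definitions.
% Infinite-width network: an expectation over random affine features, with the
% ReLU taken in the lattice Y as the positive part y^+ = y \vee 0.
\[
  f(x) = \mathbb{E}_{(A,b)\sim\mu}\bigl[(Ax + b)^{+}\bigr],
  \qquad (Ax + b)^{+} := (Ax + b) \vee 0 \in Y .
\]
% Width-n network: the n-term Monte-Carlo estimate of that expectation,
\[
  f_{n}(x) = \frac{1}{n}\sum_{i=1}^{n} (A_{i}x + b_{i})^{+},
  \qquad (A_{i}, b_{i}) \overset{\text{i.i.d.}}{\sim} \mu ,
\]
% and the Monte-Carlo heuristic suggests the dimension-independent rate
\[
  \Bigl( \mathbb{E}\,\bigl\| f(x) - f_{n}(x) \bigr\|_{Y}^{2} \Bigr)^{1/2}
  \;\lesssim\; n^{-1/2} .
\]
% Note: in a general Banach space such a rate requires extra structure on Y
% (e.g. Rademacher type 2); this is only a heuristic illustration.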

Video "Yury Korolev - Approximation properties of two-layer neural networks with values in a Banach space" from the channel One world theoretical machine learning.
Video information
7 October 2021, 14:25:03
Duration: 00:39:42
Other videos from this channel
Lukasz Szpruch - Mean-Field Neural ODEs, Relaxed Control and Generalization Errors
Matthew Colbrook - Smale’s 18th Problem and the Barriers of Deep Learning
Yu Bai - How Important is the Train-Validation Split in Meta-Learning?
Anna Korba - Kernel Stein Discrepancy Descent
Anirbit Mukherjee - Provable Training of Neural Nets With One Layer of Activation
Kevin Miller - Ensuring Exploration and Exploitation in Graph-Based Active Learning
Theo Bourdais - Computational Hypergraph Discovery, a Gaussian Process framework
Yaoqing Yang - Predicting & improving generalization by measuring loss landscapes & weight matrices
Konstantinos Spiliopoulos - Mean field limits of neural networks: typical behavior and fluctuations
Nadia Drenska - A PDE Interpretation of Prediction with Expert Advice
Matthias Ehrhardt - Bilevel Learning for Inverse Problems
Peter Richtarik - The Resolution of a Question Related to Local Training in Federated Learning
Marcus Hutter - Testing Independence of Exchangeable Random Variables
Sophie Langer - Circumventing the curse of dimensionality with deep neural networks
Stephan Mandt - Compressing Variational Bayes: From neural data compression to video prediction
Derek Driggs - Barriers to Deploying Deep Learning Models During the COVID-19 Pandemic
Gal Vardi - Implications of the implicit bias in neural networks
Ziwei Ji - The dual of the margin: improved analyses and rates for gradient descent’s implicit bias
Qi Lei - Predicting What You Already Know Helps: Provable Self-Supervised Learning
Alessandro Scagliotti - Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems