DeepONet: Learning nonlinear operators based on the universal approximation theorem of operators.
George Karniadakis, Brown University
Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions; however, a less-known but powerful result is that a NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the potential of NNs in learning any continuous operator or complex system from scattered data. To realize this theorem, we design a new NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study, in particular, different formulations of the input function space and their effect on the generalization error.
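The branch/trunk decomposition described in the abstract can be sketched in a few lines: the branch net maps the input function sampled at m fixed sensor locations to p features, the trunk net maps an output location y to p features, and their dot product approximates G(u)(y). The sketch below is a minimal illustration with untrained random weights, not the trained architecture from the talk; the layer sizes and the helper names `mlp`, `init_mlp`, and `deeponet` are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    # one (weight, bias) pair per layer; random untrained weights
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # feed-forward net: tanh on hidden layers, linear output layer
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 20, 10  # m sensor points, p shared output features
branch = init_mlp([m, 32, p], rng)  # encodes the sampled input function u
trunk = init_mlp([1, 32, p], rng)   # encodes the output location y

def deeponet(u_sensors, y):
    b = mlp(branch, u_sensors)        # branch features, shape (p,)
    t = mlp(trunk, np.atleast_1d(y))  # trunk features, shape (p,)
    return float(b @ t)               # dot product approximates G(u)(y)

# example: evaluate the (untrained) operator network on u(x) = sin(2*pi*x)
x = np.linspace(0.0, 1.0, m)
u = np.sin(2.0 * np.pi * x)
value = deeponet(u, 0.5)
```

In a real DeepONet both nets are trained jointly on (input function, output location, output value) triples; the key design point visible even in this sketch is that the input function enters only through its values at the m fixed sensors, while y can be any point in the output domain.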
Video "DeepONet: Learning nonlinear operators based on the universal approximation theorem of operators" from the MITCBMM channel.