Implicit Neural Representations with Periodic Activation Functions
-- Project page --
https://vsitzmann.github.io/siren
-- arXiv preprint --
https://arxiv.org/abs/2006.09661
-- Abstract --
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal’s spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions.
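The abstract describes a SIREN as a fully connected network with sine activations and a principled initialization scheme. A minimal stdlib-only Python sketch of one SIREN forward pass, assuming the paper's published scheme (first layer weights uniform in [-1/n, 1/n], hidden layers uniform in [-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0], with frequency factor omega_0 = 30):

```python
import math
import random

OMEGA_0 = 30.0  # frequency scaling factor used in the SIREN paper

def init_siren_layer(n_in, n_out, is_first):
    """Initialize one layer per the paper's scheme:
    first layer  ~ U(-1/n_in, 1/n_in),
    hidden layer ~ U(-sqrt(6/n_in)/omega_0, sqrt(6/n_in)/omega_0)."""
    bound = 1.0 / n_in if is_first else math.sqrt(6.0 / n_in) / OMEGA_0
    W = [[random.uniform(-bound, bound) for _ in range(n_in)]
         for _ in range(n_out)]
    b = [random.uniform(-bound, bound) for _ in range(n_out)]
    return W, b

def siren_layer(x, W, b):
    """Apply y_j = sin(omega_0 * (W_j . x + b_j)) elementwise."""
    return [math.sin(OMEGA_0 * (sum(w * xi for w, xi in zip(row, x)) + bj))
            for row, bj in zip(W, b)]

# Tiny two-layer SIREN mapping a 2-D coordinate to a scalar signal value.
random.seed(0)
W1, b1 = init_siren_layer(2, 16, is_first=True)
W2, b2 = init_siren_layer(16, 1, is_first=False)
y = siren_layer(siren_layer([0.5, -0.25], W1, b1), W2, b2)
```

Because sine is smooth, all derivatives of such a network exist and are themselves SIREN-like, which is what makes the architecture usable inside PDE losses; the layer widths and the sample coordinate above are illustrative choices, not values from the paper.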
Video "Implicit Neural Representations with Periodic Activation Functions" from the Stanford Computational Imaging Lab channel
-- Video information --
Published June 18, 2020, 4:03:59
Duration: 00:10:20