Implicit Neural Representation Networks for Fitting Signals, Derivatives, and Integrals
Julien Martel, Postdoctoral Research Fellow at Stanford University in the Computational Imaging Lab
David B. Lindell, Postdoctoral Scholar at Stanford University and incoming Assistant Professor in the Dept. of CS at the University of Toronto
For more details: https://www.meetup.com/SV-SIGGRAPH/events/282027117/
Abstract
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations and new capabilities in neural rendering and view synthesis. However, conventional network architectures for such implicit neural representations are incapable of modeling signals at scale with fine detail and fail to represent derivatives and integrals of signals. In this talk, we describe three recent approaches to solve these challenging problems. First, we introduce sinusoidal representation networks or SIREN, which are ideally suited for representing complex natural signals and their derivatives. Using SIREN, we can represent images, wavefields, video, sound, and their derivatives, allowing us to solve differential equations using this type of neural network. Second, we introduce a new framework for solving integral equations using implicit neural representation networks. Our automatic integration framework, AutoInt, enables the calculation of any definite integral with two evaluations of a neural network. This allows fast inference and rendering when applied to neural rendering techniques based on volume rendering. Finally, we introduce a new architecture and method for scaling up implicit representations, called Adaptive Coordinate Networks (ACORN). The approach relies on a hybrid implicit–explicit representation and a learned, online multiscale decomposition of the target signal. We use ACORN to demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio (a 1000x increase in scale over previous experiments), and we reduce training times for 3D shape fitting from days to hours or minutes while improving memory requirements by over an order of magnitude.
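To make the first two ideas concrete, below is a minimal sketch (not the speakers' released code) of a SIREN-style coordinate network in PyTorch, plus a two-line illustration of the AutoInt evaluation rule described above. The layer sizes, the omega_0 = 30 frequency factor, and the fit_signal and definite_integral helpers are illustrative assumptions that follow the papers' conventions.

import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # Linear layer followed by sin(omega_0 * x), the building block of SIREN.
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN-style initialization keeps activations well distributed across layers.
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    # Coordinate network: maps e.g. (x, y) in [-1, 1]^2 to an RGB value.
    def __init__(self, in_features=2, hidden=256, layers=3, out_features=3):
        super().__init__()
        net = [SineLayer(in_features, hidden, is_first=True)]
        for _ in range(layers - 1):
            net.append(SineLayer(hidden, hidden))
        net.append(nn.Linear(hidden, out_features))
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        return self.net(coords)

def fit_signal(model, coords, values, steps=1000, lr=1e-4):
    # Illustrative fitting loop: regress the network onto sampled coordinates/values.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - values) ** 2).mean()
        loss.backward()
        opt.step()
    return model

def definite_integral(integral_net, a, b):
    # AutoInt idea (sketch): if integral_net was trained so that its derivative
    # matches the integrand, any definite integral over [a, b] costs only two
    # forward passes of the network.
    return integral_net(b) - integral_net(a)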
Bios
Julien Martel (http://www.jmartel.net/) is a Postdoctoral Research Fellow at Stanford University in the Computational Imaging Lab led by Gordon Wetzstein. His research interests are in unconventional visual sensing and processing. More specifically, his current topics of research include the co-design of hardware and algorithms for visual sensing, the design of methods for vision sensors with in-pixel computing capabilities, and the use of novel representations for visual data such as neural implicit representations.
David B. Lindell (https://davidlindell.com) is a postdoctoral scholar at Stanford University and an incoming Assistant Professor in the Department of Computer Science at the University of Toronto. He received a PhD in Electrical Engineering from Stanford University and is the recipient of the ACM SIGGRAPH 2021 Outstanding Dissertation Honorable Mention Award. His research spans the areas of computational imaging, computer vision, and machine learning with a focus on new methods for active 3D imaging and physics-based machine learning.
#NeuralRendering
Joint Event with SF Bay ACM Chapter
https://www.meetup.com/SF-Bay-ACM/
0:00 ACM Chapters Introductions
2:28 Speaker Introductions
3:26 Presentation
4:15 Signals: Images, Shapes, Audio
6:34 Differential Equations / Derivatives
7:21 SIREN: Sinusoidal Representation Networks
9:25 Related Work
10:28 SIREN Examples
11:20 Images
15:00 Audio
21:27 Videos
24:29 Poisson equation
27:15 Eikonal equation
31:13 Helmholtz equation
32:49 Wave equation
33:34 Learning priors over the space of SIREN functions
34:43 Like discrete grids or point clouds, SIREN is a data representation
35:19 With a number of benefits
37:18 Challenges Towards Large Scale Neural Representations
38:59 Challenges: Explicit / Implicit vs Efficiency / Multiscale / Pruning Chart
48:55 ACORN: a hybrid implicit-explicit architecture
49:56 Image Fitting Example (16 MP)
50:39 Scaling Up (64 MP)
50:56 ACORN Architecture
57:17 Gigapixel Image Fitting
59:06 Large-Scale 3D Shapes
1:00:36 Key question: How to operate on signals represented with coordinate-based networks?
1:02:05 AutoInt: Automatic Integration for Coordinate-based Networks
1:05:12 Volume Rendering Integration
1:06:01 Coordinate-based networks
1:06:34 Differentiation vs Integration
1:07:35 Numerical Integration Techniques
1:08:08 Automatic integration
1:08:31 Integral Network
1:10:06 AutoInt steps
1:10:45 Implementation
1:11:05 Implementation in a “compiler”
1:11:53 Example
1:12:38 Example: Tomography
1:16:37 Volume Rendering Equation (VRE)
1:18:59 Approximation of the VRE
1:19:22 AutoInt Examples
1:21:45 AutoInt: open questions
1:23:47 Summary
1:26:00 Contacts, Publications, References, Collaborators
1:26:43 More Q&A