Vincent Sitzmann: Implicit Neural Scene Representations
Talk @ Tübingen seminar series of the Autonomous Vision Group
https://uni-tuebingen.de/en/faculties/faculty-of-science/departments/computer-science/lehrstuehle/autonomous-vision/talks/
Implicit Neural Scene Representations
Vincent Sitzmann (Stanford)
Abstract: How we represent signals has major implications for the algorithms we build to analyze them. Today, most signals are represented discretely: images as grids of pixels, shapes as point clouds, audio as grids of amplitudes, etc. If images weren't pixel grids, would we be using convolutional neural networks today? What makes a good or bad representation? Can we do better? I will talk about leveraging emerging implicit neural representations for complex and large signals, such as room-scale geometry, images, audio, video, and physical signals defined via partial differential equations. By embedding an implicit scene representation in a neural rendering framework and learning a prior over these representations, I will show how we can enable 3D reconstruction from only a single posed 2D image. Finally, I will show how gradient-based meta-learning can enable fast inference of implicit representations, and how the features we learn in the process are already useful for the downstream task of semantic segmentation.
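To make the core idea concrete: an implicit neural representation is simply a network that maps continuous coordinates to signal values, so the signal is defined everywhere rather than on a fixed grid. Below is a minimal sketch of such a representation, using sine activations in the spirit of SIREN (one of the related talks listed here); the layer sizes and the frequency factor omega_0 are illustrative assumptions, not the exact architecture from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
omega_0 = 30.0  # frequency scaling applied before each sine activation

def init_layer(fan_in, fan_out):
    # Small uniform init in [-1/fan_in, 1/fan_in], roughly following
    # the scheme used for sine-activated layers (an assumption here).
    bound = 1.0 / fan_in
    return rng.uniform(-bound, bound, (fan_in, fan_out)), np.zeros(fan_out)

W1, b1 = init_layer(2, 64)
W2, b2 = init_layer(64, 64)
W3, b3 = init_layer(64, 1)

def f(coords):
    """Query the representation at arbitrary continuous coordinates.

    coords: (N, 2) array of (x, y) points in [-1, 1]^2.
    Returns an (N, 1) array of signal values (e.g. grayscale intensity).
    """
    h = np.sin(omega_0 * (coords @ W1 + b1))
    h = np.sin(omega_0 * (h @ W2 + b2))
    return h @ W3 + b3

# Unlike a pixel grid, the signal can be sampled off-grid at any
# resolution, with no interpolation step.
query = np.array([[0.0, 0.0], [0.123, -0.456]])
print(f(query).shape)  # (2, 1)
```

Fitting such a network to a specific image, shape, or audio clip is then an optimization over the weights; the talk's meta-learning result concerns making that per-signal fitting fast.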
Bio: Vincent Sitzmann just finished his PhD at Stanford University with a thesis on "Self-Supervised Scene Representation Learning". His research interest lies in neural scene representations - the way neural networks learn to represent information about our world. His goal is to allow independent agents to reason about our world given visual observations, such as inferring a complete model of a scene with information on geometry, material, lighting, etc. from only a few observations - a task that is simple for humans, but currently impossible for AI. In July, Vincent will join Joshua Tenenbaum's group at MIT CSAIL as a postdoc. https://vsitzmann.github.io/
Video "Vincent Sitzmann: Implicit Neural Scene Representations" from the channel of Andreas Geiger
Other videos on this channel
- Jon Barron - Understanding and Extending Neural Radiance Fields
- SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
- CSC2547 DeepSDF Learning Continuous Signed Distance Functions for Shape Representation
- High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
- Can We Make An Image Synthesis AI Controllable?
- KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
- Learned Initializations for Optimizing Coordinate-Based Neural Representations
- Advances in Neural Rendering (SIGGRAPH 2021 Course) Part 1 of 2
- Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains (10min talk)
- Arxiv 2021: SDF secrets
- Implicit Neural Representations with Periodic Activation Functions
- TUM AI Lecture Series - Implicit Neural Scene Representations (Vincent Sitzmann)
- Implicit Neural Representations: From Objects to 3D Scenes
- CSC2547 NeRF in the Wild Neural Radiance Fields for Unconstrained Photo Collections
- Combining the Transformers Expressivity with the CNNs Efficiency for High-Resolution Image Synthesis
- But what is a neural network? | Chapter 1, Deep learning
- Neural Scene Representation and Rendering
- Matthew Tancik: Neural Radiance Fields for View Synthesis
- Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
- MODNet: Motion and Apperance Based Moving Object Detection Network