Matthew Tancik: Neural Radiance Fields for View Synthesis
Talk @ Tübingen seminar series of the Autonomous Vision Group
https://uni-tuebingen.de/en/faculties/faculty-of-science/departments/computer-science/lehrstuehle/autonomous-vision/talks/
Neural Radiance Fields for View Synthesis
Matthew Tancik (UC Berkeley)
Abstract: In this talk I will present our recent work on Neural Radiance Fields (NeRFs) for view synthesis. We achieve state-of-the-art results for synthesizing novel views of scenes with complex geometry and view-dependent effects from a sparse set of input views by optimizing an underlying continuous volumetric scene function, parameterized as a fully-connected deep network. In this work we combine recent advances in coordinate-based neural representations with classic methods for volumetric rendering. In order to recover high-frequency content in the scene, we find that it is necessary to map the input coordinates to a higher-dimensional space using Fourier features before feeding them through the network. In our follow-up work we use Neural Tangent Kernel analysis to show that this is equivalent to transforming our network into a stationary kernel with tunable bandwidth.
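The Fourier-feature mapping described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the talk's exact implementation: it assumes a random Gaussian projection matrix `B` (the hypothetical `scale` parameter tunes the kernel bandwidth mentioned in the NTK analysis) and embeds 3D coordinates before they would be fed to the MLP.

```python
import numpy as np

def fourier_features(coords, num_frequencies=256, scale=10.0, seed=0):
    """Map low-dimensional coordinates to a higher-dimensional space:
    gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], with B ~ N(0, scale^2).
    The standard deviation `scale` acts as a bandwidth knob."""
    rng = np.random.default_rng(seed)
    d = coords.shape[-1]
    B = rng.normal(0.0, scale, size=(num_frequencies, d))   # (m, d)
    proj = 2.0 * np.pi * coords @ B.T                       # (n, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)  # (n, 2m)

# 3D sample points -> 512-dimensional embeddings fed to the network
# in place of the raw xyz coordinates.
pts = np.random.rand(1024, 3)
emb = fourier_features(pts)
print(emb.shape)  # (1024, 512)
```

A plain positional encoding with fixed power-of-two frequencies (as in the original NeRF paper) is a special case of this mapping; the random-Gaussian variant is the one analyzed in the follow-up Fourier-features work.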
Bio: Matthew studied Computer Science and Physics at the Massachusetts Institute of Technology, where he received his bachelor's degree in 2016 and master's degree in 2017. During his master's he worked under the supervision of Ramesh Raskar and Fredo Durand and published his thesis on non-line-of-sight imaging using data-driven approaches. He began his PhD in 2018 at UC Berkeley under the supervision of Ren Ng. He is currently interested in exploring the intersection of vision and graphics for robotic perception and view synthesis applications. https://www.matthewtancik.com/
Video "Matthew Tancik: Neural Radiance Fields for View Synthesis" from the channel of Andreas Geiger
Other videos from the channel
- Vincent Sitzmann: Implicit Neural Scene Representations
- TUM AI Lecture Series - Understanding and Extending Neural Radiance Fields (Jonathan T. Barron)
- For the Love of Physics - Walter Lewin - May 16, 2011
- Deep Learning: A Crash Course
- NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections [updated]
- Deformable Neural Radiance Fields
- PlenOctrees for Real-time Rendering of Neural Radiance Fields
- The Insane Biology of: The Octopus
- NeRF: Neural Radiance Fields
- Matthias Niessner - Why Neural Rendering is Super Cool!
- [ECCV 2020] NeRF: Neural Radiance Fields (10 min talk)
- Learning 3D Reconstruction in Function Space
- But what is a Neural Network? | Deep learning, chapter 1
- NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
- Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
- Andrej Karpathy - AI for Full-Self Driving at Tesla
- Zhengqi Li: Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
- SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
- Jon Barron - Understanding and Extending Neural Radiance Fields
- AutoInt: Automatic Integration for Fast Neural Volume Rendering | CVPR 2021