[CVPR 2021] Back to the Feature: Learning Robust Camera Localization from Pixels to Pose
This is the 5-minute video for our CVPR 2021 paper:
"Back to the Feature: Learning Robust Camera Localization from Pixels to Pose"
Project Page: https://psarlin.com/pixloc
Paper: https://arxiv.org/abs/2103.09213
Code: https://github.com/cvg/pixloc
Authors: Paul-Edouard Sarlin*, Ajaykumar Unagar*, Måns Larsson, Hugo Germain, Carl Toft, Viktor Larsson, Marc Pollefeys, Vincent Lepetit, Lars Hammarstrand, Fredrik Kahl, Torsten Sattler.
(* equal contributions).
Abstract:
Camera pose estimation in known scenes is a 3D geometry task recently tackled by multiple learning algorithms. Many regress precise geometric quantities, like poses or 3D points, from an input image. This either fails to generalize to new viewpoints or ties the model parameters to a specific scene. In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms. We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. Our approach is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. The system can localize in large environments given coarse pose priors but also improve the accuracy of sparse feature matching by jointly refining keypoints and poses with little overhead.
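The core idea — refining a camera pose by directly aligning deep features sampled at the projections of known 3D points — can be illustrated with a toy numpy sketch. This is not the official PixLoc code (see the repository linked above): it uses a synthetic smooth 2-channel "feature map", fixes the rotation to identity, refines only the 3-DoF translation, and uses numerical Jacobians with plain Gauss-Newton, whereas PixLoc optimizes the full 6-DoF pose with Levenberg-Marquardt over multiscale CNN features and analytic derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[100., 0., 64.], [0., 100., 64.], [0., 0., 1.]])  # toy pinhole intrinsics

def project(pts3d, t):
    """Project 3D points under translation t (rotation fixed to identity here)."""
    pc = pts3d + t
    uv = (K @ pc.T).T
    return uv[:, :2] / uv[:, 2:3]

def bilinear(fmap, uv):
    """Sample an HxWxC feature map at subpixel locations uv of shape (N, 2)."""
    x, y = uv[:, 0], uv[:, 1]
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    wx, wy = x - x0, y - y0
    return (fmap[y0, x0] * ((1 - wx) * (1 - wy))[:, None]
            + fmap[y0, x0 + 1] * (wx * (1 - wy))[:, None]
            + fmap[y0 + 1, x0] * ((1 - wx) * wy)[:, None]
            + fmap[y0 + 1, x0 + 1] * (wx * wy)[:, None])

# Synthetic smooth "deep" feature map standing in for a CNN output.
H = W = 128
ys, xs = np.mgrid[0:H, 0:W].astype(float)
fmap = np.stack([np.sin(xs / 9.0), np.cos(ys / 7.0)], axis=-1)

# 3D points and the reference features observed at the ground-truth pose.
pts3d = np.c_[rng.uniform(-2, 2, 40), rng.uniform(-2, 2, 40), rng.uniform(8, 12, 40)]
t_true = np.array([0.1, -0.2, 0.3])
f_target = bilinear(fmap, project(pts3d, t_true))

def residuals(t):
    """Feature-metric residuals: sampled features minus reference features."""
    return (bilinear(fmap, project(pts3d, t)) - f_target).ravel()

# Gauss-Newton with finite-difference Jacobians, starting from t = 0.
t = np.zeros(3)
for _ in range(20):
    r = residuals(t)
    J = np.stack([(residuals(t + e) - r) / 1e-5
                  for e in np.eye(3) * 1e-5], axis=1)
    t -= np.linalg.solve(J.T @ J + 1e-6 * np.eye(3), J.T @ r)

print(np.round(t, 3))  # converges toward t_true = [0.1, -0.2, 0.3]
```

The sketch shows why the approach is scene-agnostic: the optimizer only needs feature values and their gradients at projected points, so the same alignment machinery works for any scene once a 3D model and feature maps are available.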
Video "[CVPR 2021] Back to the Feature: Learning Robust Camera Localization from Pixels to Pose" from the Paul-Edouard Sarlin channel.
"Back to the Feature: Learning Robust Camera Localization from Pixels to Pose"
Project Page: https://psarlin.com/pixloc
Paper: https://arxiv.org/abs/2103.09213
Code: https://github.com/cvg/pixloc
Authors: Paul-Edouard Sarlin*, Ajaykumar Unagar*, Måns Larsson, Hugo Germain, Carl Toft, Viktor Larsson, Marc Pollefeys, Vincent Lepetit, Lars Hammarstrand, Fredrik Kahl, Torsten Sattler.
(* equal contributions).
Abstract:
Camera pose estimation in known scenes is a 3D geometry task recently tackled by multiple learning algorithms. Many regress precise geometric quantities, like poses or 3D points, from an input image. This either fails to generalize to new viewpoints or ties the model parameters to a specific scene. In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms. We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. Our approach is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. The system can localize in large environments given coarse pose priors but also improve the accuracy of sparse feature matching by jointly refining keypoints and poses with little overhead.
Видео [CVPR 2021] Back to the Feature: Learning Robust Camera Localization from Pixels to Pose канала Paul-Edouard Sarlin