Disambiguating Monocular Depth Estimation with a Single Transient (ECCV 2020)
Monocular depth estimation algorithms successfully predict the relative depth order of objects in a scene. However, because of the fundamental scale ambiguity associated with monocular images, these algorithms fail at correctly predicting true metric depth. In this work, we demonstrate how a depth histogram of the scene, which can be readily captured using a single-pixel time-resolved detector, can be fused with the output of existing monocular depth estimation algorithms to resolve the depth ambiguity problem. We validate this novel sensor fusion technique experimentally and in extensive simulation. We show that it significantly improves the performance of several state-of-the-art monocular depth estimation algorithms.
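The core idea of fusing a scene depth histogram with a relative depth map can be illustrated with a minimal sketch. The function below is a hypothetical, simplified stand-in for the paper's fusion step: it maps each predicted relative depth value through the inverse CDF of a measured depth histogram (quantile matching), recovering metric scale while preserving depth ordering. All names and the specific matching strategy here are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def rescale_depth_to_histogram(rel_depth, hist_counts, bin_edges):
    """Hypothetical sketch: rescale a relative (scale-ambiguous) depth
    map so its value distribution matches a measured scene depth
    histogram, e.g. one captured by a single-pixel transient detector.
    """
    # Empirical CDF of the measured depth histogram.
    cdf = np.cumsum(hist_counts).astype(float)
    cdf /= cdf[-1]
    bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])

    # Normalized rank of each predicted depth value in [0, 1];
    # this keeps the monocular network's depth ordering intact.
    flat = rel_depth.ravel()
    ranks = np.argsort(np.argsort(flat)) / (flat.size - 1)

    # Push each rank through the inverse CDF of the measured
    # histogram to obtain a metric depth estimate.
    metric = np.interp(ranks, cdf, bin_centers)
    return metric.reshape(rel_depth.shape)
```

Because only the per-pixel ranks are reused, this kind of matching resolves the global scale ambiguity without altering the relative depth order the monocular network predicted.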
Video "Disambiguating Monocular Depth Estimation with a Single Transient (ECCV 2020)" from the Stanford Computational Imaging Lab channel.
Video information
Published: 19 July 2020, 4:17:11
Duration: 00:05:47