Eccentricity-dependent Spatio-temporal Flicker Fusion for Foveated Graphics | SIGGRAPH 2021
Project website: https://www.computationalimaging.org/publications/cff/
Virtual and augmented reality (VR/AR) displays strive to provide a resolution, framerate and field of view that matches the perceptual capabilities of the human visual system, all while constrained by limited compute budgets and transmission bandwidths of wearable computing systems. Foveated graphics techniques have emerged that could achieve these goals by exploiting the falloff of spatial acuity in the periphery of the visual field. However, considerably less attention has been given to temporal aspects of human vision, which also vary across the retina. This is in part due to limitations of current eccentricity-dependent models of the visual system. We introduce a new model, experimentally measuring and computationally fitting eccentricity-dependent critical flicker fusion thresholds jointly for both space and time. In this way, our model is unique in enabling the prediction of temporal information that is imperceptible for a certain spatial frequency, eccentricity, and range of luminance levels. We validate our model with an image quality user study, and use it to predict potential bandwidth savings 7x higher than those afforded by current spatial-only foveated models. As such, this work forms the enabling foundation for new temporally foveated graphics techniques.
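The abstract describes trading off spatial and temporal resolution across the visual field. A toy sketch of that idea follows, assuming a standard cortical-magnification acuity falloff and a hypothetical eccentricity-dependent flicker fusion limit; none of the constants or function shapes below come from the paper's fitted model, they are invented for illustration.

```python
import math

# Illustrative constants (NOT the paper's fitted parameters).
E2 = 2.3          # half-resolution eccentricity in degrees, a common literature value
PEAK_CPD = 60.0   # assumed foveal acuity, cycles per degree
FULL_RATE = 144.0 # uniform baseline refresh rate in Hz

def spatial_cutoff_cpd(ecc_deg):
    """Highest resolvable spatial frequency at a given eccentricity
    (hyperbolic acuity falloff)."""
    return PEAK_CPD * E2 / (E2 + ecc_deg)

def temporal_cutoff_hz(ecc_deg):
    """Hypothetical flicker-fusion limit: rises into the near periphery,
    then falls off far out. Qualitative shape only; invented constants."""
    return 60.0 + 20.0 * math.exp(-((ecc_deg - 30.0) ** 2) / (2 * 20.0 ** 2))

def bandwidth_ratio(max_ecc_deg=60.0, step=1.0):
    """Integrate per-ring pixel-rate demand (annulus area x spatial
    cutoff squared x temporal cutoff) for foveated vs. uniform
    rendering; returns the foveated/uniform bandwidth ratio."""
    foveated = uniform = 0.0
    e = 0.0
    while e < max_ecc_deg:
        ring_area = 2 * math.pi * (e + step / 2) * step  # annulus, deg^2
        foveated += ring_area * spatial_cutoff_cpd(e) ** 2 * temporal_cutoff_hz(e)
        uniform += ring_area * PEAK_CPD ** 2 * FULL_RATE
        e += step
    return foveated / uniform
```

The ratio returned by `bandwidth_ratio` is far below 1 because both cutoffs drop (or stay well below the uniform baseline) away from the fovea; the paper's measured model quantifies this rigorously, whereas the numbers here are placeholders.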
Video from the Stanford Computational Imaging Lab channel.
Video information
Published: April 29, 2021, 21:36:57
Duration: 00:03:01