VARI-SOUND: A Varifocal Lens for Sound
Gianluca Memoli, Letizia Chisari, Jonathan P. Eccles, Mihai Caleap, Bruce W. Drinkwater, Sriram Subramanian

CHI '19: ACM CHI Conference on Human Factors in Computing Systems
Session: Sound-based Interaction

Abstract
Centuries of development in optics have given us passive devices (lenses, mirrors, and filters) to enrich audience immersion with light effects, but there is nothing similar for sound. Beam-forming in concert halls and outdoor gigs still requires a large number of speakers, while headphones remain the state of the art for personalized audio immersion in VR. In this work, we show how 3D-printed acoustic metasurfaces, assembled into the equivalent of optical systems, may offer a different solution. We demonstrate how to build them and how to apply simple design tools, such as the thin-lens equation, to sound. We present key acoustic devices, such as a "collimator", which transforms a standard computer speaker into an acoustic "spotlight", and a "magnifying glass", which creates sound sources that appear to come from locations distinct from the speaker itself. Finally, we demonstrate an acoustic varifocal lens, discussing applications equivalent to auto-focus cameras and VR headsets, and the limitations of the technology.
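The abstract mentions the thin-lens equation as a design tool carried over from optics to acoustics. A minimal sketch of that relation, assuming the standard optics form 1/f = 1/d_obj + 1/d_img (the function name and variables here are illustrative, not from the paper):

```python
def image_distance(f, d_obj):
    """Thin-lens equation, 1/f = 1/d_obj + 1/d_img, solved for d_img.

    f     -- focal length of the (acoustic) lens, in metres
    d_obj -- distance from the sound source (speaker) to the lens, in metres
    Returns d_img, the distance from the lens at which the sound refocuses.
    """
    return 1.0 / (1.0 / f - 1.0 / d_obj)

# Example: a lens with f = 0.2 m and a speaker 0.5 m in front of it
# refocuses the sound about 0.33 m behind the lens.
print(round(image_distance(0.2, 0.5), 3))  # 0.333
```

Under this model, moving the focal point (as the varifocal lens in the talk does) corresponds to changing f while the source distance stays fixed.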

DOI: https://doi.org/10.1145/3290605.3300713
WEB: https://chi2019.acm.org/

Recorded at the ACM CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, May 4-9, 2019

Video "VARI-SOUND: A Varifocal Lens for Sound" from the ACM SIGCHI channel
Video information
Uploaded: 24 July 2019, 13:39:55
Duration: 00:22:14