
Real-Time Visual Localisation and Mapping with a Single Camera

In my work over the past five years I have generalised the Simultaneous Localisation and Mapping (SLAM) methodology of sequential probabilistic mapping, developed to enable mobile robots to navigate in unknown environments, to demonstrate real-time 3D motion estimation and visual scene mapping with an agile single camera. Via my MonoSLAM algorithm, a webcam attached to a laptop becomes a low-cost but high-performance position and mapping sensor, which can be used by advanced mobile robots or coupled to a wearable computer for personal localisation. When hard real-time performance (e.g. 30Hz+) is required, the limited processing resources of practical computers mean that fundamental issues of uncertainty propagation and selective visual attention must be addressed via the rigorous application of methods from probabilistic inference. The result is an approach which harnesses background knowledge of the scenario with the aim of obtaining maximum value from visual processing. I will present recent advances in information-theoretic active search, mosaicing, feature initialisation and surface estimation, and compare my work with other state-of-the-art approaches to real-time tracking and mapping. Practically, this technology has a host of interesting potential applications in many areas of robotics, wearable computing and augmented reality. The presentation will include a live demonstration.
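The abstract does not spell out the estimation machinery, but MonoSLAM is commonly described as an extended-Kalman-filter SLAM system over a joint state of camera pose and 3D feature positions. The sketch below is a generic EKF predict/update step in Python, not the author's implementation; the motion model f, measurement model h, their Jacobians F and H, and the noise covariances Q and R are placeholders that the actual system would supply.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the joint camera-and-feature state through the motion model.

    x : state mean (camera pose/velocity stacked with 3D feature positions)
    P : state covariance
    f : motion model, F : its Jacobian at x, Q : process noise covariance
    """
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Fuse one image measurement of a mapped feature (e.g. its pixel location).

    z : measured image position, h : projection of the feature into the image,
    H : Jacobian of h at x, R : measurement noise covariance
    """
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In active-search terms, the innovation covariance S is, roughly, what bounds the image region in which each feature needs to be searched for, which is one sense in which uncertainty propagation and selective visual attention are connected in this line of work.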

Video "Real-Time Visual Localisation and Mapping with a Single Camera" from the Microsoft Research channel
Video information
6 September 2016, 22:52:50
Duration: 01:09:41