
Deep Fakes: Why Seeing Isn't Believing

Can we truly believe everything we see? Not anymore.

The last few years have seen a rise in “deep fakes” – manipulated videos where one person’s likeness is replaced with another’s, often with the intention of misleading viewers.

Deep fakes rely on a process called “deep learning” – a subset of machine learning – and are created using algorithms that function in a similar way to the human brain. They consume huge amounts of data and teach themselves to recognise patterns within it, all the while critiquing the output to detect flaws and improve techniques.
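The "consume data, critique the output, improve" loop described above is, in essence, adversarial training. The video does not name a specific architecture, but one common instantiation of the idea is a generative adversarial network (GAN), in which a generator produces fakes and a critic learns to spot them, each pushing the other to improve. Below is a minimal toy sketch of that loop in PyTorch, fitting a simple 1-D distribution rather than faces; the network sizes and training settings are illustrative assumptions, not details from the video or from any real deep-fake tool.

```python
# Toy sketch of an adversarial "generate, critique, improve" loop:
# a tiny GAN that learns to imitate samples from a normal distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data the generator must learn to imitate.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) The critic learns to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    c_loss = loss_fn(critic(real), torch.ones(64, 1)) + \
             loss_fn(critic(fake), torch.zeros(64, 1))
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()

    # 2) The generator improves by trying to fool the critic.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(critic(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real mean (about 4.0).
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```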

Whilst there are huge benefits to deep learning (as seen in the development of Google Translate, self-driving cars and image sorting), certain advancements in facial recognition technology serve to undermine the basic principles of objective truth that we take for granted.

Find out why seeing isn’t believing in the latest explainer video from iluli by Mike Lamb. Making sense of technology, one byte at a time. Learn more at https://iluli.eu

Video "Deep Fakes: Why Seeing Isn't Believing" from the channel iluli by Mike Lamb
Video information
Uploaded: 1 November 2020, 20:04:51
Duration: 00:05:02