
Deep image reconstruction from human brain activity (Paper Explained)

Can you peek into people's brains? Reading human thoughts is a long-standing dream of the AI field. This paper reads fMRI signals from a person and then reconstructs what that person's eyes currently see. This is achieved by translating the fMRI signal to features of a Deep Neural Network and then iteratively optimizing the input of the network to match those features. The results are impressive.
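As a rough sketch of that optimization loop: the features decoded from fMRI act as a fixed target, and gradient descent adjusts the image pixels until the pretrained network's features match them. Below is a minimal PyTorch sketch of this idea, not the authors' code; `extract_feats` (runs an image through a pretrained DNN and returns the matched layers as a dict) and `decoded_feats` (the features predicted from brain activity) are assumed helpers.

```python
import torch

def reconstruct(decoded_feats, extract_feats, steps=200, lr=1e-2):
    """Optimize pixels so the network's features match the decoded targets."""
    # Start from random noise and treat the pixels themselves as parameters.
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract_feats(img)  # dict: layer name -> feature tensor
        # Sum of squared feature distances over all matched layers.
        loss = sum(((feats[k] - decoded_feats[k]) ** 2).sum()
                   for k in decoded_feats)
        loss.backward()
        opt.step()
    return img.detach()
```

Matching several layers at once is the key design choice: early layers pin down edges and textures, later layers pin down object-level content.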

OUTLINE:
0:00 - Overview
1:35 - Pipeline
4:00 - Training
5:20 - Image Reconstruction
7:00 - Deep Generator Network
8:15 - Results

Paper: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633
My Video on OpenAI Microscope (what I called Atlas): https://youtu.be/Ok44otx90D4

Abstract:
The mental contents of perception and imagery are thought to be encoded in hierarchical representations in the brain, but previous attempts to visualize perceptual contents have failed to capitalize on multiple levels of the hierarchy, leaving it challenging to reconstruct internal imagery. Recent work showed that visual cortical activity measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into the hierarchical features of a pre-trained deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features. Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that our method was able to reliably produce reconstructions that resembled the viewed natural images. A natural image prior introduced by a deep generator neural network effectively rendered semantically meaningful details to the reconstructions. Human judgment of the reconstructions supported the effectiveness of combining multiple DNN layers to enhance the visual quality of generated images. While our model was solely trained with natural images, it successfully generalized to artificial shapes, indicating that our model was not simply matching to exemplars. The same analysis applied to mental imagery demonstrated rudimentary reconstructions of the subjective content. Our results suggest that our method can effectively combine hierarchical neural representations to reconstruct perceptual and subjective images, providing a new window into the internal contents of the brain.
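The deep generator network (DGN) prior mentioned in the abstract can be sketched the same way: instead of optimizing raw pixels, one optimizes the latent input of a pretrained image generator, so every candidate image stays on the generator's natural-image manifold. Again a hedged sketch under the same assumptions as above; `generator` and the latent size are illustrative, not the paper's released implementation.

```python
import torch

def reconstruct_with_prior(decoded_feats, extract_feats, generator,
                           z_dim=4096, steps=200, lr=1e-2):
    """Optimize a generator latent instead of pixels (natural-image prior)."""
    z = torch.zeros(1, z_dim, requires_grad=True)  # generator latent code
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(z)            # latent -> candidate image
        feats = extract_feats(img)    # same multi-layer features as before
        loss = sum(((feats[k] - decoded_feats[k]) ** 2).sum()
                   for k in decoded_feats)
        loss.backward()
        opt.step()
    return generator(z).detach()
```

Because the generator only outputs plausible images, this variant trades some pixel fidelity for the "semantically meaningful details" the abstract describes.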

Authors: Guohua Shen, Tomoyasu Horikawa, Kei Majima, Yukiyasu Kamitani

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher

Video information
Published: May 25, 2020, 20:38:17
Length: 00:17:24