Agnieszka Grabska-Barwińska - Neuroscience-inspired analysis of machine learning

Recorded at the ML in PL 2019 Conference, held at the University of Warsaw, 22–24 November 2019.

Agnieszka Grabska-Barwińska (Google DeepMind)

Abstract:
Machine learning has recently proven its worth in solving problems that are notoriously difficult for humans, surpassing us not just in games such as Chess or Go, but also in real-world settings such as control (robots, power plants) and recommendation and detection systems. As our machines scale to increasingly ambitious challenges, the importance of understanding their inner workings necessarily grows. Can we understand their decisions? Can we extract the algorithms they implicitly discover? Can we learn from them? Similar questions have been at the forefront of neuroscience research for decades. In this talk, I will draw upon experiences from my own academic journey, which began in neuroscience with the question "How does the brain work?" before transitioning into a career primarily focused on understanding how artificial minds work. I will present a number of personal case studies on understanding modern large-scale artificial intelligence systems, focusing particularly on whether neuroscience-inspired techniques can help us gain insight into their inner workings.
References:

Hubel and Wiesel, 1959
Receptive fields of single neurones in the cat's striate cortex. J Physiol 148, 574–591
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/

Grabska-Barwińska et al., 2009
Contrast independence of cardinal preference: stable oblique effect in orientation maps of ferret visual cortex. European Journal of Neuroscience 29, 1258–1270
http://homepage.ruhr-uni-bochum.de/klaus-peter.hoffmann/pdf_hoffmann/grabska_ejn_09.pdf

Ohki et al., 2005
Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature 433, 597–603
https://www.nature.com/articles/nature03274

Mnih et al., 2015
Human-level control through deep reinforcement learning. Nature 518(7540), 529–533
https://www.nature.com/articles/nature14236

van Hasselt et al., 2015
Deep Reinforcement Learning with Double Q-learning. CoRR
https://arxiv.org/abs/1509.06461

Kirkpatrick et al., 2017
Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114, 3521–3526
https://www.pnas.org/content/114/13/3521

Veness et al., 2017
Online learning with gated linear networks. CoRR
https://arxiv.org/abs/1712.01897

Graves et al., 2016
Hybrid computing using a neural network with dynamic external memory. Nature 538, 471–476
https://www.nature.com/articles/nature20101

Wayne et al., 2018
Unsupervised predictive memory in a goal-directed agent. CoRR
https://arxiv.org/abs/1803.10760

Jaderberg et al., 2019
Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science 364, 859–865
https://science.sciencemag.org/content/364/6443/859

Bapst et al., 2020
Unveiling the predictive power of static structure in glassy systems. Nature Physics 16, 448–454
https://www.nature.com/articles/s41567-020-0842-8

Video: Agnieszka Grabska-Barwińska - Neuroscience-inspired analysis of machine learning, from the ML in PL channel.
Video information
Uploaded: 18 September 2020, 0:18:39
Duration: 00:47:36