Agnieszka Grabska-Barwińska - Neuroscience-inspired analysis of machine learning
Recorded at the ML in PL 2019 Conference, held at the University of Warsaw, 22–24 November 2019.
Agnieszka Grabska-Barwińska (Google DeepMind)
Abstract:
Machine learning has recently proven its worth by solving problems that are notoriously difficult for humans, surpassing us not only in games such as chess and Go, but also in real-world settings such as control (robots, power plants) and recommendation and detection systems. As our machines scale to increasingly ambitious challenges, the importance of understanding their inner workings grows accordingly. Can we understand their decisions? Can we extract the algorithms they implicitly discover? Can we learn from them? Similar questions have been at the forefront of neuroscience research for decades. In this talk, I will draw on experiences from my academic journey, which began with my work as a neuroscientist focused on the question "How does the brain work?" before turning to a career primarily focused on understanding how artificial minds work. I will present a number of personal case studies on understanding modern large-scale artificial intelligence systems, focusing in particular on whether neuroscience-inspired techniques can help us gain insight into the inner workings of artificial minds.
References:
Hubel and Wiesel, 1959
Receptive fields of single neurones in the cat's striate cortex. Journal of Physiology 148, 574–591
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/
Grabska-Barwińska et al., 2009
Contrast independence of cardinal preference: stable oblique effect in orientation maps of ferret visual cortex. European Journal of Neuroscience 29, 1258–1270
http://homepage.ruhr-uni-bochum.de/klaus-peter.hoffmann/pdf_hoffmann/grabska_ejn_09.pdf
Ohki et al., 2005
Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature 433, 597–603
https://www.nature.com/articles/nature03274
Mnih et al., 2015
Human-level control through deep reinforcement learning. Nature 518(7540), 529–533
https://www.nature.com/articles/nature14236
van Hasselt et al., 2015
Deep Reinforcement Learning with Double Q-learning. CoRR
https://arxiv.org/abs/1509.06461
Kirkpatrick et al., 2017
Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114, 3521–3526
https://www.pnas.org/content/114/13/3521
Veness et al., 2017
Online learning with gated linear networks. CoRR
https://arxiv.org/abs/1712.01897
Graves et al., 2016
Hybrid computing using a neural network with dynamic external memory. Nature 538, 471–476
https://www.nature.com/articles/nature20101
Wayne et al., 2018
Unsupervised predictive memory in a goal-directed agent. CoRR
https://arxiv.org/abs/1803.10760
Jaderberg et al., 2019
Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science 364, 859–865
https://science.sciencemag.org/content/364/6443/859
Bapst et al., 2020
Unveiling the predictive power of static structure in glassy systems. Nature Physics 16, 448–454
https://www.nature.com/articles/s41567-020-0842-8
Video: Agnieszka Grabska-Barwińska - Neuroscience-inspired analysis of machine learning, from the ML in PL channel