#041 - Biologically Plausible Neural Networks - Dr. Simon Stringer
Dr. Simon Stringer obtained his Ph.D. in mathematical state space control theory and has been a Senior Research Fellow at Oxford University for over 27 years. Simon is the director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, based within the Oxford University Department of Experimental Psychology. His department covers vision, spatial processing, motor function, language and consciousness -- in particular, how the primate visual system learns to make sense of complex natural scenes. Dr. Stringer's laboratory houses a team of theoreticians who develop computer models of a range of different aspects of brain function, investigating the neural and synaptic dynamics that underpin it. An important matter here is the feature-binding problem, which concerns how the visual system represents the hierarchical relationships between features: the visual system must represent hierarchical binding relations across the entire visual field, at every spatial scale and level in the hierarchy of visual primitives.
We discuss self-organised behaviour, complex information processing, invariant sensory representations and hierarchical feature binding, all of which emerge when you build biologically plausible neural networks with temporal spiking dynamics.
00:00:00 Tim Intro
00:09:31 Show kickoff
00:14:37 Hierarchical feature binding and timing of action potentials
00:30:16 Hebb to Spike-timing-dependent plasticity (STDP)
00:35:27 Encoding of shape primitives
00:38:50 Is imagination working in the same place in the brain
00:41:12 Compare to supervised CNNs
00:45:59 Speech recognition, motor system, learning mazes
00:49:28 How practical are these spiking NNs
00:50:19 Why simulate the human brain
00:52:46 How much computational power do you gain from differential timings
00:55:08 Adversarial inputs
00:59:41 Generative / causal component needed?
01:01:46 Modalities of processing i.e. language
01:03:42 Understanding
01:04:37 Human hardware
01:06:19 Roadmap of NNs?
01:10:36 Interpretability methods for these new models
01:13:03 Won't GPT just scale and do this anyway?
01:15:51 What about trace learning and transformation learning
01:18:50 Categories of invariance
01:19:47 Biological plausibility
Pod version: https://anchor.fm/machinelearningstreettalk/episodes/041---Biologically-Plausible-Neural-Networks---Dr--Simon-Stringer-ept4db
https://www.neuroscience.ox.ac.uk/research-directory/simon-stringer
https://en.wikipedia.org/wiki/Simon_Stringer
https://www.linkedin.com/in/simon-stringer-a3b239b4/
"A new approach to solving the feature-binding problem in primate vision"
https://royalsocietypublishing.org/doi/10.1098/rsfs.2018.0021
James B. Isbister, Akihiro Eguchi, Nasir Ahmad, Juan M. Galeazzi, Mark J. Buckley and Simon Stringer
Simon's department is looking for funding; please do get in touch with him if you can facilitate this.
#machinelearning #neuroscience
Published 4 February 2021 at 1:43:14. Duration: 01:27:06.