NAISys 2020: Sparsity in the Neocortex, and its Implications for Machine Learning Poster Walkthrough
Our VP of Research Subutai Ahmad walks us through the poster he presented at the NAISys Conference in November 2020.
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, which are extremely sparse. Why is this? Are there benefits to sparsity? In this poster, we review how sparsity is deeply ingrained in the brain. We then show how insights from the brain can be applied to practical AI systems. We show that sparse representations are generally not subject to interference and are extremely robust, as long as the underlying dimensionality is sufficiently high. A key property is that the ratio of the operable volume around a sparse vector to the volume of the full representational space decreases exponentially with dimensionality. We then analyze computationally efficient sparse networks containing both sparse weights and sparse activations. Through simulations on popular benchmark datasets, we show that sparse networks are more robust than dense networks, and more than 50 times faster than dense networks on FPGA platforms.
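The volume-ratio property described above can be illustrated with a small combinatorial sketch. This is not the poster's code; it is a minimal illustration, assuming binary vectors with n dimensions and a active bits, where a "false match" means another random sparse vector overlaps a given one in at least theta positions. The sparsity level and threshold chosen below are hypothetical.

```python
import math

def sparse_match_ratio(n, a, theta):
    """Fraction of all a-sparse binary vectors of length n that overlap
    a fixed a-sparse vector in at least theta active positions.
    This is the 'matching volume' divided by the total representational volume."""
    total = math.comb(n, a)  # all possible a-sparse vectors
    # Count vectors sharing exactly b active bits, for b = theta .. a:
    matching = sum(math.comb(a, b) * math.comb(n - a, a - b)
                   for b in range(theta, a + 1))
    return matching / total

# Holding sparsity fixed (~6% active) and requiring half the bits to
# overlap, the chance of accidental interference shrinks rapidly as
# dimensionality grows:
for n in (64, 128, 256, 512):
    a = n // 16        # hypothetical sparsity choice
    theta = a // 2     # hypothetical match threshold
    print(n, sparse_match_ratio(n, a, theta))
```

As n increases, the printed ratios fall steeply toward zero, which is the sense in which high-dimensional sparse representations resist interference.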
Link to poster: https://numenta.com/neuroscience-research/research-publications/posters/naisys-2020-sparsity-and-its-implications-for-machine-learning
For additional resources on this topic from NAISys, you can read our whitepaper here: https://numenta.com/neuroscience-research/research-publications/papers/Sparsity-Enables-50x-Performance-Acceleration-Deep-Learning-Networks
You can also watch a re-recording of Jeff’s NAISys talk here: https://www.youtube.com/watch?v=mGSG7I9VKDU
- - - - -
Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.
Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://tinyurl.com/NumentaNewsDigest
Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter
Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta
Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/
Our Website:
https://numenta.com/
Video: "NAISys 2020: Sparsity in the Neocortex, and its Implications for Machine Learning Poster Walkthrough," from the Numenta channel.