Symbols and Rules with Deep Learning - Ellie Pavlick | Stanford MLSys #54
Episode 54 of the Stanford MLSys Seminar Series!
Implementing Symbols and Rules with Neural Networks
Speaker: Ellie Pavlick
Abstract:
Many aspects of human language and reasoning are well explained in terms of symbols and rules. However, state-of-the-art computational models are based on large neural networks which lack explicit symbolic representations of the type frequently used in cognitive theories. One response has been the development of neuro-symbolic models which introduce explicit representations of symbols into neural network architectures or loss functions. In terms of Marr's levels of analysis, such approaches achieve symbolic reasoning at the computational level ("what the system does and why") by introducing symbols and rules at the implementation and algorithmic levels. In this talk, I will consider an alternative: can neural networks (without any explicit symbolic components) nonetheless implement symbolic reasoning at the computational level? I will describe several diagnostic tests of "symbolic" and "rule-governed" behavior and use these tests to analyze neural models of visual and language processing. Our results show that on many counts, neural models appear to encode symbol-like concepts (e.g., conceptual representations that are abstract, systematic, and modular), but not perfectly so. Analysis of the failure cases reveals that future work is needed on methodological tools for analyzing neural networks, as well as refinement of models of hybrid neuro-symbolic reasoning in humans, in order to determine whether neural networks' deviations from the symbolic paradigm are a feature or a bug.
Bio:
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab, and a Research Scientist at Google. Her research focuses on building computational models of language that are inspired by and/or informative of language processing in humans. Currently, her lab is investigating the inner workings of neural networks in order to "reverse engineer" the conceptual structures and reasoning strategies that these models use, as well as exploring the role of grounded (non-linguistic) signals for word and concept learning. Ellie's work is supported by DARPA, IARPA, NSF, and Google.
--
0:00 Presentation
33:17 Discussion
Stanford MLSys Seminar hosts: Dan Fu, Karan Goel, Fiodar Kazhamiaka, and Piero Molino
Executive Producers: Matei Zaharia, Chris Ré
Twitter:
https://twitter.com/realDanFu
https://twitter.com/krandiash
https://twitter.com/w4nderlus7
--
Check out our website for the schedule: http://mlsys.stanford.edu
Join our mailing list to get weekly updates: https://groups.google.com/forum/#!forum/stanford-mlsys-seminars/join
#machinelearning #ai #artificialintelligence #systems #mlsys #computerscience #stanford #brown #lunarlab #google #symbols #rules #deeplearning
Video "Symbols and Rules with Deep Learning - Ellie Pavlick | Stanford MLSys #54" from the Stanford MLSys Seminars channel