Josh Tenenbaum: Building Machines that Learn and Think Like People (ICML 2018 invited talk)
Abstract:
Recent successes in artificial intelligence and machine learning have been largely driven by methods for sophisticated pattern recognition, including deep neural networks and other data-intensive methods. But human intelligence is more than just pattern recognition. And no machine system yet built has anything like the flexible, general-purpose commonsense grasp of the world that we can see in even a one-year-old human infant. I will consider how we might capture the basic learning and thinking abilities humans possess from early childhood, as one route to building more human-like forms of machine learning and thinking.
At the heart of human common sense is our ability to model the physical and social environment around us: to explain and understand what we see, to imagine things we could see but haven’t yet, to solve problems and plan actions to make these things real, and to build new models as we learn more about the world. I will focus on our recent work reverse-engineering these capacities using methods from probabilistic programming, program induction and program synthesis, which together with deep learning methods and video game simulation engines, provide a toolkit for the joint enterprise of modeling human intelligence and making AI systems smarter in more human-like ways.
Presented by Josh Tenenbaum
https://icml.cc/Conferences/2018/Schedule?showEvent=1867
Video: Josh Tenenbaum: Building Machines that Learn and Think Like People (ICML 2018 invited talk), from the channel Steven Van Vaerenbergh
Video information
Uploaded: September 4, 2018, 18:37:22
Duration: 01:12:24