On the Measure of Intelligence by François Chollet - Part 2: Human Priors (Paper Explained)
In this part, we go much deeper into the relationship between intelligence, generality, skill, experience, and prior knowledge, and take a close look at the priors built into humans. This forms the basis for comparing the intelligence of humans and AI systems.
OUTLINE:
0:00 - Intro & Recap
3:00 - Optimize for Generality
5:45 - Buying Skill with Data and Priors
12:40 - The Human Scope
17:30 - Human Priors
24:05 - Core Knowledge
28:50 - Comments & Conclusion
Paper: https://arxiv.org/abs/1911.01547
Tim Scarfe's Video: https://youtu.be/GpWLZUbPhr0
Abstract:
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
Authors: François Chollet
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Video "On the Measure of Intelligence by François Chollet - Part 2: Human Priors (Paper Explained)" from the Yannic Kilcher channel