
Inference in a Nonconceptual World, Brian Cantwell Smith

Brian Cantwell Smith, Reid Hoffman Professor of Artificial Intelligence and the Human, University of Toronto
Classical models of inference, such as those based on logic, take inference to be *conceptual* – i.e., to involve representations formed of terms, predicates, relation symbols, and the like.  Conceptual representation of this sort is assumed to reflect the structure of the world: objects of various types exemplifying properties, standing in relations, grouped together in sets, etc.  These paired, roughly algebraic assumptions (one epistemic, the other ontological) form the basis of classical logic and traditional AI (GOFAI).

In this talk, Professor Smith will argue that the world itself is not conceptual, in the sense of not consisting (at least au fond) of objects, properties, relations, etc.  That is, he will argue against the ontological assumption.  Rather, he believes that taking the world to consist of the familiar ontological furniture of objects, properties, etc. results from epistemic processes of abstraction and idealization.  Denser representations with so-called “nonconceptual content” can be closer to what is known as “ground truth”.  Deep learning models and other developments in contemporary AI can therefore be understood as initial steps toward understanding inference over surpassingly rich fields of undiscretized features.

Supported by the John Templeton Foundation

Video: “Inference in a Nonconceptual World, Brian Cantwell Smith”, from the Yale University channel
Video information
Published: December 17, 2022, 6:19:55
Duration: 01:40:59