Building Neural Network Models That Can Reason
Deep learning has had enormous success on perceptual tasks but still struggles to provide a model of inference. To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and NSM designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning and to generalize well from a modest amount of data. The Neural State Machine (NSM) design also emphasizes a more symbolic form of internal computation, represented as attention over symbols, each of which has a distributed representation. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
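The "attention over symbols" idea mentioned above can be illustrated with a minimal sketch: the model's internal state is a softmax distribution over a fixed symbol vocabulary, and the resulting state vector is the attention-weighted mix of the symbols' distributed (vector) representations. This is not the authors' implementation; all names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_symbols, dim = 5, 8
# One distributed (vector) representation per symbol in the vocabulary.
symbol_embeddings = rng.normal(size=(num_symbols, dim))

def attend_over_symbols(query):
    """Soft-select a symbol: softmax over query-symbol similarities."""
    scores = symbol_embeddings @ query               # (num_symbols,)
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # attention distribution
    state = weights @ symbol_embeddings              # expected symbol vector
    return weights, state

query = rng.normal(size=dim)
weights, state = attend_over_symbols(query)
print(weights)      # a probability distribution over the 5 symbols
print(state.shape)  # (8,) — a distributed representation of the soft symbol
```

Because the state is a distribution over a discrete vocabulary rather than an arbitrary vector, each reasoning step remains inspectable: one can read off which symbols the model is attending to.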
See more at https://www.microsoft.com/en-us/research/video/building-neural-network-models-that-can-reason/
Video “Building Neural Network Models That Can Reason” from the Microsoft Research channel.