Yoshua Bengio - More Hardware-Friendly Deep Learning

Yoshua Bengio, one of the "founding fathers" of deep neural networks and Full Professor in the Department of Computer Science and Operations Research at the University of Montreal, spoke at the ACM SIGARCH Workshop on Trends in Machine Learning held on June 25th, 2017, in Toronto, as part of the ISCA 2017 Conference.

The slides for this talk are available at https://sites.google.com/view/isca-timl.

Video "Yoshua Bengio - More Hardware-Friendly Deep Learning" from the ACM SIGARCH channel.
Video information
Uploaded: October 26, 2017, 8:59:58
Duration: 00:32:01