
Max Welling: Intelligence per Kilowatthour (ICML 2018 invited talk)

Abstract: In the 19th century the world was revolutionized because we could transform energy into useful work. The 21st century is being revolutionized by our ability to transform information (or data) into useful tools. Driven by Moore's law and the exponential growth of data, artificial intelligence is permeating every aspect of our lives. But intelligence is not free: it costs energy, and therefore money. Evolution has faced this problem for millions of years and made brains about 100x more energy efficient than modern hardware (or, as in the case of the sea squirt, decided that it should eat its brain once it was no longer necessary). I will argue that energy will soon be one of the determining factors in AI. Either companies will find it too expensive to run energy-hungry ML tools (such as deep learning) to power their AI engines, or the heat dissipation in edge devices will be too high to be safe. The next battleground in AI might well be a race for the most energy-efficient combination of hardware and algorithms.

In this talk I will discuss some ideas that could address this problem. The technical hammer that I will exploit is the perfect reflection of the energy-versus-information balancing act we must address: the free energy, which is the expected energy minus the entropy of a system. Using the free energy we develop a Bayesian interpretation of deep learning which, with the appropriate sparsity-inducing priors, can be used both to prune neurons and to quantize parameters to low precision. The second hammer I will exploit is sigma-delta modulation (also known as herding) to introduce spiking into deep learning, in an attempt to avoid computation in the absence of changes.
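The sigma-delta idea in the last sentence can be sketched in a few lines. This is a minimal illustration of the general encoding scheme, not the talk's actual implementation; the function names and parameters are invented for this sketch. Each step transmits only the quantized change of the signal, and the rounding error is carried forward, so an unchanged input costs zero spikes (and hence no downstream computation) while the reconstruction error stays bounded.

```python
def sigma_delta_encode(xs, step=1.0):
    """Emit integer spike counts encoding *changes* in the signal.

    `acc` is the value the receiver currently holds. Each step we
    quantize the difference between the new input and `acc`, so an
    unchanged input emits 0 spikes, and the rounding error is
    automatically carried over to the next step instead of being lost.
    """
    acc = 0.0
    spikes = []
    for x in xs:
        n = int(round((x - acc) / step))   # quantized change
        spikes.append(n)
        acc += n * step                    # receiver-side estimate
    return spikes


def sigma_delta_decode(spikes, step=1.0):
    """Reconstruct the signal by accumulating the emitted changes."""
    out, acc = [], 0.0
    for n in spikes:
        acc += n * step
        out.append(acc)
    return out


# A slowly varying input: the constant stretch emits zero spikes,
# and the reconstruction error stays bounded by step/2.
signal = [0.0, 1.0, 1.0, 2.6]
spikes = sigma_delta_encode(signal)   # → [0, 1, 0, 2]
approx = sigma_delta_decode(spikes)
```

The key design point is carrying the residual: naive rounding of each change would let quantization errors accumulate without bound, whereas quantizing against the receiver's running estimate keeps the error within one quantization step, which is what makes aggressive, spike-like quantization viable.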

Video "Max Welling: Intelligence per Kilowatthour (ICML 2018 invited talk)" from the channel Steven Van Vaerenbergh.
Video information
Uploaded: August 14, 2018, 2:52:28
Duration: 01:01:30
Other videos on the channel
Frank Hutter and Joaquin Vanschoren: Automatic Machine Learning (NeurIPS 2018 Tutorial)
Dawn Song: AI and Security: Lessons, Challenges and Future Directions (ICML 2018 invited talk)
Tamara Broderick: Variational Bayes and Beyond: Bayesian Inference for Big Data (ICML 2018 tutorial)
Demis Hassabis, CEO, DeepMind Technologies - The Theory of Everything
Transistors & The End of Moore's Law
JuliaCon 2016 (Keynote) | Fortress Features and Lessons Learned | Guy Steele
Deep Reinforcement Learning in the Enterprise: Bridging the Gap from Games to Industry
AI "Stop Button" Problem - Computerphile
Building Machines that Learn & Think Like People - Prof. Josh Tenenbaum ICML2018
Artificial Intelligence per Kilowatt-hour: Max Welling, University of Amsterdam
Shawe-Taylor and Rivasplata: Statistical Learning Theory - a Hitchhiker's Guide (NeurIPS 2018)
Pedro Domingos: "The Master Algorithm" | Talks at Google
Benjamin Recht: Optimization Perspectives on Learning to Control (ICML 2018 tutorial)
The wonderful and terrifying implications of computers that can learn | Jeremy Howard
Koray Kavukcuoglu: From Generative Models to Generative Agents (ICLR 2018 invited talk)
J. Z. Kolter and A. Madry: Adversarial Robustness - Theory and Practice (NeurIPS 2018 Tutorial)
Josh Tenenbaum: Building Machines that Learn and Think Like People (ICML 2018 invited talk)
NVIDIA and Deep Learning Research with Bryan Catanzaro: GCPPodcast 119
[DeepBayes2018]: Day 3, Invited talk 1. Advanced methods of variational inference
Google's Deep Mind Explained! - Self Learning A.I.