Scott Aaronson - Avoiding Existential Risks: Better Safe than Sorry
Scott discusses the perils of predicting the timing of future technologies - we can say far more about what technological capabilities could ultimately be achievable than about exactly when they will arrive - however, any projected timeline for a particular technology is downstream of whether we have a functioning civilization around to develop it. In wishing he could be more optimistic about the future, Scott discusses a turning away from Enlightenment values and current political turmoil.
People used to make fun of how bad AI was - at machine translation and image recognition, for instance - until those things started working. Scott doesn't think superintelligent AI will end up being as near-term a risk as things like climate change - though it's good that people are now thinking about how to mitigate AI risk - but how do we know we are making progress?
In the meantime we are grappling with relevant moral questions and how to implement solutions in AI (e.g. trolley problems in self-driving cars).
Relevant blog posts:
- 'Better safe than sorry': https://www.scottaaronson.com/blog/?p=334 "As a concerned citizen of Planet Earth, I demand that the LHC begin operations as soon as possible, at as high energies as possible, and continue operating until such time as it is proven completely safe to turn it off."
- 'Quickies': https://www.scottaaronson.com/blog/?p=3553 "I suppose this is as good a place as any to say that my views on AI risk have evolved. A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game. But now that we know these things, I think intellectual honesty requires updating on them. And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.” "
Bio: Scott Aaronson is a theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
He blogs at Shtetl-Optimized: https://www.scottaaronson.com/blog/
#ExistentialRisk #XRisk #ArtificialIntelligence #AIRisk #ClimateChange #Prediction
Many thanks for watching!
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: https://www.patreon.com/scifuture
c) Sharing the media SciFuture creates: http://scifuture.org
Kind regards,
Adam Ford
- Science, Technology & the Future
Video information
Published: 16 January 2019, 19:36:24
Duration: 00:21:28