
Scott Aaronson - Avoiding Existential Risks: Better Safe than Sorry

Scott discusses the perils of predicting the timing of future technologies - we can say far more about what technological capabilities could ultimately be achievable than about exactly when they will arrive - and any projected timeline for a particular technology is downstream of whether we have a functioning civilization around to develop it. Wishing he could be more optimistic about the future, Scott discusses the turn away from Enlightenment values and the current political turmoil.
People used to make fun of how bad AI was - machine translation and image recognition, for instance - until those systems started working. Scott doesn't think superintelligent AI will end up being as near-term a risk as things like climate change - though it's good that people are now thinking about how to mitigate AI risk - but how do we know whether we are making progress?
In the meantime, we are grappling with relevant moral questions and how to implement solutions in AI (e.g. trolley problems in self-driving cars).

Relevant blog posts
- 'Better safe than sorry': https://www.scottaaronson.com/blog/?p=334 "As a concerned citizen of Planet Earth, I demand that the LHC begin operations as soon as possible, at as high energies as possible, and continue operating until such time as it is proven completely safe to turn it off."
- 'Quickies': https://www.scottaaronson.com/blog/?p=3553 "I suppose this is as good a place as any to say that my views on AI risk have evolved. A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game. But now that we know these things, I think intellectual honesty requires updating on them. And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.” "

Bio: Scott Aaronson is a theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
He blogs at Shtetl-Optimized: https://www.scottaaronson.com/blog/

#ExistentialRisk #XRisk #ArtificialIntelligence #AIRisk #ClimateChange #Prediction
Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

b) Donating
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: https://www.patreon.com/scifuture

c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards,
Adam Ford
- Science, Technology & the Future

Published: 16 January 2019, 19:36:24
Duration: 00:21:28