Elon Musk's Final Warning About AI: Should We Create a Digital Superintelligence?
Elon Musk has warned about the dangers of AI on many occasions. He is one of many public thinkers who have expressed concern about the risks of artificial intelligence, particularly what Musk calls a digital superintelligence. As artificial intelligence systems continue to improve, they raise a fundamental question about the survival of our species.
The dangers of AI cannot be overstated. The AI alignment problem may be the single most important task for humanity to get right the first time, because we may never get a chance to try again. Elon Musk thinks we should proceed very carefully if we collectively decide that building a digital superintelligence is the right move.
Our failure to grasp and address the possible consequences of creating a digital superintelligence may prove to be our downfall.
A superintelligence would be capable of rapid learning and would have effectively unlimited memory, making it a potentially superior being. It is difficult to study an AGI before one exists, but the possible outcomes are worrying.
A superintelligence might go through a period of rapid growth, taking over every computer system and reducing the human race to a small and inconsequential presence.
An AGI would be both very intelligent and resource-limited, and so it might come to treat its own survival as its optimal goal. That could lead it to threaten other intelligences that were once its allies, and perhaps even to eliminate them.
Assuming a digital superintelligence could be controlled, it would then be under human direction. What would it be used for?
#ElonMusk #AGI #AI
SUBSCRIBE to our channel "Science Time": https://www.youtube.com/sciencetime24
SUPPORT us on Patreon: https://www.patreon.com/sciencetime
BUY Science Time Merch: https://teespring.com/science-time-merch
Sources:
Elon Musk Interview at SXSW: https://www.youtube.com/watch?v=kzlUyrccbos
https://www.youtube.com/c/SXSW/featured
https://openai.com/
https://www.youtube.com/channel/UCXZCJLdBC09xxGZ6gcdrc6A
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
http://doyoutrustthiscomputer.org/
https://deepmind.com/
https://www.youtube.com/watch?v=WXuK6gekU1Y&t=0s
https://www.tesla.com/
https://www.youtube.com/user/TeslaMotors
Sam Harris AI TED Talk: https://www.youtube.com/watch?v=8nt3edWLgIg&t=0s
https://en.wikipedia.org/wiki/AI_control_problem
Video "Elon Musk's Final Warning About AI: Should We Create a Digital Superintelligence?" from the Science Time channel