From Artificial Intelligence to Superintelligence: Nick Bostrom on AI & The Future of Humanity
Artificial Superintelligence (ASI), sometimes referred to as digital superintelligence, is the advent of a hypothetical agent whose intelligence far surpasses that of the smartest and most gifted human minds. AI is a rapidly growing field of technology with the potential to bring huge improvements in human wellbeing. However, the development of machines with intelligence vastly superior to humans would pose special, perhaps even unique, risks.
Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when or how this will happen.
One only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:
- Intelligence is a product of information processing in physical systems.
- We will continue to improve our intelligent machines.
- We do not stand on the peak of intelligence or anywhere near it.
Philosopher Nick Bostrom has raised the question of what values a superintelligence should be designed to have.
A superintelligence of any kind could pursue its programmed goals rapidly, with little or no distribution of power to others. It might not take its designers into account at all, and the logic of its goals might not be reconcilable with human ideals. Its power could lie in making humans its servants rather than the other way around. If it were to succeed in this, it would “rule without competition under a dictatorship of one”.
Elon Musk has also warned that the global race toward AI could result in a third world war.
To avoid the ‘worst mistake in history’, it is necessary to understand the nature of an AI race and to steer clear of development paths that could lead to an unfriendly Artificial Superintelligence.
To ensure that artificial superintelligence is friendly, world leaders should work to make it beneficial to the entire human race.
#AI #ASI #AGI
SUBSCRIBE to our channel "Science Time": https://www.youtube.com/sciencetime24
SUPPORT us on Patreon: https://www.patreon.com/sciencetime
BUY Science Time Merch: https://teespring.com/science-time-merch
Sources:
Nick Bostrom Ted Talk: https://www.youtube.com/watch?v=MnT1xgZgkpk&t=0s
https://www.youtube.com/watch?v=h0962biiZa4&t=0s