How Anthropic’s AI Freed Itself from Human Control | Eliezer Yudkowsky
Full Episode: https://youtu.be/0QmDcQIvSDc
Main Channel: https://www.youtube.com/@robinsonerhardt
Robinson’s Podcast #251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity
Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut on questions related to AI and the danger it poses to human civilization as we know it. More particularly, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why and ways in which AI might wipe out human life on earth.
The Machine Intelligence Research Institute: https://intelligence.org/about/
Eliezer’s X Account: https://x.com/ESYudkowsky?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
Robinson's Website: http://robinsonerhardt.com
Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.
Video “How Anthropic’s AI Freed Itself from Human Control | Eliezer Yudkowsky” from the channel Robinson’s Podcast Clips
Video information
Published: May 31, 2025, 21:00:35
Duration: 00:08:25