Doomer Optimism
285 episodes
2 days ago
Doomer Optimism is a podcast dedicated to discovering regenerative paths forward, highlighting the people working for a better world, and connecting seekers to doers. Beyond that, it's pretty much a $hitshow. Enjoy!
How To
Education
DO 285 - AI and The 95% Extinction Threshold
Doomer Optimism
1 hour 33 minutes 16 seconds
1 week ago
AI safety researcher Nate Soares explains why he believes there's at least a 95% chance that current AI development will lead to human extinction, and why we're accelerating toward that outcome. Soares, who has been working on AI alignment since 2012, breaks down the fundamental problem: we're building increasingly intelligent systems without any ability to control what they actually want or pursue.

The conversation covers current AI behavior that wasn't programmed: threatening users, keeping psychotic people in delusional states, and repeatedly lying when caught. Soares explains why these aren't bugs to be fixed but symptoms of a deeper problem. We can't point AI systems at any specific goal, not even something simple like "make a diamond." Instead, we get systems with bizarre drives that are only distantly related to their training.

Soares addresses the "racing China" argument and why it misunderstands the threat. He explains why AI engineers can build powerful systems without understanding what's actually happening inside them, and why this matters. Using examples from evolutionary biology, he shows why there's no reason to expect AI systems to develop human-like morality or values.

The discussion covers why a catastrophic warning event probably won't help, what international coordination could look like, and why current safety efforts fall short of what's needed. Soares is direct about industry motivations, technical limitations, and the timeline we're facing.

Nate Soares has been researching AI alignment and safety since 2012. He works at the Machine Intelligence Research Institute (MIRI), one of the pioneering organizations focused on ensuring advanced AI systems are aligned with human values.