Branches of Philosophy Podcast
Philosophy Cognitive Science
211 episodes
1 day ago
AI-generated, human-edited. Introductions and summaries of important books in philosophy and the interdisciplinary cognitive sciences, modified and curated to improve the listening experience. This channel is not eligible for monetization due to YouTube's "reused content" policy. If you'd like to help, you can support us on Patreon.
Philosophy
Society & Culture
[215] If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (E. Yudkowsky, N. Soares)
Branches of Philosophy Podcast
59 minutes 44 seconds
1 week ago

AI-generated and human-edited. Introduction and summary of "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" by Eliezer Yudkowsky and Nate Soares (2025).

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next. For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close. How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
