Future of Life Institute Podcast
Future of Life Institute
252 episodes
2 days ago
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Technology
Why Building Superintelligence Means Human Extinction (with Nate Soares)
Future of Life Institute Podcast
1 hour 39 minutes
1 month ago
Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.


LINKS:
If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
Machine Intelligence Research Institute - https://intelligence.org
Nate Soares - https://intelligence.org/team/nate-soares/

PRODUCED BY:

https://aipodcast.ing

CHAPTERS:

(00:00) Episode Preview
(01:05) Introduction and Book Discussion
(03:34) Psychology of AI Alarmism
(07:52) Intelligence Threshold Effects
(11:38) Growing vs Crafting AI
(18:23) Illusion of AI Control
(26:45) Why Iteration Won't Work
(34:35) The No Retries Problem
(38:22) Computer Security Lessons
(49:13) The Cursed Problem
(59:32) Multiple Curses and Complications
(01:09:44) AI's Infrastructure Advantage
(01:16:26) Grading Humanity's Response
(01:22:55) Time Needed for Solutions
(01:32:07) International Ban Necessity

SOCIAL LINKS:

Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
