Future of Life Institute Podcast
Future of Life Institute
252 episodes
1 day ago
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Technology
Can Defense in Depth Work for AI? (with Adam Gleave)
Future of Life Institute Podcast
1 hour 18 minutes
1 month ago
Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he discusses post-AGI scenarios and AI safety challenges. The conversation covers his three-tier framework for AI capabilities, concerns about gradual disempowerment, defense-in-depth security, and research on training less deceptive models. Topics include AI timelines, the limitations of interpretability, scalable oversight techniques, and FAR.AI's vertically integrated approach spanning technical research, policy advocacy, and field-building.


LINKS:
Adam Gleave - https://www.gleave.me
FAR.AI - https://www.far.ai
The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai


PRODUCED BY:

https://aipodcast.ing


CHAPTERS:

(00:00) A Positive Post-AGI Vision
(10:07) Surviving Gradual Disempowerment
(16:34) Defining Powerful AIs
(27:02) Solving Continual Learning
(35:49) The Just-in-Time Safety Problem
(42:14) Can Defense-in-Depth Work?
(49:18) Fixing Alignment Problems
(58:03) Safer Training Formulas
(01:02:24) The Role of Interpretability
(01:09:25) FAR.AI's Vertically Integrated Approach
(01:14:14) Hiring at FAR.AI
(01:16:02) The Future of Governance


SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
