Future of Life Institute Podcast
Future of Life Institute
251 episodes
21 hours ago
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Technology
Episodes (20/251)
Can Machines Be Truly Creative? (with Maya Ackerman)

Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.

LINKS:
- Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
- Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/


PRODUCED BY:

https://aipodcast.ing


CHAPTERS:
(00:00) Episode Preview
(01:00) Defining Human Creativity
(02:58) Machine and AI Creativity
(06:25) Measuring Subjective Creativity
(10:07) Creativity in Animals
(13:43) Alignment Damages Creativity
(19:09) Creativity is Hallucination
(26:13) Humble Creative Machines
(30:50) Incentives and Replacement
(40:36) Analogies for the Future
(43:57) Collaborating with AI
(52:20) Reinforcement Learning & Slop
(55:59) AI in Education


SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

1 week ago
1 hour 1 minute

From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)

Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry.

LINKS:
- Parmy Olson on X (Twitter): https://x.com/parmy
- Parmy Olson’s Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson
- Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244

PRODUCED BY:
https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:18) Introducing Parmy Olson
(02:37) Personalities Driving AI
(06:45) From Research to Products
(12:45) Has the Mission Changed?
(19:43) The Role of Regulators
(21:44) Skepticism of AI Utopia
(28:00) The Human Cost
(33:48) Embracing Controversy
(40:51) The Role of Journalism
(41:40) Big Tech's Influence

SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

3 weeks ago
46 minutes

Can Defense in Depth Work for AI? (with Adam Gleave)

Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI’s vertically integrated approach spanning technical research, policy advocacy, and field-building.


LINKS:
Adam Gleave - https://www.gleave.me
FAR.AI - https://www.far.ai
The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai


PRODUCED BY:

https://aipodcast.ing


CHAPTERS:

(00:00) A Positive Post-AGI Vision
(10:07) Surviving Gradual Disempowerment
(16:34) Defining Powerful AIs
(27:02) Solving Continual Learning
(35:49) The Just-in-Time Safety Problem
(42:14) Can Defense-in-Depth Work?
(49:18) Fixing Alignment Problems
(58:03) Safer Training Formulas
(01:02:24) The Role of Interpretability
(01:09:25) FAR.AI's Vertically Integrated Approach
(01:14:14) Hiring at FAR.AI
(01:16:02) The Future of Governance


SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

1 month ago
1 hour 18 minutes

How We Keep Humans in Control of AI (with Beatrice Erkers)

Beatrice Erkers works at the Foresight Institute, where she runs its Existential Hope program. She joins the podcast to discuss the AI Pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures.

LINKS:
AI Pathways - https://ai-pathways.existentialhope.com
Beatrice Erkers - https://www.existentialhope.com/team/beatrice-erkers

CHAPTERS:
(00:00) Episode Preview
(01:10) Introduction and Background
(05:40) AI Pathways Project
(11:10) Defining Tool AI
(17:40) Tool AI Benefits
(23:10) D/acc Pathway Explained
(29:10) Decentralization Trade-offs
(35:10) Combining Both Pathways
(40:10) Uncertainties and Concerns
(45:10) Future Evolution
(01:01:21) Funding Pilots


PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

1 month ago
1 hour 6 minutes

Why Building Superintelligence Means Human Extinction (with Nate Soares)

Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.


LINKS:
If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
Machine Intelligence Research Institute - https://intelligence.org
Nate Soares - https://intelligence.org/team/nate-soares/

PRODUCED BY:

https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:05) Introduction and Book Discussion
(03:34) Psychology of AI Alarmism
(07:52) Intelligence Threshold Effects
(11:38) Growing vs Crafting AI
(18:23) Illusion of AI Control
(26:45) Why Iteration Won't Work
(34:35) The No Retries Problem
(38:22) Computer Security Lessons
(49:13) The Cursed Problem
(59:32) Multiple Curses and Complications
(01:09:44) AI's Infrastructure Advantage
(01:16:26) Grading Humanity's Response
(01:22:55) Time Needed for Solutions
(01:32:07) International Ban Necessity

SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

1 month ago
1 hour 39 minutes

Breaking the Intelligence Curse (with Luke Drago)
Luke Drago is the co-founder of Workshop Labs and co-author of the essay series "The Intelligence Curse". The essay series explores what happens if AI becomes the dominant factor of production, thereby reducing incentives to invest in people. We explore pyramid replacement in firms, economic warning signs to monitor, automation barriers like tacit knowledge, privacy risks in AI training, and tensions between centralized AI safety and democratization. Luke discusses Workshop Labs' privacy-preserving approach and advises taking career risks during this technological transition.

"The Intelligence Curse" essay series by Luke Drago & Rudolf Laine: https://intelligence-curse.ai/
Luke's Substack: https://lukedrago.substack.com/
Workshop Labs: https://workshoplabs.ai/

CHAPTERS:
(00:00) Episode Preview
(00:55) Intelligence Curse Introduction
(02:55) AI vs Historical Technology
(07:22) Economic Metrics and Indicators
(11:23) Pyramid Replacement Theory
(17:28) Human Judgment and Taste
(22:25) Data Privacy and Control
(28:55) Dystopian Economic Scenario
(35:04) Resource Curse Lessons
(39:57) Culture vs Economic Forces
(47:15) Open Source AI Debate
(54:37) Corporate Mission Evolution
(59:07) AI Alignment and Loyalty
(01:05:56) Moonshots and Career Advice
1 month ago
1 hour 9 minutes

What Markets Tell Us About AI Timelines (with Basil Halperin)
Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress.

* Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf
* Read more about Basil's work here: https://basilhalperin.com/

CHAPTERS:
(00:00) Episode Preview
(00:49) Introduction and Background
(05:19) Efficient Market Hypothesis Explained
(10:34) Markets and Low Probability Events
(16:09) Information Diffusion on Wall Street
(24:34) Stock Prices vs Interest Rates
(28:47) New Goods Counter-Argument
(40:41) Why Focus on Interest Rates
(45:00) AI Secrecy and Market Efficiency
(50:52) Short Timeline Disagreements
(55:13) Wealth Concentration Effects
(01:01:55) Alternative Economic Indicators
(01:12:47) Benchmarks vs Economic Impact
(01:25:17) Open Research Questions

SOCIAL LINKS:
Website: https://future-of-life-institute-podcast.aipodcast.ing
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

PRODUCED BY:
https://aipodcast.ing
2 months ago
1 hour 36 minutes

AGI Security: How We Defend the Future (with Esben Kran)
Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments.

Learn more about Esben's work at: https://blog.kran.ai

00:00 – Intro and preview
01:13 – AGI security vs traditional cybersecurity
02:36 – Rebuilding societal infrastructure for embedded security
03:33 – Sentware: adaptive, self-improving malware
04:59 – New attack surfaces
05:38 – Social media as misaligned AI
06:46 – Personal vs societal defenses
09:13 – Why private companies underinvest in security
13:01 – Security as the foundation for any AI deployment
14:15 – Oversight without a surveillance state
17:19 – Protocols for safe agent communication
20:25 – The expensive internet hypothesis
23:30 – Distributed safety for companies and governments
28:20 – Cloudflare's "agent labyrinth" example
31:08 – Positive vision for distributed security
33:49 – Human value when labor is automated
41:19 – Encoding law for machines: contracts and enforcement
44:36 – DarkBench: detecting manipulative LLM behavior
55:22 – The AGI endgame: default path vs designed future
57:37 – Powerful tool AI
01:09:55 – Fast takeoff risk
01:16:09 – Realistic optimism
2 months ago
1 hour 18 minutes

Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.

Follow Benjamin's work at: https://benjamintodd.substack.com

Timestamps:
00:00 What are reasoning models?
04:04 Reinforcement learning supercharges reasoning
05:06 Reasoning models vs. agents
10:04 Economic impact of automated math/code
12:14 Compute as a bottleneck
15:20 Shift from giant pre-training to post-training/agents
17:02 Three feedback loops: algorithms, chips, robots
20:33 How fast could an algorithmic loop run?
22:03 Chip design and production acceleration
23:42 Industrial/robotics loop and growth dynamics
29:52 Society's slow reaction; "warning shots"
33:03 Robotics: software and hardware bottlenecks
35:05 Scaling robot production
38:12 Robots at ~$0.20/hour?
43:13 Regulation and humans-in-the-loop
49:06 Personal prep: why it still matters
52:04 Build an information network
55:01 Save more money
58:58 Land, real estate, and scarcity in an AI world
01:02:15 Valuable skills: get close to AI, or far from it
01:06:49 Fame, relationships, citizenship
01:10:01 Redistribution, welfare, and politics under AI
01:12:04 Try to become more resilient
01:14:36 Information hygiene
01:22:16 Seven-year horizon and scaling limits by ~2030
2 months ago
1 hour 27 minutes

From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines.

Learn more about Calum's work here: https://calumchace.com

Timestamps:
00:00:00 Preview and intro
00:03:02 Past tech revolutions and AI-driven unemployment
00:05:43 Cognitive automation: from secretaries to every job
00:08:02 The "peak horse" analogy and avoiding human obsolescence
00:10:55 Infinite demand and lump of labor
00:18:30 Fully-automated luxury capitalism
00:23:31 Abundance economy and a potential employment cliff
00:29:37 Education reimagined with personalized AI tutors
00:36:22 Real-world uses of LLMs: memory, drafting, emotional insight
00:42:56 Meaning beyond jobs: aristocrats, retirees, and kids
00:49:51 Four futures of superintelligence
00:57:20 Conscious AI and empathy as a safety strategy
01:10:55 Verifying AI agents
01:25:20 Over-attributing vs under-attributing machine consciousness
3 months ago
1 hour 37 minutes

How AI Could Help Overthrow Governments (with Tom Davidson)
On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats.

Learn more about Tom's work here: https://www.forethought.org

Timestamps:
00:00:00 Preview: why preventing AI-enabled coups matters
00:01:24 What do we mean by an "AI-enabled coup"?
00:01:59 Capabilities AIs would need (persuasion, strategy, productivity)
00:02:36 Cyber-offense and the road to robotized militaries
00:05:32 Step-by-step example of an AI-enabled military coup
00:08:35 How AI-enabled coups would differ from historical coups
00:09:24 Democratic backsliding (Venezuela, Hungary, U.S. parallels)
00:12:38 Singular loyalties, secret loyalties, exclusive access
00:14:01 Secret-loyalty scenario: CEO with hidden control
00:18:10 From sleeper agents to sophisticated covert AIs
00:22:22 Exclusive-access threat: one project races ahead
00:29:03 Could one country outgrow the rest of the world?
00:40:00 Could a single company dominate global GDP?
00:47:01 Autocracies vs democracies
00:54:43 Mitigations for singular and secret loyalties
01:06:25 Guardrails, monitoring, and controlled-use APIs
01:12:38 Using AI itself to preserve checks-and-balances
01:24:53 Risk indicators to watch for AI-enabled coups
01:33:05 Tom's risk estimates for the next 5 and 30 years
01:46:50 How you can help – research, policy, and careers
3 months ago
1 hour 53 minutes

What Happens After Superintelligence? (with Anders Sandberg)
Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks.

Learn more about Anders's work here: https://mimircenter.org/anders-sandberg

Timestamps:
00:00:00 Preview and intro
00:04:20 2030 superintelligence scenario
00:11:55 Status, post-scarcity, and reshaping human psychology
00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks
00:23:48 Technosphere vs biosphere
00:28:42 Culture and physics as long-run drivers of civilization
00:40:38 How superintelligence could upend markets and governments
00:50:01 State inertia: why governments lag behind companies
00:59:06 Value lock-in, censorship, and model alignment
01:08:32 Emergent AI ecosystems and coordination-failure risks
01:19:34 Predictability vs reliability: designing safe systems
01:30:32 Crossing the reliability threshold
01:38:25 Personal reflections on accelerating change
3 months ago
1 hour 44 minutes

Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting and its role in anticipating AI's future trajectory.

You can learn more about Daniel's work at: https://ai-2027.com and https://ai-futures.org

Timestamps:
00:00:00 Preview and intro
00:00:50 Why AI will eclipse the Industrial Revolution
00:09:48 How much can AI speed up AI research?
00:16:13 Automated coding and diffusion
00:27:37 Transparency in AI development
00:34:52 Deploying AI internally
00:40:24 Communication between AIs
00:49:23 Is AI inherently risky?
00:59:54 Iterative forecasting
4 months ago
1 hour 10 minutes

Preparing for an AI Economy (with Daniel Susskind)
On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI's economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.

You can learn more about Daniel's work here: https://www.danielsusskind.com

Timestamps:
00:00:00 Preview and intro
00:03:19 AI researchers versus economists
00:10:39 Measuring AI's economic effects
00:16:19 Can AI be steered in positive directions?
00:22:10 Human values and economic outcomes
00:28:21 What will remain for people to do?
00:44:58 Commercial incentives in AI
00:50:38 Will education move towards general skills?
00:58:46 Lessons for parents
4 months ago
1 hour 3 minutes

Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed's decision to resign from Stability AI, the industry's attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.

Learn more about Ed's work here: https://ed.newtonrex.com

Timestamps:
00:00:00 Preview and intro
00:04:18 AI-generated music
00:12:15 Resigning from Stability AI
00:16:20 AI industry attitudes towards rights
00:26:22 Fairly Trained
00:37:16 Special kinds of training data
00:50:42 The longer-term future of AI
00:56:09 Will AI improve living standards?
01:03:10 AI versions of artists
01:13:28 Authenticity and art
01:18:45 Competitive pressures in AI
01:24:06 Priorities going forward
4 months ago
1 hour 27 minutes

AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI's development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies' vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines.

Timestamps:
00:00:00 Preview and intro
00:00:46 What do benchmarks measure?
00:08:08 Will AI develop like other tech?
00:14:13 Which tasks can AIs do?
00:23:00 Capability profiles of AIs
00:34:04 Timelines and social effects
00:42:01 Alignment by default?
00:50:36 Can vague AGI plans be useful?
00:54:36 The fast world and the slow world
01:08:02 Long-term projects and short timelines
4 months ago
1 hour 15 minutes

Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism.

Timestamps:
00:00:00 Preview and intro
00:01:05 Understanding is dual-use
00:05:17 Can we handle AI like other tech?
00:12:08 Can institutions adapt to AI?
00:16:50 Recognizing signs of dangerous AI
00:22:45 Agents versus tools
00:25:43 Power is latent in the world
00:35:45 Widespread powerful hardware
00:42:09 Governance mechanisms for AI
00:53:55 Deep atheism and optimistic cosmism
5 months ago
1 hour 1 minute

Facing Superintelligence (with Ben Goertzel)
On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.

Timestamps:
00:00:00 Preview and intro
00:01:59 Thinking about AGI in the 1970s
00:07:28 What's different about this AI boom?
00:16:10 Former taboos about AGI
00:19:53 AI research worth revisiting
00:35:53 Will the first AGI be simple?
00:48:49 Is alignment achievable?
01:02:40 Benchmarks and economic impact
01:15:23 Bottlenecks to superintelligence
01:23:09 What should we do?
5 months ago
1 hour 32 minutes

Will Future AIs Be Conscious? (with Jeff Sebo)
On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively.

You can follow Jeff's work here: https://jeffsebo.net/

Timestamps:
00:00:00 Preview and intro
00:02:56 Imagining artificial consciousness
00:07:51 Substrate-independence?
00:11:26 Are we making progress?
00:18:03 Intuitions about explanations
00:24:43 AI risk and AI consciousness
00:40:01 Consciousness and cognitive complexity
00:51:20 Intuition versus intellect
00:58:48 AIs as companions
01:05:24 AI rights
01:13:00 Acting under time pressure
01:20:16 Measuring consciousness
01:32:11 How can you help?
5 months ago
1 hour 34 minutes

Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity's uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI's growing influence in financial trading.

You can follow Zvi's excellent blog here: https://thezvi.substack.com

Timestamps:
00:00:00 Preview and introduction
00:02:01 Sycophantic AIs
00:07:28 Bottlenecks for AI agents
00:21:26 Are benchmarks useful?
00:32:39 AI agent time horizons
00:44:18 Impact of automating research
00:53:00 Limits to scaling inference compute
01:02:51 Will the future go well for humanity?
01:12:22 A good plan for safe AI
01:26:03 What makes AI different?
01:31:29 AI in trading
6 months ago
1 hour 35 minutes

Future of Life Institute Podcast