Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.
LINKS:
- Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
- Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) Episode Preview
(01:00) Defining Human Creativity
(02:58) Machine and AI Creativity
(06:25) Measuring Subjective Creativity
(10:07) Creativity in Animals
(13:43) Alignment Damages Creativity
(19:09) Creativity is Hallucination
(26:13) Humble Creative Machines
(30:50) Incentives and Replacement
(40:36) Analogies for the Future
(43:57) Collaborating with AI
(52:20) Reinforcement Learning & Slop
(55:59) AI in Education
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
Parmy Olson is a technology columnist at Bloomberg and the author of "Supremacy," which won the 2024 Financial Times Business Book of the Year Award. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and the consolidation of power in the industry.
LINKS:
- Parmy Olson on X (Twitter): https://x.com/parmy
- Parmy Olson’s Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson
- Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) Episode Preview
(01:18) Introducing Parmy Olson
(02:37) Personalities Driving AI
(06:45) From Research to Products
(12:45) Has the Mission Changed?
(19:43) The Role of Regulators
(21:44) Skepticism of AI Utopia
(28:00) The Human Cost
(33:48) Embracing Controversy
(40:51) The Role of Journalism
(41:40) Big Tech's Influence
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he discusses post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, concerns about gradual disempowerment, defense-in-depth security, and research on training less deceptive models. Topics include timelines, the limits of interpretability, scalable oversight techniques, and FAR.AI’s vertically integrated approach spanning technical research, policy advocacy, and field-building.
LINKS:
Adam Gleave - https://www.gleave.me
FAR.AI - https://www.far.ai
The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) A Positive Post-AGI Vision
(10:07) Surviving Gradual Disempowerment
(16:34) Defining Powerful AIs
(27:02) Solving Continual Learning
(35:49) The Just-in-Time Safety Problem
(42:14) Can Defense-in-Depth Work?
(49:18) Fixing Alignment Problems
(58:03) Safer Training Formulas
(01:02:24) The Role of Interpretability
(01:09:25) FAR.AI's Vertically Integrated Approach
(01:14:14) Hiring at FAR.AI
(01:16:02) The Future of Governance
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
Beatrice Erkers runs the Existential Hope program at the Foresight Institute. She joins the podcast to discuss the AI Pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers the trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures.
LINKS:
AI Pathways - https://ai-pathways.existentialhope.com
Beatrice Erkers - https://www.existentialhope.com/team/beatrice-erkers
CHAPTERS:
(00:00) Episode Preview
(01:10) Introduction and Background
(05:40) AI Pathways Project
(11:10) Defining Tool AI
(17:40) Tool AI Benefits
(23:10) D/acc Pathway Explained
(29:10) Decentralization Trade-offs
(35:10) Combining Both Pathways
(40:10) Uncertainties and Concerns
(45:10) Future Evolution
(01:01:21) Funding Pilots
PRODUCED BY:
https://aipodcast.ing
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.
LINKS:
If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
Machine Intelligence Research Institute - https://intelligence.org
Nate Soares - https://intelligence.org/team/nate-soares/
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) Episode Preview
(01:05) Introduction and Book Discussion
(03:34) Psychology of AI Alarmism
(07:52) Intelligence Threshold Effects
(11:38) Growing vs Crafting AI
(18:23) Illusion of AI Control
(26:45) Why Iteration Won't Work
(34:35) The No Retries Problem
(38:22) Computer Security Lessons
(49:13) The Cursed Problem
(59:32) Multiple Curses and Complications
(01:09:44) AI's Infrastructure Advantage
(01:16:26) Grading Humanity's Response
(01:22:55) Time Needed for Solutions
(01:32:07) International Ban Necessity
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP