The Center for AI Policy Podcast zooms into the strategic landscape of AI and unpacks its implications for US policy.
This podcast is a publication from the Center for AI Policy (CAIP), a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.
#16: Gabe Alfour on Competing Beliefs About Superintelligence
Center for AI Policy Podcast
1 hour, 20 minutes, 55 seconds
7 months ago
Gabe Alfour, Chief Technology Officer at Conjecture, joined the podcast to discuss superintelligence, AI-accelerated science, the limits of technology, different perspectives on the future of AI, loss-of-control risks, AI racing, international treaties, and more.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
For a transcript and relevant links, visit the Center for AI Policy Podcast Substack.