The Center for AI Policy Podcast zooms into the strategic landscape of AI and unpacks its implications for US policy.
This podcast is a publication from the Center for AI Policy (CAIP), a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.
#12: Michael K. Cohen on Regulating Advanced Artificial Agents
Center for AI Policy Podcast
43 minutes 50 seconds
1 year ago
Dr. Michael K. Cohen, a postdoctoral AI safety researcher at UC Berkeley, joined the podcast to discuss OpenAI's superalignment research, reinforcement learning and imitation learning, the potential dangers of advanced future AI agents, policy proposals to address long-term planning agents, academic discourse on AI risks, California's SB 1047 bill, and more.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
For a transcript and relevant links, visit the Center for AI Policy Podcast Substack.