The Center for AI Policy Podcast zooms into the strategic landscape of AI and unpacks its implications for US policy.
This podcast is a publication from the Center for AI Policy (CAIP), a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.
#10: Stephen Casper on Technical and Sociotechnical AI Safety Research
Center for AI Policy Podcast
59 minutes 46 seconds
1 year ago
Stephen Casper, a computer science PhD student at MIT, joined the podcast to discuss AI interpretability, red-teaming and robustness, evaluations and audits, reinforcement learning from human feedback, Goodhart’s law, and more.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
For a transcript and relevant links, visit the Center for AI Policy Podcast Substack.