The Lawfare Podcast features discussions with experts, policymakers, and opinion leaders at the nexus of national security, law, and policy. On issues from foreign policy, homeland security, intelligence, and cybersecurity to governance and law, we have doubled down on seriousness at a time when others are running away from it. Visit us at www.lawfareblog.com.
Support this show http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.

Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.