Hosted on Acast. See acast.com/privacy for more information.
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as AI tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.”
Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy.
The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.
Mentioned in this episode:
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.
The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53’s key provisions, and forecast what may be coming next in Sacramento and D.C.
Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy lab, and Dan Zhao, AI researcher at MIT, GoogleX, and Microsoft focused on AI for science and sustainable and energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI.
They break down exactly how much energy a single ChatGPT query consumes, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI’s growing energy and environmental costs.
Leo Wu provided excellent research assistance on this podcast.
Read more from Mosharaf:
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI.
They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals.
You’ll “like” (bad pun intended) this one.
Leo Wu provided excellent research assistance to prepare for this podcast.
Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/
Read more from Ravi:
Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance.
The trio recorded this podcast live at the Institute for Humane Studies’ Technology, Liberalism, and Abundance Conference in Arlington, Virginia.
Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/
On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a “market dilution” effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate.
Mentioned in this episode:
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.
Select works by Gans include:
A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.
You can read Steven’s Substack here: https://stevenadler.substack.com/
Thanks to Leo Wu for research assistance!
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the contrasting and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and the US. The trio start with an assessment of the EU’s use of the Brussels Effect, a term coined by Anu, to shape AI development. Next, they explore the US’s increasingly interventionist industrial policy with respect to key sectors, especially tech.
Read more:
Anu’s op-ed in The New York Times
The Impact of Regulation on Innovation by Philippe Aghion, Antonin Bergeaud & John Van Reenen
Draghi Report on the Future of European Competitiveness
MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at home and abroad. MacKenzie walks through the use of predictive AI in Alpha School classrooms. Rebecca provides a high-level summary of ongoing efforts around the globe to bring AI into the education pipeline. This conversation is particularly timely in the wake of the AI Action Plan, which built on the Trump administration’s prior calls for greater use of AI from K to 12 and beyond.
Learn more about Alpha School here: https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html and here: https://www.astralcodexten.com/p/your-review-alpha-school
Learn about the Brookings Global Task Force on AI in Education here: https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/
Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore the current state of open source AI model development and associated policy questions.
The pivot to open source has been swift, following initial concerns that the security risks posed by such models outweighed their benefits. What this transition means for the US AI ecosystem and for global AI competition is a topic worthy of analysis by these two experts.
Alan Rozenshtein, research director at Lawfare, sat down with Sam Winter-Levy, a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace; Janet Egan, a senior fellow with the Technology and National Security Program at the Center for a New American Security; and Peter Harrell, a nonresident fellow at Carnegie and a former senior director for international economics at the White House National Security Council under President Joe Biden.
They discussed the Trump administration’s recent decision to allow U.S. companies Nvidia and AMD to export a range of advanced AI semiconductors to China in exchange for a 15% payment to the U.S. government. They talked about the history of the export control regime targeting China’s access to AI chips, the strategic risks of allowing China to acquire powerful chips like the Nvidia H20, and the potential harm to the international coalition that has worked to restrict China’s access to this technology. They also debated the statutory and constitutional legality of the deal, which appears to function as an export tax, a practice explicitly prohibited by the Constitution.
Mentioned in this episode: