Scaling Laws
Lawfare & University of Texas Law School
181 episodes
4 days ago
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.

Hosted on Acast. See acast.com/privacy for more information.

Government, News, Politics, Tech News
Episodes (20/181)
Anthropic's Gabriel Nicholas Analyzes AI Agents

Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as AI tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. Their integration into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.”

4 days ago
48 minutes 50 seconds

The GoLaxy Revelations: China's AI-Driven Influence Operations, with Brett Goldstein, Brett Benson, and Renée DiResta

Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy.


The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.


Mentioned in this episode:


  • "The Era of A.I. Propaganda Has Arrived, and America Must Act" by Brett J. Goldstein and Brett V. Benson (New York Times, August 5, 2025)
  • "China Turns to A.I. in Information Warfare" by Julian E. Barnes (New York Times, August 6, 2025)
  • "The GoLaxy Papers: Inside China's AI Persona Army" by Dina Temple-Raston and Erika Gajda (The Record, September 19, 2025)
  • "The supply of disinformation will soon be infinite" by Renée DiResta (The Atlantic, September 2020)

1 week ago
55 minutes 26 seconds

Sen. Scott Wiener on California Senate Bill 53

California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.

The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53’s key provisions, and forecast what may be coming next in Sacramento and D.C.

2 weeks ago
49 minutes 6 seconds

AI and Energy: What do we know? What are we learning?

Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy lab, and Dan Zhao, AI researcher at MIT, GoogleX, and Microsoft focused on AI for science and sustainable and energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. 

They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI’s growing energy and environmental costs.

Leo Wu provided excellent research assistance on this podcast.

Read more from Mosharaf:

https://ml.energy/ 

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

Read more from Dan:

https://arxiv.org/abs/2310.03003

https://arxiv.org/abs/2301.11581

3 weeks ago
51 minutes 32 seconds

AI Safety Meets Trust & Safety with Ravi Iyer and David Sullivan

David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI.

They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals.

You’ll “like” (bad pun intended) this one.

Leo Wu provided excellent research assistance to prepare for this podcast.

Read more from David:

https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/

https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/

Read more from Ravi:

https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design

https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false 

Read more from Kevin:

https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach

1 month ago
46 minutes 40 seconds

Rapid Response: California Governor Newsom Signs SB-53
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB-53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed on September 29.

1 month ago
36 minutes 22 seconds

The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)

Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance.

The trio recorded this podcast live at the Institute for Humane Studies’s Technology, Liberalism, and Abundance Conference in Arlington, Virginia.


Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower

Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

1 month ago
42 minutes 33 seconds

AI and Young Minds: Navigating Mental Health Risks with Renee DiResta and Jess Miers
Alan Rozenshtein, Renee DiResta, and Jess Miers discuss the distinct risks that generative AI systems pose to children, particularly in relation to mental health. They explore the balance between the benefits and harms of AI, emphasizing the importance of media literacy and parental guidance. Recent developments in AI safety measures and ongoing legal implications are also examined, highlighting the evolving landscape of AI regulation and liability.

1 month ago
58 minutes 50 seconds

AI Copyright Lawsuits with Pam Samuelson

On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a “market dilution” effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate.

Mentioned in this episode:

  • "How to Think About Remedies in the Generative AI Copyright Cases"
  • by Pam Samuelson in Lawfare
  • Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith
  • Bartz v. Anthropic
  • Kadrey v. Meta Platforms
  • Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.
  • U.S. Copyright Office, Copyright and Artificial Intelligence, Part 3: Generative AI Training

1 month ago
59 minutes 5 seconds

AI and the Future of Work: Joshua Gans on Navigating Job Displacement

Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.


Select works by Gans include:

A Quest for AI Knowledge (https://www.nber.org/papers/w33566)

Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)

How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)

1 month ago
57 minutes 56 seconds

The State of AI Safety with Steven Adler

Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.

You can read Steven’s Substack here: https://stevenadler.substack.com/

Thanks to Leo Wu for research assistance!

2 months ago
47 minutes 23 seconds

Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. US

Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing contrasting and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and US. The trio start with an assessment of the EU’s use of the Brussels Effect, a term Anu coined, to shape AI development. They then explore the US’s increasingly interventionist industrial policy with respect to key sectors, especially tech.

Read more:

Anu’s op-ed in The New York Times

The Impact of Regulation on Innovation by Philippe Aghion, Antonin Bergeaud & John Van Reenen

Draghi Report on the Future of European Competitiveness

2 months ago
46 minutes 15 seconds

Uncle Sam Buys In: Examining the Intel Deal
Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House’s announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.”

2 months ago
47 minutes 34 seconds

AI in the Classroom with MacKenzie Price, Alpha School co-founder, and Rebecca Winthrop, leader of the Brookings Global Task Force on AI in Education

MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at home and abroad. MacKenzie walks through the use of predictive AI in Alpha School classrooms. Rebecca provides a high-level summary of ongoing efforts around the globe to bring AI into the education pipeline. This conversation is particularly timely in the wake of the AI Action Plan, which built on the Trump administration’s prior calls for greater use of AI from K to 12 and beyond.

Learn more about Alpha School here: https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html and here: https://www.astralcodexten.com/p/your-review-alpha-school


Learn about the Brookings Global Task Force on AI in Education here: https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/

2 months ago
1 hour 20 minutes 43 seconds

The Open Questions Surrounding Open Source AI with Nathan Lambert and Keegan McBride

Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore the current state of open source AI model development and associated policy questions.


The pivot to open source has been swift following initial concerns that the security risks posed by such models outweighed their benefits. What this transition means for the US AI ecosystem and the global AI competition is a topic worthy of analysis by these two experts.

2 months ago
45 minutes 17 seconds

Export Controls: Janet Egan, Sam Winter-Levy, and Peter Harrell on the White House's Semiconductor Decision

Alan Rozenshtein, research director at Lawfare, sat down with Sam Winter-Levy, a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace; Janet Egan, a senior fellow with the Technology and National Security Program at the Center for a New American Security; and Peter Harrell, a nonresident fellow at Carnegie and a former senior director for international economics at the White House National Security Council under President Joe Biden.

They discussed the Trump administration’s recent decision to allow U.S. companies Nvidia and AMD to export a range of advanced AI semiconductors to China in exchange for a 15% payment to the U.S. government. They talked about the history of the export control regime targeting China’s access to AI chips, the strategic risks of allowing China to acquire powerful chips like the Nvidia H20, and the potential harm to the international coalition that has worked to restrict China’s access to this technology. They also debated the statutory and constitutional legality of the deal, which appears to function as an export tax, a practice explicitly prohibited by the Constitution.

Mentioned in this episode:

  • The Financial Times article breaking the news about the Nvidia deal
  • The Trump Administration’s AI Action Plan

2 months ago
53 minutes 37 seconds

Navigating AI Policy: Dean Ball on Insights from the White House
Join us on Scaling Laws as we delve into the intricate world of AI policy with Dean Ball, former senior policy advisor at the White House's Office of Science and Technology Policy. Discover the behind-the-scenes insights into the Trump administration's AI Action Plan, the challenges of implementing AI policy at the federal level, and the evolving political landscape surrounding AI on the right. Dean shares his unique perspective on the opportunities and hurdles in shaping AI's future, offering a candid look at the intersection of technology, policy, and politics. Tune in for a thought-provoking discussion that explores the strategic steps America can take to lead in the AI era.

2 months ago
58 minutes 11 seconds

The Legal Maze of AI Liability: Anat Lior on Bridging Law and Emerging Tech
In this episode, we talk about the intricate world of AI liability through the lens of agency law. Join us as Anat Lior explores the compelling case for using agency law to address the legal challenges posed by AI agents. Discover how analogies, such as principal-agent relationships, can help navigate the complexities of AI liability, and why it's crucial to ensure that someone is held accountable when AI systems cause harm. Tune in for a thought-provoking discussion on the future of AI governance and the evolving landscape of legal responsibility.

2 months ago
46 minutes 41 seconds

Values in AI: Safety, Ethics, and Innovation with OpenAI's Brian Fuller
Brian Fuller, product policy leader at OpenAI, joins Kevin to discuss the challenges of designing policies that ensure AI technologies are safe, aligned, and socially beneficial, from the fast-paced landscape of AI development to balancing innovation with ethical responsibility. Tune in to gain insights into the frameworks that guide AI's integration into society and the critical questions that shape its future.

3 months ago
50 minutes 28 seconds

Because of Woke: Renée DiResta and Alan Rozenshtein on the ‘Woke AI’ Executive Order
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown, joins Alan Rozenshtein and Kevin Frazier to take a look at the Trump Administration’s Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. This episode unpacks the implications of prohibiting AI models that fail to pursue objective truth or that espouse "DEI" values.

3 months ago
46 minutes 48 seconds
