
The United States is undergoing a legislative flood: in 2025 alone, 38 states enacted roughly 100 concrete legislative measures on artificial intelligence. This is not philosophical debate; these are laws with direct consequences for liability, compliance, and the future definition of work. We synthesize this legislative wave into a blueprint of where AI regulation is hitting the private sector.
Beneath the technical mandates, states are preemptively grappling with the future legal status of advanced AI:
Defining Non-Personhood: States such as Oregon (H 2748) are drawing a hard line, explicitly prohibiting non-human entities (including AI agents) from using licensed professional titles such as Registered Nurse. This reinforces that accountability and licensure must rest with a human professional.
Preemptive Exclusion: States like Tennessee are proactively defining foundational legal terms such as "human being" and "natural person" to explicitly exclude AI and algorithms. This reflects concern about the future legal status of advanced intelligence, ensuring that a machine cannot claim the rights or status of a person.
Legislators moved quickly to criminalize AI capabilities that cause immediate, widespread societal harm:
Closing the CSAM Loophole: States (including Kansas, Minnesota, and Texas) redefined child pornography to explicitly include AI or computer-generated material that is "indistinguishable from an actual minor," effectively closing the defense argument that "no child was harmed in making this."
Digital Replication Rights: States like Arkansas and Utah created or updated publicity rights to cover digital likenesses, essentially treating your voice, face, and movements as a defensible property right against unauthorized commercial use by AI.
Election Integrity: The goal is mandatory transparency. South Dakota and North Dakota prohibit using deepfakes to influence an election unless a prominent, specific disclaimer is included, backed by criminal penalties (e.g., a Class 1 misdemeanor).
Regulation is targeting high-impact sectors where algorithms are replacing human judgment, ensuring human interests remain paramount:
Housing (Digital Price Fixing): States (including New York, Illinois, and New Hampshire) are moving to prevent digital price-fixing cartels. They are regulating algorithms trained on or fed nonpublic competitor data that could effectively act as a central coordinator, achieving collusion without any explicit agreement among competitors. New York goes further, extending liability to the software providers themselves.
Healthcare (Claims Denial): To protect patients from cost-saving algorithms, states (like Connecticut and Indiana) are strengthening utilization review laws. They are creating a rebuttable presumption that a service ordered by a doctor is medically necessary, placing the burden on the insurer's algorithm to prove otherwise.
Workforce Displacement: Proposed legislation in New York would impose a tax or surcharge on corporations for employees displaced by AI, with the proceeds funding transition programs. Meanwhile, states like Iowa are taking the opposite tack, focusing on re-skilling by funding STEM and AI education expansion to prepare the workforce proactively.
Before regulating the private sector, states are focusing on getting their own house in order and managing AI's massive energy footprint:
Government Transparency: States like Kentucky are mandating clear and conspicuous disclaimers when AI is used to make decisions about citizen benefits or services, so citizens know when an algorithm was involved. Utah and California are mandating human oversight and certification for AI-assisted police reports.
The most profound insight is that while grappling with technical regulation, US states are preemptively deciding the future legal status of artificial intelligence.