So, after months watching the ongoing regulatory drama play out, today I'm diving straight into how the European Union's Artificial Intelligence Act, Regulation (EU) 2024/1689, is reshaping AI development, deployment, and day-to-day business, not just in Europe but globally. Since it entered into force on August 1, 2024, we've already seen the first two waves of its sweeping risk-based requirements crash onto the digital shores. First, in February 2025, the Act's notorious prohibitions and the much-debated AI literacy requirements kicked in. That means it is now illegal across the EU to place on the market or use AI systems designed to manipulate human behavior, conduct social scoring, or run real-time remote biometric identification in publicly accessible spaces, unless you're law enforcement operating under one of the Act's narrowly drawn exceptions. The massive fines, up to €35 million or 7 percent of worldwide annual turnover, whichever is higher, have certainly gotten everyone's attention, from Parisian startups to Palo Alto's megafirms.
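For the numerically minded, that "whichever is higher" rule means the penalty ceiling scales with company size. Here's a minimal illustrative sketch in Python, assuming turnover is known in euros; the function name is mine, but the figures simply restate Article 99(3):

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    # Article 99(3): EUR 35 million or 7% of worldwide annual
    # turnover for the preceding financial year, whichever is higher.
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A firm with EUR 10 billion in turnover faces a ceiling of
# EUR 700 million, not 35 million.
print(f"{max_fine_eur(10_000_000_000):,.0f}")

So for anyone but the smallest players, the percentage prong is the one that bites.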
Now, since August, the big change is for providers of general-purpose AI models. Think OpenAI, DeepMind, or their European challengers. They now have to maintain technical documentation, publish summaries of their training data, and put a copyright-compliance policy in place, in line with the European Commission's July guidelines and the new GPAI Code of Practice. Models posing "systemic risk" carry an extra duty: these are the models whose capabilities are so advanced (the Act uses a training-compute threshold of 10^25 floating-point operations as one trigger) that a failure or misuse could ripple dangerously across industries, and their providers must proactively assess and mitigate those risks. To support all of this, the EU introduced the Apply AI Strategy in September, which goes hand in hand with the launch of RAISE, the new virtual institute opening this month. RAISE aims to democratize access to the computational resources needed for large-model research, something tech researchers across Berlin and Barcelona are cautiously optimistic about.
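To make the documentation duty a bit more concrete, here's a minimal sketch of what a machine-readable training-data summary record might look like. This is purely illustrative: the field names below are assumptions for the example, and the Commission's official template defines the actual required content.

from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataSummary:
    # Hypothetical fields; the Commission's template is authoritative.
    model_name: str
    provider: str
    data_modalities: list[str]          # e.g. text, images, audio
    main_source_categories: list[str]   # e.g. web crawls, licensed corpora
    copyright_policy_url: str           # provider's copyright / opt-out policy
    collection_period: str              # rough time span of data collection

summary = TrainingDataSummary(
    model_name="example-model-1",
    provider="Example AI GmbH",
    data_modalities=["text"],
    main_source_categories=["public web crawl", "licensed news archives"],
    copyright_policy_url="https://example.com/copyright-policy",
    collection_period="2019-2024",
)

# Publish as JSON alongside the model card.
print(json.dumps(asdict(summary), indent=2))

The point is less the format than the discipline: providers now need this information organized and publishable, not scattered across internal wikis.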
But it's the incident reporting that's causing all the recent buzz, and a bit of panic. Since late September, with the Commission's draft guidance on Article 73 live, any provider or deployer of high-risk AI has to be ready to report "serious incidents", meaning actual harm rather than theoretical risk: death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringements of fundamental-rights obligations, or serious harm to property or the environment. Ireland, characteristically poised at the tech frontier, just set up a National AI Implementation Committee with its own office due next summer, but there's controversy brewing about how member states might interpret and enforce compliance differently. Brussels is pushing harmonization, but decentralized enforcement across the EU is already introducing gray zones.
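What does "being ready to report" look like in practice? Here's a hypothetical internal record for tracking an Article 73 notification, with the tiered deadlines as I read them from the Act; the class and field names are illustrative assumptions, not the official reporting template.

from dataclasses import dataclass
from datetime import date
from enum import Enum

class IncidentType(Enum):
    # Paraphrasing the Act's definition of a "serious incident".
    HARM_TO_PERSON = "death or serious harm to a person's health"
    INFRASTRUCTURE = "serious, irreversible disruption of critical infrastructure"
    FUNDAMENTAL_RIGHTS = "infringement of fundamental-rights obligations"
    PROPERTY_ENVIRONMENT = "serious harm to property or the environment"

@dataclass
class SeriousIncidentReport:
    ai_system: str
    deployer: str
    incident_type: IncidentType
    occurred_on: date
    became_aware_on: date
    description: str
    involves_death: bool = False

def reporting_deadline_days(report: SeriousIncidentReport) -> int:
    # Tiered maximum deadlines after awareness, per Article 73:
    # critical-infrastructure disruption: 2 days; incidents
    # involving a death: 10 days; other serious incidents: 15 days.
    if report.incident_type is IncidentType.INFRASTRUCTURE:
        return 2
    if report.involves_death:
        return 10
    return 15

The real work, of course, is the causal analysis and market-surveillance workflow behind a record like this, which is exactly where the draft guidance is supposed to help.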
If you're involved with AI on any level, it's almost impossible to ignore how the EU's risk-based, layered obligations, and the very real compliance deadlines, are forcing a global recalibration. Whether you see it as stifling or forward-thinking, the world is watching as Europe attempts to bake fundamental rights, safety, and transparency into the very core of machine intelligence. Thanks for tuning in, and remember to subscribe for more on the future of technology, policy, and society. This has been a Quiet Please production; for more, check out quietplease.ai.
Some great deals: https://amzn.to/49SJ3Qs
For more, check out http://www.quietplease.ai
This content was created in partnership and with the help of artificial intelligence (AI).