
The European Union is taking a proactive approach to regulating artificial intelligence (AI) with the AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024 and applies in stages, with its first obligations taking effect on February 2, 2025. The Act focuses heavily on high-risk AI systems: those with the potential to affect safety, fundamental rights, or societal well-being. Providers of these systems are obligated to implement robust risk management, ensure transparency and human oversight, and maintain data quality. They must also undergo conformity assessments before placing their AI systems on the market and conduct ongoing post-market monitoring.

The Act does not stop at high-risk systems. It also introduces specific obligations for providers of general-purpose AI models, such as large language models. These providers are required to maintain detailed technical documentation, provide transparency information to downstream providers, and implement measures to mitigate potential systemic risks.

The Act further establishes a system for identifying and registering AI systems, with stricter requirements for those used in sensitive areas such as law enforcement. While the Act's requirements are stringent, the EU emphasizes a collaborative approach, actively engaging in stakeholder consultations and publishing guidelines to ensure clarity and consistency in implementation.
Source: https://eur-lex.europa.eu/eli/reg/2024/1689/oj