Artificial Intelligence in Europe: The Trailblazing EU AI Act
Publication | 05.14.24
On Dec. 8, 2023, after lengthy and intense negotiations, European legislators reached a political agreement on the EU Artificial Intelligence Act (AI Act). The EU Parliament formally adopted its position during its plenary session of March 13, 2024, and after legal-linguistic finalization and formal adoption by the EU Council, the AI Act is expected to be published in the Official Journal of the EU before the end of Q2 2024.
Initially proposed by the European Commission in April 2021, the AI Act positions the EU as a trailblazer in regulating artificial intelligence, as it is the first significant, all-encompassing regulation in the world focused on the development and use of AI. It establishes a uniform legal framework across the EU, with the explicit goal of ensuring that AI used in the European market is legal, safe, and trustworthy. Given its extraterritorial scope of application, certain rules will extend beyond the EU's borders and thus have a global impact.
While the AI Act does not regulate technology as such and is in that sense technology-neutral, it sets rules for the development and use of AI in specific cases. The EU legislators have adopted a risk-based approach: AI systems posing minimal to no risk face no restrictions; limited-risk AI systems are subject to specific transparency obligations, such as the need to mark generated output as artificially generated or manipulated; heavily regulated "high-risk" AI systems will carry a more significant regulatory burden; and AI systems considered to pose an unacceptable risk to the health, safety, and fundamental rights of individuals, such as those that manipulate human behavior to circumvent free will, will be banned.
Organizations will need to conduct a thorough mapping of all AI systems to assess whether obligations apply. In doing so, organizations can build on much of the data mapping work that should have been done for GDPR compliance. Consequently, privacy professionals will play a pivotal role in compliance efforts related to AI.
A significant point of contention throughout the interinstitutional legislative process concerned general-purpose AI. While in the past AI-based applications were designed for specific tasks, recent years have seen the development of AI systems that can be employed for a wide array of tasks (including those previously unforeseen) with minimal modifications, thus serving a general purpose. This has led to the creation of "general-purpose AI models," which serve as the basis for a multitude of different applications. This type of development presents a so-called "single point of failure" risk: if there is a flaw in the model, it can affect all downstream applications built on it. The AI Act imposes specific obligations on providers of general-purpose AI models, and stricter obligations will apply to general-purpose AI models with systemic risk. A model is classified as posing systemic risk if it has (i) high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks, or (ii) capabilities or an impact equivalent to those set out in (i) with regard to specific defined criteria, based on a decision of the EU Commission, ex officio or following a qualified alert from the scientific panel.
The EU Commission and supervisory bodies to be created within the EU Member States will play a key role in the enforcement of the AI Act’s provisions.