EU Artificial Intelligence Act
Publication | 01.28.25
On June 13, 2024, the European Union (EU) adopted the Artificial Intelligence Act (EU AI Act), the world's first law to regulate artificial intelligence in a broad, horizontal manner. The historic measure applies to the development, deployment, and use of AI in the EU. Importantly, the EU AI Act has a certain "extra-territorial" effect, in that it applies to providers placing AI systems on the market in the EU even if those providers are established outside the EU.
The EU AI Act entered into force on August 1, 2024, but its provisions will start applying gradually, beginning in 2025. Below is a summary of the obligations set to become applicable in 2025, setting aside others that will go into effect on August 2, 2026. The provisions relating to high-risk AI systems within the product safety regulation regime outlined in Annex I will become applicable on August 2, 2027.
Obligations Effective in 2025
The year 2025 marks significant milestones for the EU AI Act, with critical dates in February and August that introduce new regulatory requirements.
Starting on February 2, 2025, the EU AI Act will apply to AI systems deemed prohibited due to their significant risk to the fundamental rights of EU citizens. Such AI systems include those designed for behavioral manipulation, social scoring by public authorities, and real-time remote biometric identification for law enforcement purposes. These systems will be banned outright to protect citizens’ rights and freedoms.
By August 2, 2025, providers of General-Purpose AI Models (GPAI models), including Large Language Models (LLMs), will face new obligations. A general-purpose AI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market” and it may serve as a basis for a “general-purpose AI system,” which in turn has “the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”
A GPAI model is classified as having systemic risk when it has “high impact capabilities” or when the European Commission has designated it as such.
Obligations for Providers of GPAI Models
Providers of GPAI models must comply with several obligations, including:
- Drawing up technical documentation;
- Providing information to providers of AI systems in which the GPAI model is integrated;
- Putting in place a policy to comply with EU copyright laws (in particular the “opt-out” provisions of the general text and data mining exception, to ensure that they have lawful access to copyrighted content and comply with rights reservations);
- Drawing up a sufficiently detailed summary of the content used for training the GPAI model;
- Cooperating with the EU Commission; and
- Relying on codes of practice to demonstrate compliance (until harmonized EU standards are published).
By the start of May 2025, the AI Office, a newly established entity within the European Commission, is expected to have released the code of practice for GPAI models. This document will clarify the practical application of the rules for providers.
Obligations for Providers of GPAI Models with Systemic Risk
In addition to the general obligations, providers of GPAI models identified as having systemic risks must:
- Perform model evaluation and identify, assess, and mitigate systemic risk;
- Track, document, remedy, and report serious incidents to the AI Office or the national competent authorities responsible for the enforcement of the EU AI Act in their Member State; and
- Ensure an adequate level of cybersecurity.
Non-Compliance Penalties
The EU AI Act outlines substantial administrative fines for non-compliance, which can reach up to 35 million EUR or 7 percent of a company’s total worldwide annual turnover, whichever is higher.
As we move towards the implementation and enforcement of the EU AI Act in 2025, AI providers and deployers should consider familiarizing themselves with these obligations to ensure compliance and to avoid significant penalties.