On January 22, 2024, the pre-final text of the European Union’s Artificial Intelligence Act (“EU AI Act”) was leaked as an 892-page table comparing the institutions’ different negotiating mandates, followed by a 258-page document setting out the consolidated text. The final text of the AI Act was subsequently endorsed by all 27 EU Member States on February 2nd.
This act is set to be the world’s first comprehensive regulation governing the use of Artificial Intelligence (AI), aiming to ensure that “AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values”.
This marks a significant step towards regulating AI development and deployment within Europe. This blog post dissects the key takeaways of the Act, highlighting what’s new and comparing it to previous drafts.
Key Takeaways
Scope and Definitions
The Act retains the broad approach of earlier drafts, regulating AI systems across the EU on the basis of a definition of AI systems aligned with the OECD definition. It also specifies the addressees of the EU AI Act and the obligations of providers and deployers of AI systems.
However, it clarifies exclusions for military AI, scientific research, and open-source systems. The definition of AI systems remains largely unchanged, emphasizing their autonomy, adaptiveness, and ability to influence environments. A new addition is the definition of “GPAI models” (general-purpose AI models) capable of performing diverse tasks.
Prohibited AI Systems and Risk Based Approach
The ban on biometric identification systems for general use and the restriction on real-time remote biometric identification remain. Additionally, the Act prohibits AI systems that exploit vulnerabilities based on age, disability, or social or economic status.
The risk-based approach to high-risk AI systems persists. However, the classification mechanism now combines references to abstract definitions and specific listings in Annexes II and III of the Act.
General Purpose AI Models (GPAI Models)
A dedicated section addresses GPAI models posing “systemic risk”, identified by their high-impact capabilities or by exceeding a computational power threshold. Providers of such models face obligations including documentation, cooperation with authorities, and incident reporting.
A GPAI model is defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the EU market and that can be integrated into a variety of downstream systems or applications” (Art. 3(1)(44b) EU AI Act).
The classification of GPAI models with systemic risk is addressed in Art. 52a EU AI Act. A GPAI model is considered to pose a systemic risk if it has high impact capabilities or is identified as such by the Commission. A GPAI model is presumed to have high impact capabilities if the amount of computational power, measured in floating point operations (FLOPs), is greater than 10^25.
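To give a feel for the scale of the 10^25 FLOPs presumption, here is a minimal illustrative sketch. The 6 × parameters × tokens approximation of training compute is a common industry rule of thumb, not part of the Act, and the model figures below are invented for illustration only.

```python
# Illustrative check of the systemic-risk compute presumption (Art. 52a):
# a GPAI model is presumed to have high-impact capabilities if its
# cumulative training compute exceeds 10^25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough rule-of-thumb estimate (not from the Act):
    ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def presumed_high_impact(training_flops: float) -> bool:
    """Presumption of high-impact capabilities under the pre-final text."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_high_impact(flops)}")
# → 8.40e+23 FLOPs -> systemic risk presumed: False
```

As the sketch suggests, models of this (already large) hypothetical size would fall well below the threshold; only the very largest frontier models trained to date approach 10^25 FLOPs.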
The relevant provider of a GPAI model is required to notify the Commission without delay, and in any event within two weeks, after those requirements are met or once it knows that they will be met. The Commission will publish and regularly update a list of AI models with systemic risk, without prejudice to the need to respect and protect intellectual property rights and confidential commercial information or business secrets.
Deep Fakes
The definition of “deep fakes” as manipulated content resembling real entities or events remains unchanged.
However, the text sets out transparency obligations for providers and deployers of certain AI systems and GPAI models that are stricter than in some previous drafts of the EU AI Act. These include disclosure obligations for deployers of deep fakes, subject to exceptions where the use is authorized by law to detect, prevent, investigate, or prosecute criminal offences. Where the content forms part of an evidently artistic work, the transparency obligations are limited to disclosing the existence of the generated or manipulated content in a way that does not hamper the display or enjoyment of the work (Art. 52(3) EU AI Act).
Penalties (Art. 71 EU AI Act)
The penalty structure remains similar, with maximum fines adjusted slightly. A dedicated penalty regime exists for GPAI models, emphasizing responsibility of providers.
| Maximum penalty | Older drafts | Pre-final text |
| --- | --- | --- |
| General violations | €30 million or 5% of annual turnover | €35 million or 7% of annual turnover |
| GPAI violations | — | €15 million or 3% of annual turnover |
Next Steps
The pre-final text was endorsed by all 27 Member States on 2 February. The torch now passes to the European Parliament for adoption of the pre-final text, with a plenary vote provisionally scheduled for 10–11 April.
End note
The EU AI Act will certainly not mark the end of AI regulation in Europe. AI is evolving quickly, and legislation will always struggle to keep up.
Adopting and using AI offers companies many opportunities, but AI also poses significant risks to a company’s reputation. It is therefore important not only to comply with current legislation, but also to keep sustainable use of AI in mind, so that your use of AI remains compliant even as the legislation changes. CRANIUM can be your partner in this, guiding you through the changing legislation.