
The AI Act’s New Guidelines on General-Purpose AI Models (GPAI)

The European Commission’s July 2025 Guidelines on the Scope of the Obligations for General-Purpose AI Models (GPAI) mark the next interpretative step in implementing Chapter V of the AI Act. These guidelines clarify when a model becomes “general-purpose” and under what conditions it poses systemic risk, with the corresponding obligations applying to GPAI providers from 2 August 2025.

For models placed on the market before 2 August 2025, the AI Office recognises practical challenges and supports steps to reach full compliance by 2 August 2027.

In this blogpost, we will discuss the implications and takeaways of these guidelines.

Context

The AI Act (Regulation 2024/1689), in force since August 2024, created a layered framework for AI regulation, covering everything from high-risk AI systems (like recruitment or biometric systems) to the powerful models that underpin them. The GPAI Guidelines were published to interpret Chapter V, which introduces rules specific to general-purpose AI models.

Their purpose is twofold:

  1. To help AI model developers determine whether their model qualifies as a General-Purpose AI (GPAI); and
  2. To clarify when such a model crosses the threshold into a “systemic risk” GPAI, triggering additional obligations.

The guidelines provide the Commission’s enforcement interpretation, so while they are not legally binding, they will guide compliance checks and investigations by the AI Office.

AI Model vs. AI System

AI Models: the brains

An AI model is the mathematical engine that makes predictions, generates content, or recognises patterns. Technically, it’s the result of training an algorithm on data, so it consists of the model architecture (the design) and the parameters, or weights (the numbers learned during training that govern how inputs are transformed into outputs).

A model can be seen as the core intelligence, a bit like the engine in a car or the brain of a robot. By itself, it can perform computations and reasoning, but it doesn’t interact directly with users or the world.

AI Systems: the bodies

An AI system, on the other hand, is what happens when you take that model and wrap it in software and infrastructure that interacts with people or environments. It is the application layer that connects the model’s intelligence to a real-world function, through interfaces, data pipelines, APIs, and decision-making processes.

Article 3(1) AI Act can be summarised as follows:

A machine-based system that, for explicit or implicit objectives, infers from input how to generate outputs such as predictions, content, recommendations, or decisions that influence physical or virtual environments.

AI Model as a part of AI System

So, a chatbot using GPT-4 is an AI system.

GPT-4 itself, the trained large language model, is the AI model embedded inside that system.

How models and systems interact

The relationship between the two is hierarchical but intertwined:

  • Upstream: AI models are developed first, often by foundation model providers (like OpenAI, Anthropic, or Mistral). These can be general-purpose engines capable of a wide variety of tasks.
  • Midstream: Developers or companies integrate these models into applications (the AI systems), fine-tuning them, adding data pipelines, guardrails, or user interfaces.
  • Downstream: The resulting AI systems are then placed on the market, targeting specific uses (customer service, recruitment, image generation, etc.).

The AI Act regulates both, but in different ways:

  • AI models fall under Chapter V (including all the obligations clarified in the GPAI Guidelines).
  • AI systems fall under Chapters II–IV, which classify them by risk (prohibited, high-risk, transparency risk, minimal risk).

When does an AI model become ‘general-purpose’?

Under Article 3(63) AI Act, a general-purpose AI model is defined as one trained with large-scale data (often self-supervised) that demonstrates “significant generality” and can competently perform a wide range of distinct tasks, across varied contexts and downstream applications.

These models form the foundation for many AI systems, from chatbots to multimodal image generators, and their broad utility means they can propagate risks across countless use cases. As such, the AI Act imposes transparency, documentation, and copyright compliance duties on GPAI providers (Article 53), alongside additional measures for those classified as systemic risk models (Article 55).

Becoming a GPAI

The Guidelines, in combination with the AI Act, provide a practical, compute-based threshold for identifying GPAI models:

A model is considered a GPAI if it was trained using more than 10²³ floating-point operations (FLOP) and is capable of generating language (text or audio), text-to-image, or text-to-video outputs.

This reflects the typical scale of training for models with at least one billion parameters. In practice, large language models, multimodal text-to-image generators, and similar architectures generally meet this threshold.
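
For a rough sense of where a model lands relative to this threshold, training compute is often approximated with the widely used rule of thumb of about 6 FLOP per parameter per training token. The sketch below is purely illustrative: the parameter and token counts are hypothetical, and the function name is our own, not something defined by the Guidelines.

```python
# Rough training-compute estimate using the common "6 * parameters * tokens"
# rule of thumb. All figures below are illustrative assumptions, not values
# taken from the Guidelines.

GPAI_THRESHOLD = 1e23           # indicative GPAI presumption threshold (FLOP)
SYSTEMIC_RISK_THRESHOLD = 1e25  # systemic-risk presumption threshold (FLOP)

def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute in FLOP (~6 FLOP per parameter per token)."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical example: a 10-billion-parameter model trained on 2 trillion tokens
flop = estimate_training_flop(1e10, 2e12)
print(f"Estimated training compute: {flop:.2e} FLOP")               # ~1.20e+23
print("Likely GPAI:", flop > GPAI_THRESHOLD)                        # True
print("Presumed systemic risk:", flop >= SYSTEMIC_RISK_THRESHOLD)   # False
```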

However, context matters:

A model narrowly specialised for one task (e.g. weather simulation or speech transcription) does not qualify as GPAI, even if it exceeds the 10²³ FLOP threshold.

Conversely, a smaller model may still count as GPAI if it shows “significant generality” and can competently perform diverse tasks.

The focus is not purely quantitative: capability breadth and generative versatility remain decisive.

When Does a GPAI Become a Systemic Risk GPAI?

The AI Act then introduces a risk category: the “general-purpose AI model with systemic risk.” This label applies when a model’s capabilities or scale could produce large-scale societal or market impact, for instance, by influencing information flows, security, or democratic processes.

A model falls into this category when:

  • Its training compute exceeds 10²⁵ FLOP, or
  • The Commission designates it as such, based on the criteria in Annex XIII (e.g. potential for mass misuse, centrality in the AI ecosystem, or observed societal effects).

Once classified, the provider must:

  • Conduct ongoing systemic risk assessments and mitigation (Article 55(1));
  • Implement strong cybersecurity and governance measures;
  • Notify the Commission when the threshold is met or is expected to be met; and
  • Report serious incidents.

This “systemic risk” layer effectively mirrors the logic of systemic financial regulation, where a few key players carry outsized influence on market stability; here, those key players are the foundation model developers.

A structured approach to determining the risk category

The Guidelines outline a pragmatic cascade for classification, starting at the AI model (a short code sketch of this logic follows the list):

  1. Estimate Training Compute

If <10²³ FLOP → Not GPAI (narrow or task-specific models).

If ≥10²³ FLOP and capable of generating language, image, or video → Likely GPAI.

  2. Assess Model Generality

Can it perform a wide range of tasks beyond its training purpose? → GPAI

If not, it remains outside the GPAI scope, despite high compute.

  3. Check for Systemic Risk Indicators

If compute ≥10²⁵ FLOP → Presumed Systemic Risk GPAI.

If below but exhibits comparable impact (as per Annex XIII) → May be designated by the Commission.

  4. Notification and Contestation

Providers must notify the Commission when the threshold is met, without delay and in any event within two weeks, and even before training ends where it is reasonably foreseeable that the threshold will be reached.

They may contest classification by demonstrating that the model’s capabilities do not amount to high-impact systemic risk, though mitigations alone are insufficient to escape this label.
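
To make the cascade concrete, here is a minimal Python sketch of the decision logic described above. The function, its inputs, and the boolean flags are illustrative assumptions; in practice the assessment is a legal and factual one, and the Commission can also designate a model below the 10²⁵ FLOP mark as systemic risk under Annex XIII.

```python
# Minimal sketch of the classification cascade described above. Illustrative only.

GPAI_THRESHOLD = 1e23            # step 1: GPAI presumption (FLOP)
SYSTEMIC_RISK_THRESHOLD = 1e25   # step 3: systemic-risk presumption (FLOP)

def classify_model(training_flop: float,
                   generates_text_image_or_video: bool,
                   shows_significant_generality: bool) -> str:
    """Return an indicative classification following the Guidelines' cascade."""
    # Step 3: at or above 10^25 FLOP, a general-purpose model is presumed systemic risk
    if training_flop >= SYSTEMIC_RISK_THRESHOLD and shows_significant_generality:
        return "Presumed systemic-risk GPAI (Articles 53 and 55 apply)"
    # Steps 1-2: compute threshold plus generative capability and significant generality
    if (training_flop > GPAI_THRESHOLD
            and generates_text_image_or_video
            and shows_significant_generality):
        return "GPAI (Article 53 applies)"
    # Narrow, task-specific models stay outside Chapter V even at high compute
    return "Not a GPAI"

print(classify_model(5e24, True, True))    # GPAI
print(classify_model(2e25, True, True))    # Presumed systemic-risk GPAI
print(classify_model(3e23, False, False))  # Not a GPAI (e.g. a specialised model)
```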

What about Open Source models?

1. Which Open Source models?

A GPAI model only benefits from the open-source exception if it is released under a licence that genuinely allows access, use, modification, and redistribution of the model (including weights) and is publicly available.

In practice this means:

  • licence terms must grant those freedoms;
  • access, use, and scale are not monetised; and
  • the model’s parameters and basic architecture are published.

If any of these conditions is not met, the exception does not apply.

2. Why is this relevant?

Under Article 53(2) and Article 54(6), certain duties fall away for non-monetised open-source models (e.g. elements of technical documentation aimed at downstream providers, and the authorised-representative requirement for non-EU providers). These are targeted exemptions, not a blanket waiver.

3. Obligations that still apply to open-source GPAI

Even with the exception, providers must comply with baseline transparency under Article 53(1), including:

  • a copyright policy (Art. 53(1)(c)); and
  • a sufficiently detailed summary of the content used for training (Art. 53(1)(d)).

Other general Chapter V duties remain unless expressly exempted.

4. What about systemic risk GPAI?

If a model is presumed to have systemic risk (e.g. training compute at or above 10²⁵ FLOP) or is otherwise designated as systemic-risk, all systemic-risk obligations apply in full regardless of open-source status. Providers must notify the Commission within two weeks once the threshold is met, or once it becomes reasonably foreseeable that it will be met, and then carry out risk management, evaluations and adversarial testing, incident reporting, and cybersecurity measures.

5. What if I alter an open-source GPAI?

Substantial modification (e.g. significant fine-tuning) can make the modifier the provider of a new GPAI model with its own duties. The Guidelines offer an indication: using more than one-third of the original training compute suggests you are a new provider. If the original compute is unknown, the fallback is one-third of the GPAI presumption threshold.
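
As a quick illustration of that one-third indication, the snippet below compares the compute spent on a modification against one third of the original training compute, falling back to one third of the 10²³ FLOP presumption threshold when the original figure is unknown. The function name and the example figures are hypothetical.

```python
# Illustrative check of the "one third of training compute" indication for
# modifications of an existing GPAI. Function name and figures are hypothetical.

GPAI_THRESHOLD = 1e23  # presumption threshold, used as a fallback reference

def becomes_new_provider(modification_flop: float,
                         original_training_flop: float | None) -> bool:
    """True if the modification compute exceeds one third of the reference compute."""
    # Fall back to one third of the 10^23 FLOP threshold when the original
    # training compute is unknown
    reference = original_training_flop if original_training_flop is not None else GPAI_THRESHOLD
    return modification_flop > reference / 3

# Fine-tuning with 4e23 FLOP on a model originally trained with 9e23 FLOP
print(becomes_new_provider(4e23, 9e23))  # True: 4e23 > 3e23
# Same fine-tuning run, but the original training compute is unknown
print(becomes_new_provider(4e23, None))  # True: 4e23 > 1e23 / 3
```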

6. Monetisation pitfall

Charging for access, usage, or scale (or gating via equivalent monetised mechanisms) typically disqualifies the open-source exception.

Conclusion

The GPAI Guidelines by the Commission make it much easier to see where a model stands under the AI Act. A model becomes general-purpose when it’s trained on a large and varied dataset (usually above 10²³ FLOP) and can perform a broad range of tasks across contexts, not just a single specialised one. Whether it qualifies depends on breadth of capability and generative versatility, not purely on size.

If a GPAI reaches extreme scale (training compute beyond 10²⁵ FLOP) or shows wide societal impact, it moves into the category of systemic risk GPAI. At that point, stricter obligations apply: ongoing risk assessments, cybersecurity measures, incident reporting, etc.

Open-source models get partial relief, but only if they are genuinely open, with accessible weights, architecture, and non-monetised use. Once access is limited or commercialised, the exemptions fall away. Anyone substantially modifying an existing GPAI effectively becomes the provider of a new one, with corresponding duties.


Written by

Enzo Marquet
