It’s finally here: you have built the state-of-the-art AI system that the market and your customers need. Perhaps it is a CV-sorting tool that selects only the best candidates, an AI system that detects fatigue in a dangerous work environment, or a top-notch tool that assesses how well your students are doing during an exam (and not cheating, of course).
However, you have read something somewhere about high-risk AI systems and a few steps you must take before you can launch your beautiful system on the market. From what you remember, it was a bit complex and confusing.
This is exactly what we are going to address in this blog: an analysis of all the necessary steps you have to take to bring your product to market.
The timeline? On the 2nd of August 2026, all new high-risk AI systems have to comply with the applicable requirements. Did you already put your high-risk AI system on the market before that date? Then it receives a two-year grace period.
Overview
Finding the applicable steps is a bit of a scramble across the AI Act. First, we will provide an overview of all steps and then we’ll dive deeper into what they mean and how to potentially tackle them:
- Confirm high-risk status of your AI system
- Ensure your high-risk AI system complies with Chapter III section 2 & 3 AI Act
- Document this compliance through the Technical Documentation under Annex IV
- Conduct an internal or external Conformity Assessment
- Draw up the EU Declaration of Conformity
- Affix a CE marking
- Register your high-risk AI system
This seems like a lot, yet through effective governance and management during the development of your high-risk AI system, most documentation could already be available.
High-risk status of your AI system
The AI Act introduces a two-step approach to high-risk AI systems. First, check whether your AI system is subject to the specific product legislation listed in Annex I; if it is not, it could still be high-risk under Annex III:
Annex I — “Linked by legislation”
Some AI systems are already regulated under sectoral EU product-safety laws (the so-called New Legislative Framework). If your AI is part of one of these regulated products, it’s automatically considered high-risk under the AI Act. These include medical devices, railways, toys, lifts, etc.
If your AI component is built into one of those products, it follows the conformity procedure of that sectoral law, while also meeting the AI Act’s general requirements (Articles 8–15).
If you’re not in Annex I, say you’re building a standalone software tool, a recruitment algorithm, or an educational assessment platform, then move to Annex III.
Annex III — “Standalone high-risk systems”
Annex III lists use cases where AI can significantly affect people’s rights or safety. These systems are treated as high-risk by default. A high-level overview of the use cases that qualify as high-risk:
- Biometric identification and categorisation of persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to essential private and public services as well as credit-scoring
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
We refer to Annex III of the AI Act for the full list and its nuances. If your AI system covers any of these use cases, it is likely high-risk (unless one of the exceptions under Article 6(3) applies).
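The two-step check described above can be sketched as a small decision function. This is a hedged illustration only, not legal advice: the area names and the `classify` helper are our own simplification of the Annex III list.

```python
# Illustrative sketch of the two-step high-risk check: Annex I first, then Annex III.
# The area names below are our own shorthand for the Annex III use cases, not legal text.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_democracy",
}

def classify(covered_by_annex_i: bool, use_case_area: str) -> str:
    """Return a rough high-risk classification for an AI system."""
    if covered_by_annex_i:
        # Annex I products follow their sectoral conformity procedure
        # while also meeting the AI Act's general requirements.
        return "high-risk (Annex I product legislation)"
    if use_case_area in ANNEX_III_AREAS:
        # High-risk by default; Article 6(3) exceptions may still apply.
        return "potentially high-risk (Annex III use case)"
    return "not high-risk under this two-step check"
```

For example, `classify(False, "employment")` would flag a recruitment tool as potentially high-risk, while a use case outside the listed areas falls through to the last branch.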
Compliance with AI Act
To ensure you can bring your high-risk AI system to the market, we assume you comply with all the requirements set out in Chapter III, Sections 2 and 3. We will not dive deeper into these requirements in this blog.
Technical documentation
Before you can conduct a conformity assessment, and to support your EU Declaration of Conformity, you have to prepare the Technical Documentation of your high-risk AI system as per Annex IV, providing an overview of:
- System description,
- Intended purpose,
- Design and development details,
- Risk management documentation,
- Data governance documentation,
- Post-market monitoring plan,
- Instructions of use and human oversight mechanisms.
Keep the documentation in a format that a notified body (or authority) can audit at any time. We refer to Annex IV for the full list.
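To keep that documentation audit-ready, a simple completeness check can help. A minimal sketch, assuming you track the Annex IV headings under the hypothetical labels below (see Annex IV itself for the authoritative list):

```python
# Abbreviated, hypothetical labels for the Annex IV headings listed above;
# Annex IV of the AI Act contains the full, authoritative list.
ANNEX_IV_SECTIONS = [
    "system_description", "intended_purpose", "design_and_development",
    "risk_management", "data_governance", "post_market_monitoring",
    "instructions_of_use_and_human_oversight",
]

def missing_sections(prepared: set[str]) -> list[str]:
    """Return the Annex IV headings not yet covered by the prepared documentation."""
    return [s for s in ANNEX_IV_SECTIONS if s not in prepared]
```

Running such a check as part of your governance process surfaces gaps long before a notified body does.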
Conformity Assessment
Now that the required documents are in place, you can use them to conduct a conformity assessment (CA) to prove that your high-risk AI system conforms with the AI Act requirements. A CA can be conducted in two ways: internally or externally. For an internal CA, you conduct the assessment yourself through internal controls under Annex VI; an external CA relies on the involvement of a notified body, which checks conformity under Annex VII.
Internal conformity assessment
When can you rely on the internal CA?
- Your system is listed in Annex III, points 2 to 8 (e.g. recruitment, credit, education, essential services, etc.). This means point 1, regarding biometrics, is excluded!
- Harmonised standards or common specifications exist and you applied them properly
- You are not covered by Annex I product regimes
The content of an internal CA can be summarised as follows:
- Verification of the established quality management system
- Examination of the technical documentation to assess compliance with the essential requirements of Chapter III, Section 2.
- Verification that the design and development process and the post-market monitoring system are in line with the technical documentation.
The full content of the internal CA can be found in Annex VI.
External conformity assessment
When does your high-risk AI system have to be verified by a notified body?
- No harmonised standards exist yet OR you didn’t apply them fully
- Your system falls under Annex III point 1: biometric identification and categorisation
- Your AI is a component of an Annex I product already subject to third-party certification (e.g. medical device, machinery, vehicle)
- A regulator or market authority requests an external assessment due to risk.
The content of an external CA can be summarised as follows:
- Overview of the provider and AI system(s) under the same quality management system
- Technical documentation
- How access is provided to the notified body
The full content of an external CA can be found in Annex VII.
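The choice between the internal and the external route boils down to a few questions, sketched below as a hedged illustration (the parameter names are our own, and edge cases such as a regulator-requested assessment are omitted):

```python
def conformity_route(annex_iii_point: int,
                     harmonised_standards_applied: bool,
                     under_annex_i_regime: bool) -> str:
    """Pick the internal (Annex VI) or external (Annex VII) assessment route.

    A simplified paraphrase of the rules described above, not the legal text.
    """
    if under_annex_i_regime:
        # Annex I products are already subject to sectoral third-party certification.
        return "external (notified body, Annex VII)"
    if annex_iii_point == 1:
        # Biometric identification and categorisation always needs a notified body.
        return "external (notified body, Annex VII)"
    if not harmonised_standards_applied:
        # No (fully applied) harmonised standards or common specifications.
        return "external (notified body, Annex VII)"
    # Annex III points 2-8 with standards properly applied: internal controls.
    return "internal (self-assessment, Annex VI)"
```

Note how the internal route is the narrow case: everything else defaults to a notified body.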
EU Declaration of Conformity
Rejoice: your high-risk AI system has passed its conformity assessment. Now comes the easy part, consolidating the information and assessments into documentation for your clients and end users, so they can be sure that your high-risk AI system adheres to the highest standards.
You can draw up the EU Declaration of Conformity, through which you declare your high-risk AI system to be fully in line with the applicable legislation. Keep this declaration available for the market authorities.
The EU declaration of conformity shall contain all of the following information:
- Identification of the provider and the AI system;
- A statement that the EU declaration of conformity referred to in Article 47 is issued under the sole responsibility of the provider;
- A statement that the AI system is in conformity with the AI Act and all other applicable legislation (product-specific, GDPR, etc.);
- Any harmonised standards or common specifications applied;
- The conformity assessment performed and the relevant notified bodies;
- A valid signature.
The full content of the EU declaration of conformity can be found in Annex V.
CE marking
Now that everything is in order, you must show the end users of your high-risk AI system that this very system has been made in line with the applicable legislation. You can do this by applying a (digital) CE marking in line with Article 30 of Regulation (EC) No 765/2008.
You can affix a CE marking on the packaging when your high-risk AI system is (part of) a product, or digitally in an interface such as a loading or start-up screen. It must be easily accessible.
Register your high-risk AI system
Finally, the last step before your high-risk AI system can be placed on the market or put into service: the registration.
Depending on the risk classification, a different registration procedure is to be followed:
| Who | What | Where | Notes/Exceptions |
| --- | --- | --- | --- |
| Providers of Annex III high-risk AI systems (except Annex III.2) | The provider + the high-risk AI system | EU AI Database (public section unless covered by point 4 of Art. 49 AI Act) | Annex III.2 systems follow national-level registration only |
| Providers who conclude their system is not high-risk under Art. 6(3) | The provider + the non-high-risk system | EU AI Database | This prevents “self-downgrading” from escaping transparency |
| Deployers that are public authorities (or acting on their behalf), using Annex III systems except Annex III.2 | The deployer + selection of the system + the intended use | EU AI Database | Applies to EU institutions, agencies, bodies, offices, etc. |
| Providers and deployers of systems in Annex III points 1, 6, 7 (biometric ID; law enforcement; migration/asylum/border control) | Limited metadata only (as allowed by Annexes VIII & IX) | Secure, non-public section of the EU AI Database | Only the Commission & designated national authorities may access this section |
| Providers of Annex III.2 systems (e.g. certain safety components already covered by sectoral law) | System registered at national level | National authority databases, not the EU database | These systems are carved out of the EU-level register |
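The routing in the table above can be captured in a small lookup. Again a hedged sketch: the role keys are our own shorthand for the rows of the table, not terms from the AI Act.

```python
# Simplified routing of registration duties, paraphrasing the table above.
# The role keys are our own shorthand, not legal terminology.
REGISTRATION_ROUTES = {
    "provider_annex_iii": "EU AI Database (public section)",
    "provider_not_high_risk_art_6_3": "EU AI Database (public section)",
    "public_authority_deployer": "EU AI Database (public section)",
    "provider_or_deployer_points_1_6_7": "EU AI Database (secure, non-public section)",
    "provider_annex_iii_2": "national authority database",
}

def registration_target(role: str) -> str:
    """Look up where a given role must register, per the table above."""
    return REGISTRATION_ROUTES.get(role, "unknown role: check Article 49 AI Act")
```

The fallback deliberately points back to Article 49 rather than guessing, since the exact duties depend on your specific classification.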
Conclusion
Congrats on making it to the end. While there are numerous steps to take, documenting the development process of any (high-risk) AI system from the start would drastically reduce the administrative burden of launching it. With appropriate governance and structured procedures, this process can become an integral part of your set-up.