AI Conformity Assessment and GDPR DPIA: A comparison

Written by: Enzo Marquet

The proposed EU AI Act[1] (AIA) aims to establish a global standard for AI governance, similar to the precedent set by the General Data Protection Regulation (GDPR) for data protection. All providers operating, making use of, or introducing AI systems within the European Union are required to comply with the AIA. Furthermore, the regulatory scope extends to providers and users based in third countries if the outputs of their AI systems are utilised within the EU.

Before a high-risk AI system can be placed on the market, a Conformity Assessment (CA) must be conducted to ensure compliance with the AIA. Additionally, the AIA explicitly refers to the GDPR when it comes to processing personal data[2] and when a Data Protection Impact Assessment (DPIA) must be performed[3]. This means there will be an overlap between these two assessments, as a high-risk AI system would almost automatically involve high-risk processing under the GDPR. But how can these two tools be compared to each other?

In this blog post, the relevant aspects of the CA will be explored first. Then, the obligations regarding DPIAs will be explained. Finally, the link will be made between the CA and the DPIA: where do they overlap, where do they differ, and are there potential synergies?

AI Act: AI Conformity Assessments

The AIA requires organisations to conduct a CA when developing high-risk AI systems. The goal is to verify whether the requirements of the AIA are met. This CA is mandatory and must be approved by the competent body before the system can be placed on the market[4].

High-risk AI systems

To define high-risk AI systems, the classification of Article 6(2) AIA must be read in conjunction with Annex III (which includes systems such as biometric identification and categorisation, law enforcement tools, and recruitment tools). It is important to highlight that the responsible party has no ‘discretion’ in determining whether a CA must be conducted: either the system falls within the scope and a CA is mandatory, or it falls outside the scope, as the sketch below illustrates.
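To make the binary nature of this test concrete, here is a minimal Python sketch of the classification check. The category labels are hypothetical shorthands for a few Annex III entries, not the legal text itself; a real assessment would of course turn on the Annex’s full wording.

    # Illustrative only: hypothetical shorthands for some Annex III use cases.
    ANNEX_III_CATEGORIES = {
        "biometric_identification",
        "law_enforcement",
        "recruitment",
    }

    def requires_conformity_assessment(use_case: str) -> bool:
        # A system is either in scope (CA mandatory) or out of scope;
        # the responsible party has no discretion.
        return use_case in ANNEX_III_CATEGORIES

    print(requires_conformity_assessment("recruitment"))     # True: CA mandatory
    print(requires_conformity_assessment("spam_filtering"))  # False: out of scope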

Scope of the AI Conformity Assessment

The party responsible for performing the CA is the ‘provider’[5] of the high-risk AI system, i.e. the party that develops the system (or has it developed) with a view to placing it on the market or putting it into service under its own name or trademark.

In a CA, providers must determine whether a system or product adheres to the AIA requirements for high-risk AI systems. These requirements, some of which pertain to data protection, include among others: the quality of the datasets used (especially when they contain personal data), technical documentation, and transparency.

Most of these requirements must be integrated from an early stage into the AI system’s design, and providers must ensure compliance, even if they are not the system’s designers or developers.

When to perform an AI Conformity Assessment?

The CA must be conducted prior to introducing the AI system to the EU market or putting it into service. This means it must take place before the AI system is made available for distribution or use, including the first use by the system’s user or the provider’s own use. Member States shall provide regulatory sandboxes for the testing, development, and validation of AI systems; it is unclear whether a CA is required before a provider can use such a sandbox.

Furthermore, if a high-risk AI system undergoes significant modifications that affect its compliance with the high-risk requirements or its intended purpose, an updated CA is required.

Providers must maintain ongoing compliance monitoring for high-risk AI systems, documented either in the technical documentation or in the product’s plan. Notified bodies will periodically audit third-party assessments to ensure adherence to the quality management system.

GDPR: Data Protection Impact Assessments (DPIA)

DPIAs are mandatory when the processing of personal data using (new) technologies is likely to result in a high risk to the rights and freedoms of natural persons. As set out in the AIA, users (similar to controllers under the GDPR) of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a DPIA under Article 35 GDPR.[6] There is thus a clear connection between the two regulations.

High risk processing activity

A DPIA is required when a processing activity is deemed likely to result in a high risk to the rights and freedoms of individuals. It is the controller who must evaluate whether this is the case.

The European Data Protection Board (EDPB) and the national supervisory authorities (SAs) have compiled lists per Member State outlining processing activities that always mandate the performance of a DPIA. Processing activities that utilise an AI system as defined in the proposed AIA are included in those lists: evaluation, profiling, and automated decision-making.

However, unlike with the CA, the controller has the discretion to determine whether a DPIA is necessary and to conduct it accordingly.

Scope of a DPIA

For a DPIA, the controller must assess how a processing activity may affect individuals’ rights and freedoms. This involves considering factors like the nature, scope, context, purpose, necessity, and proportionality of the processing, in three broad steps (a minimal sketch of these steps follows the list):

  1. Identify Risks: Identify potential risks to individuals’ rights and freedoms arising from a specific data processing activity.
  2. Assess Risks: Evaluate the severity and likelihood of these risks materialising, considering the nature of the data and the processing context.
  3. Mitigate Risks: Determine and implement appropriate measures to mitigate or reduce the identified high risks to an acceptable level, ensuring data protection and compliance with relevant regulations.
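As a rough illustration of these three steps, the Python sketch below models a DPIA risk register. The severity-times-likelihood scoring and the acceptance threshold are common practice heuristics, not anything the GDPR prescribes, and the example risks and mitigations are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Risk:
        description: str
        severity: int        # 1 (low) .. 5 (high)
        likelihood: int      # 1 (rare) .. 5 (almost certain)
        mitigation: Optional[str] = None

        @property
        def score(self) -> int:
            # Severity x likelihood: a common heuristic, not mandated by the GDPR
            return self.severity * self.likelihood

    # Step 1: identify risks arising from the processing activity
    register = [
        Risk("Re-identification of individuals in training data", 4, 3),
        Risk("Discriminatory output from a biased dataset", 5, 2),
    ]

    # Step 2: assess each risk against an (illustrative) acceptance threshold
    THRESHOLD = 10
    for risk in register:
        if risk.score >= THRESHOLD:
            # Step 3: mitigate and record the measure chosen
            risk.mitigation = "pseudonymise training data; add bias testing"
        print(risk.description, risk.score, risk.mitigation)

The point of keeping such a register is accountability: the controller can show which risks were identified, how they were weighed, and what was done about them.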

When to perform a DPIA?

This assessment must be conducted before the actual processing takes place, whenever the processing activity is likely to result in a high risk to the rights and freedoms of individuals. The controller must determine whether this is the case and retain the DPIA in line with the accountability principle.[7]

Comparison between AI Conformity Assessment and GDPR DPIA

Differences

A CA is focused on ensuring compliance with specific legal requirements, which are considered mitigation measures for high-risk systems. The main goal of a CA is to guarantee adherence to the mitigation measures or requirements mandated by law. An approved CA is required before the system can enter the market.

A DPIA has a slightly different purpose: it serves as a tool for accountability by requiring controllers to assess and make decisions based on risks. It also mandates reporting on the decision-making process. The primary objective of a DPIA is to hold controllers accountable for their actions and ensure more effective protection of individuals’ rights. The controller is ‘free’ to decide if and how it will mitigate a risk.

The term ‘provider’ does not map onto either ‘controller’ or ‘processor’. While the provider must conduct the CA, it is up to the controller to conduct the DPIA. However, it is likely that providers under the AIA will often be processors under the GDPR.

Overlaps

Whenever a high-risk AI system involves the processing of personal data, a DPIA will almost certainly be required. Both processes involve assessing risks related to a specific system, each with its own set of requirements. To prevent redundant work or conflicting conclusions, the CA can likely form the foundation for the controller’s DPIA. If the provider also operates as a controller under the GDPR, both the DPIA and the CA will be carried out by the same entity, reinforcing each other. A short sketch of this decision logic follows.
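The sketch below summarises the decision logic in Python. It is deliberately simplistic: the real legal tests involve far more than a few booleans, and the ‘almost automatic’ link between a high-risk AI system and high-risk processing is this post’s premise rather than a rule stated in either text.

    def assessments_required(high_risk_ai: bool,
                             processes_personal_data: bool,
                             high_risk_processing: bool) -> set:
        required = set()
        if high_risk_ai:
            required.add("CA")  # mandatory under the AIA, no discretion
            if processes_personal_data:
                # Premise of this post: high-risk AI + personal data almost
                # automatically entails high-risk processing under the GDPR
                high_risk_processing = True
        if processes_personal_data and high_risk_processing:
            required.add("DPIA")  # Article 35 GDPR
        return required

    print(assessments_required(True, True, False))   # CA and DPIA required
    print(assessments_required(False, True, True))   # DPIA only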

Conclusion

While the CA and the DPIA serve distinct purposes, they align significantly when a high-risk AI system processes personal data. For high-risk AI systems, a CA is mandatory and must be approved prior to market entry. When such an AI system also processes personal data, it is highly likely that a DPIA will be mandatory. If a provider that is also a controller chooses not to conduct a DPIA, its CA can be rejected, resulting in the AI system not being allowed on the market. The AIA as such limits the discretion of providers also acting as controllers to decide whether they will conduct a DPIA.

On top of that, for controllers who use providers of AI systems as their processors, the CA forms a starting point for the technical documentation required for their DPIA and risk mitigation.

However, a tool is never compliant on its own; a correct implementation will always be necessary.

[1] COM/2021/206 final, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

[2] Article 10(5) Proposed AI Act.

[3] Article 29(6) Proposed AI Act.

[4] Recital 62 Proposed AI Act.

[5] Article 3(2) Proposed AI Act.

[6] Article 29(6) Proposed AI Act.

[7] Article 5(2) GDPR.
