
ChatGPT & GDPR: Do They Go Hand in Hand or Not?


The use of ChatGPT and similar programmes is steadily increasing. More and more companies want to apply these tools to various tasks and processes.

In this blogpost, we will explain the AI Regulation (AI Act) and its implications for programmes such as ChatGPT. We’ll also look at how we can use ChatGPT in a GDPR-friendly manner.

The AI Act in a Nutshell

Until recently, there was very little legislative activity regarding the use of AI (which includes ChatGPT), while the technology is evolving at a rapid pace. That is gradually changing: a legislative initiative has been taken with the draft AI Regulation (the AI Act), although most of its provisions will only apply after a transition period of two years.

The initial proposal for the AI Act was purpose-based: AI systems were classified as (i) prohibited, (ii) high-risk, (iii) low-risk or (iv) without risk, based on the purpose for which they were designed.

Depending on the classification of the AI system, different rules would apply. In concrete terms, this means that the first version of the proposal only looked at the purpose for which the providers developed the AI system, without considering how the user would apply the AI system.

In November 2022, when ChatGPT gained mainstream attention, the EU legislator realised that this logic couldn’t be applied to AI systems designed for general purposes. ChatGPT can be used for different purposes depending on the user’s input, so it was necessary to amend the proposal to address these recent developments.

What is an AI System According to the AI Regulation?

According to the definitions in Article 3 of the draft AI Regulation, an AI system is:

“software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”

ChatGPT will certainly be classified as an AI system. However, it needs to be determined under which category of AI system it falls to know which obligations and/or restrictions apply to the use of ChatGPT.

Under Which Risk Class Does ChatGPT Fall?

ChatGPT does not fall under the prohibited practices in the field of artificial intelligence, as provided for in Article 5 of the AI Regulation, but may fall under high-risk AI systems depending on how this technology is used by the user.

Article 6 of the AI Regulation states the following:

“Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.”

ChatGPT does not meet the cumulative conditions set out in the first paragraph, as it is not a safety component of a product, nor does it fall under any harmonisation legislation in Annex II.

To determine in which situations ChatGPT can be seen as a high-risk AI system, we need to look at Annex III of the Regulation, which describes the purposes that result in an AI system being classified as high-risk.

But as mentioned earlier, it is the user who decides how an AI system not developed for a specific purpose will be used. Consequently, it is the user who determines which risk classification applies, depending on the purposes for which ChatGPT is used. For example, the use of ChatGPT can be classified as high-risk when used in the context of recruitment procedures and personnel management:

“Employment, workers management and access to self-employment:

(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;

(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships;”

When the AI system is classified as high-risk, the Regulation imposes additional quality requirements on the data. Paragraph 1 of Article 10 states the following:

“High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.”

These requirements include, among others, the quality of the datasets used for training (relevant, representative, free of errors and complete), applying the correct data governance and management, etc.

In addition, ChatGPT falls under Article 52 of the Regulation because it interacts with natural persons. This entails an additional transparency obligation: natural persons must be informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use. Under the amended proposal, generative AI systems such as ChatGPT must furthermore have safeguards in place against generating output that conflicts with EU law, and a summary of the training data used must be made available.

ChatGPT and GDPR: Do They Go Hand in Hand or Not?

It’s important to know that some characteristics of AI systems are incompatible with GDPR principles:

Minimal data processing: AI is trained on extremely large datasets. (ChatGPT, for example, was reportedly trained on a dataset of some 300 billion words.)

Purpose limitation: during the development of AI systems, it is often difficult to predict the various purposes for which they will be used. (ChatGPT, for example, is a general-purpose AI system.)

Storage limitation: unlike the initial training data, which can easily be deleted, it is far less straightforward to adjust an AI system so that its output is no longer based on personal data whose retention period has expired.

Transparency: many AI systems are considered “black boxes,” making it nearly impossible to explain their internal processes in terms understandable to all users.

Accuracy: some AI systems trained on publicly available data (such as ChatGPT) can produce inaccurate output. This may be due to incorrect training data or to a phenomenon known as AI hallucination, where the AI system invents information in order to provide an answer.

Of course, the AI Regulation applies without prejudice to the GDPR: the obligations arising from the GDPR remain fully applicable.

Thus, the use of ChatGPT must have a legal basis, necessary measures must be taken to limit the risks for data subjects, DPIAs and TIAs must be carried out, etc.

Conclusion? ChatGPT is not prohibited in principle, not even in Italy anymore, but of course, it must be handled with the necessary caution. A distinction can be made between the purposes for which ChatGPT is used.

Personal Data of (Potential) Employees

The use of ChatGPT where personal data of employees and applicants are processed should be avoided in practice.

Besides the fact that the use of ChatGPT in this context is considered high-risk, finding the correct legal basis for this processing is problematic. The only possible legal basis is consent. However, practice shows that freely given consent is very unlikely in a power relationship such as that between employer and employee, or between employer and applicant. In this context, consent is only valid if withholding it can never lead to negative consequences for the employee or applicant.

It is therefore not recommended that AI systems such as ChatGPT be used when this results in the processing of personal data of staff members and applicants.

Personal Data of Suppliers, Customers, and Consumers

Whether the use of ChatGPT for processing personal data of suppliers, customers, and consumers is permitted will depend on the specific situation.

If it is possible for the data subjects to give consent freely, it is possible to use ChatGPT. Of course, in such a case, all other requirements of the GDPR must also be met (including possibly carrying out a DPIA, TIA, etc.). Data subjects must always be clearly and correctly informed about this.

Sensitive or Confidential Personal Data

Don’t forget that OpenAI, the company that developed ChatGPT, is a private company, so you have no guarantee that sensitive or confidential data you enter into ChatGPT will remain confidential. Your input can be used by OpenAI to train the model and may thus also end up being reflected in answers given to other users of ChatGPT.

For example, in April 2023, Samsung reported a leak of confidential data related to semiconductor development after employees used ChatGPT to try to solve a code error.
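A practical safeguard is to strip obvious personal identifiers from prompts before they ever leave your organisation. The sketch below is a minimal, illustrative example, not a complete anonymisation solution: the regular expressions and placeholder tags are our own assumptions, and real pseudonymisation must also cover names, addresses, identification numbers and contextual clues.

```python
import re

# Illustrative patterns only; a production setup needs far broader coverage
# (names, addresses, national ID numbers, free-text context, ...).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/.-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace personal identifiers with placeholder tags before the
    text is submitted to an external AI service such as ChatGPT."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jan.peeters@example.com, phone +32 475 12 34 56."
print(redact(prompt))
# Both the e-mail address and the phone number are masked before sending.
```

Such a filter does not replace a legal basis, a DPIA or contractual safeguards, but it reduces the amount of personal data that reaches the provider in the first place, in line with the data minimisation principle.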

Conclusion

The use of ChatGPT and other AI applications can be very valuable. With the AI Regulation, more guidance is now being provided on how this technology may be applied. However, it remains important to look not only at the AI Regulation but also at the GDPR when the technology is applied in practice.

If you want to use ChatGPT within your company, it is therefore important to ensure that all obligations, both from the AI Regulation and from the GDPR, as well as possibly other relevant legislation, are met.


Written by

Simon Geens
