Artificial intelligence (AI) has become an integral part of our daily lives. Hardly a day goes by without AI making the news in one way or another. Recently, the European Parliament adopted the AI Act, the world’s first comprehensive regulatory framework for AI. The AI Act aims to ensure that AI is developed and applied responsibly and ethically within the European Union, respecting the fundamental rights and freedoms of its citizens.
The AI Act is often compared to the GDPR. Just as the GDPR established a comprehensive framework for the use and processing of personal data, the AI Act does the same for the use and development of AI. When the GDPR came into effect in 2018, it was seen as groundbreaking legislation with a significant impact on both citizens and businesses. Today, the AI Act is just as revolutionary and impactful as the GDPR was in 2018—perhaps even more so. This blog post examines the relationship between the GDPR and the AI Act and how these two “groundbreaking” instruments interact.
1. What is the AI Act?
The AI Act sets out rules for the development and use of AI, establishing requirements for its responsible development and deployment so that AI systems are safe and reliable. Like the GDPR, the AI Act is a “Regulation,” meaning it applies uniformly across the EEA, with only limited scope for interpretation or deviation by member states. This ensures a level playing field across the EEA. Furthermore, the AI Act applies horizontally, covering sectors such as government, media, healthcare, and more.
2. The GDPR and the AI Act: Similarities
The GDPR and the AI Act have their own unique focuses and obligations, but they also complement each other. Both are rooted in the idea of ensuring transparency and accountability—whether in the context of personal data (GDPR) or AI systems (AI Act). Both regulations take a risk-based approach: if there is a high risk associated with the use of AI or the processing of personal data, stricter rules and requirements apply. Moreover, the AI Act specifically references the GDPR when an AI system processes personal data. As such, the AI Act and GDPR are often applicable simultaneously.
This shared approach is evident across various aspects, from transparency and enforcement to the requirement for impact assessments in certain scenarios. Below are some examples of similarities between the GDPR and the AI Act.
I. Roles and Responsibilities
The AI Act distinguishes between “providers of AI systems” and “users of AI systems” (deployers). A provider is the entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name. A deployer is the party responsible for using the AI system. Similarly, the GDPR differentiates between “data controllers” and “data processors”: the former determines the purposes and means of data processing, while the latter acts on the instructions of the controller. Depending on your role, you must comply with different sets of rules under both the AI Act and the GDPR.
It is crucial to determine what role your organisation plays. In the interaction between the GDPR and the AI Act, you may act as a provider (developing the AI system) and at the same time as a data controller under the GDPR, in which case you must comply with both the AI Act as a provider and the GDPR as a controller. When a company uses your AI system, it becomes the deployer and will often qualify as a data controller under the GDPR, because it decides to use the AI system and determines the purpose and means of the data processing. Which GDPR and AI Act obligations apply therefore depends on whether you are a controller, processor, provider, or deployer.
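As a rough illustration, the Python sketch below maps these activity-based criteria to roles. The flag and function names are our own hypothetical shorthand, not terms from either regulation; in practice, qualification is a case-by-case legal assessment rather than a boolean check.

```python
# Hypothetical sketch: mapping an organisation's activities to AI Act
# and GDPR roles. Flag names and logic are illustrative only.

def assess_roles(develops_ai_system: bool,
                 uses_ai_system: bool,
                 determines_purposes_and_means: bool,
                 processes_on_instruction: bool) -> list[str]:
    roles = []
    # AI Act roles
    if develops_ai_system:
        roles.append("provider (AI Act)")
    if uses_ai_system:
        roles.append("deployer (AI Act)")
    # GDPR roles (only relevant where personal data is processed)
    if determines_purposes_and_means:
        roles.append("controller (GDPR)")
    if processes_on_instruction:
        roles.append("processor (GDPR)")
    return roles

# Example: a vendor that develops an AI system and also decides why and
# how personal data is processed combines provider and controller duties.
print(assess_roles(develops_ai_system=True, uses_ai_system=False,
                   determines_purposes_and_means=True,
                   processes_on_instruction=False))
# ['provider (AI Act)', 'controller (GDPR)']
```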
II. Focus on Fundamental Rights and Impact Assessments
Both the AI Act and GDPR aim to protect fundamental rights and freedoms and promote transparency. While the GDPR focuses on privacy and data protection, the AI Act covers a broader range of rights, including the protection of the rule of law, democracy, the environment, and public safety.
This focus is reflected in the requirement for a Fundamental Rights Impact Assessment (FRIA) under the AI Act. This is similar to the GDPR’s requirement for a Data Protection Impact Assessment (DPIA). If the processing of personal data poses a high risk to individuals, the controller must carry out a DPIA. The GDPR (Article 35) provides guidelines for determining whether a DPIA is necessary, supported by further guidance from the EDPB and national supervisory authorities.
The AI Act (Article 27) requires deployers to carry out a FRIA for high-risk AI systems. This obligation applies in specific circumstances. For example, if a deployer is a public body or a private entity offering public services, a FRIA is mandatory unless the AI system is used in critical infrastructure (Annex III, point 2 of the AI Act). In cases where the DPIA under the GDPR already covers aspects of the FRIA, the FRIA can build on the DPIA.
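The Article 27 trigger can be pictured as a simple decision rule. The sketch below is a deliberate simplification with parameter names of our own invention; the article itself contains further conditions and nuances that are not modelled here.

```python
# Simplified reading of the Article 27 FRIA trigger described above;
# the actual provision is more nuanced.

def fria_required(is_high_risk: bool,
                  is_public_body_or_public_service: bool,
                  critical_infrastructure_exception: bool) -> bool:
    """Return True if, under this simplified reading, the deployer
    must carry out a Fundamental Rights Impact Assessment."""
    if not is_high_risk:
        return False
    if critical_infrastructure_exception:  # Annex III, point 2 carve-out
        return False
    return is_public_body_or_public_service

# A municipality deploying a high-risk AI system outside critical
# infrastructure would need a FRIA under this simplified rule.
print(fria_required(True, True, False))  # True
```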
III. Focus on Transparency
Both the GDPR and AI Act place great importance on transparency. The AI Act imposes transparency requirements on providers and deployers alike, for the benefit of the individuals who interact with AI.
AI systems designed to interact directly with individuals, such as a bank’s chatbot, must be developed in a way that ensures users are aware they are interacting with AI, unless this is already obvious from the context. Deployers of AI systems that generate synthetic audio, images, video, or text (e.g., deepfakes) must ensure it is clear that the output is AI-generated or manipulated. Providers must make this technically feasible.
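The AI Act does not prescribe how such a disclosure must be implemented. Purely as a hypothetical sketch, a chatbot might prepend a notice to every answer; the function and wording below are invented for illustration, and real systems might instead rely on interface labels, watermarks, or metadata.

```python
# Hypothetical sketch of a chatbot disclosure wrapper. The AI Act does
# not mandate this mechanism; the notice text and function are invented.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def disclose_and_reply(generate_reply, user_message: str) -> str:
    """Prefix every chatbot answer with an AI disclosure notice."""
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{reply}"

# Example with a stubbed-out model call:
print(disclose_and_reply(lambda msg: "Your current balance is EUR 120.",
                         "What is my balance?"))
```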
Transparency is particularly significant for deployers of high-risk AI systems, who are often data controllers under the GDPR. To meet GDPR requirements, deployers need sufficient information from the AI provider to assess and mitigate risks. Article 13 of the AI Act therefore obliges providers of high-risk AI systems to supply deployers with sufficiently transparent information and instructions for use. Deployers are in turn required to use this information when conducting a DPIA.
IV. Oversight, Fines & Extraterritorial Scope
Both the GDPR and AI Act have extraterritorial scope, meaning they apply under certain conditions to companies outside the EEA. Just as an American tech company must comply with the GDPR, it will also need to comply with the AI Act.
Like the GDPR, the AI Act imposes severe penalties for non-compliance. The most serious violations can result in administrative fines of up to €35 million or 7% of global annual turnover (whichever is higher) per infringement. Lesser breaches, such as supplying incorrect or misleading information to authorities, may incur fines of up to €7.5 million or 1% of turnover. Member states can also impose additional enforcement measures, including warnings and product recalls.
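In other words, the ceiling for the most serious infringements is the higher of a fixed amount and a turnover percentage, as this back-of-the-envelope calculation illustrates:

```python
# Back-of-the-envelope illustration of the AI Act fine ceiling for the
# most serious infringements: EUR 35 million or 7% of worldwide annual
# turnover, whichever is higher.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion in turnover, 7% (EUR 140 million)
# exceeds the EUR 35 million floor:
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```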
While the European Data Protection Board (EDPB) oversees GDPR enforcement, the European AI Board (EAIB) and the AI Office will oversee the AI Act. Each member state must designate at least one notifying authority and at least one market surveillance authority, each responsible for specific aspects of compliance and enforcement. Individuals or organisations that believe there has been a breach of the AI Act may file a complaint with the relevant market surveillance authority.
3. What does the overlap between GDPR and AI Act mean for privacy professionals?
Given the overlap between the GDPR and the AI Act, privacy professionals, legal experts, and DPOs, with their expertise in risk management, compliance, and automated decision-making under Article 22 of the GDPR, are likely to be the first point of contact when organisations seek to develop or use AI systems. Since developing AI systems often (though not always) involves processing personal data, both regulations will frequently apply and interact.
Implementing the AI Act will require an interdisciplinary approach. Collaboration with AI and IT experts will be essential for proper compliance. DPOs and privacy professionals, with their experience in transparency obligations and interdisciplinary cooperation, are well-placed to help organisations navigate the AI Act’s requirements.
As the first provisions of the AI Act, including definitions and prohibited AI practices, come into effect on 2 February 2025, organisations and privacy professionals should start preparing now.
At CRANIUM, we are ready to tackle these challenges with you. Our experienced consultants can advise your organisation on the AI Act and its implementation.
Additionally, discover our AI Governance & Compliance Course on the CRANIUM Campus—a comprehensive programme designed to prepare participants to take on the role of ‘AI Officer.’