Introduction
You apply for a job online, upload your CV, and within seconds receive a polite rejection. You never spoke to anyone, and no feedback follows. Somewhere behind that decision sits software that ranked hundreds of applicants and decided yours was not a match.
These systems are becoming part of everyday life. They help decide who gets hired, who gets a loan, or who is flagged for extra screening at the airport. When technology takes over these choices, human oversight becomes essential. People must remain able to understand, question, and correct what automated systems do, so that there is always someone who can say: “Wait, something doesn’t seem right here.”
European law has built two main protection mechanisms to make that possible. The GDPR gives individuals rights when an automated decision affects them, while the AI Act makes human oversight a design requirement for organisations using or building high-risk AI. Together, these laws try to make sure that automation helps people make decisions rather than replacing them entirely.
In this blogpost, we take a closer look at how the GDPR and the AI Act each approach human oversight, and how the two frameworks complement one another.
The GDPR’s reactive protection
Article 22 of the GDPR protects people from decisions that are made solely by automated means and that have legal or similarly significant effects. Where Article 22 applies, individuals have safeguards: the right to obtain human intervention, to express their point of view, and to contest the decision.
In practice, Article 22 has two clear weaknesses:
First, the scope is narrow. Controllers can avoid the rule if they include even minimal human involvement, even if that involvement is more symbolic than real. Many organisations use this thin layer of review to stay outside Article 22’s scope. In a hiring context, that could mean that no one truly reviews your rejected application. Instead, a recruiter might simply click “confirm” on the system’s decision without questioning it.
Second, the protection is mostly reactive. Individuals must notice the problem, ask for review, and then wait for a human to step in. The safeguard often activates only after the decision has already affected the person. By the time the job applicant realises the system has filtered them out, the position may already be filled.
The Court of Justice has helped by clarifying the right to explainability. People do not get access to the algorithm, yet they are entitled to meaningful information about the logic behind the decision and how changes in their data could have changed the outcome. That supports transparency, but it does not solve the timing problem. It still acts after the decision. For a more detailed look at this judgment and its practical impact, see our earlier CRANIUM blogpost on the Right to Explainability.
The AI Act’s preventive approach
The AI Act goes a step further by focusing on prevention rather than reaction. For high-risk AI systems such as hiring platforms, credit scoring tools, or welfare assessments, Article 14 requires that human oversight be built into both the design and daily use of these systems. Providers must design technology that people can actually control, and deployers must train and empower staff to use that control.
Effective oversight means more than watching from the sidelines. It requires humans to understand what the system is doing, recognise its limits, and step in when something goes wrong. In a recruitment setting, that means not waiting for an applicant to complain but ensuring from the start that someone can review and question automated rejections. These duties are supported by documentation, logging, and continuous monitoring, so organisations can spot and fix problems before they cause harm.
Reading both together
If the GDPR is your safety net, the AI Act is your safety harness. The GDPR catches you when a fully automated decision has already affected you. The AI Act, in contrast, aims to keep you secure before you fall by requiring oversight to be built into the system itself. Together, they form two layers of protection: one reactive, one preventive.
Both laws are risk-based, but the kind of risk they address differs. The GDPR focuses on protecting fundamental rights and freedoms when personal data is used in automated decisions. The AI Act, in contrast, treats risk through a product safety lens, targeting systems that could cause harm or lead to unsafe outcomes if left unchecked. The GDPR applies to fully automated decisions that directly affect individuals, while the AI Act applies to high-risk AI systems, a broader category that can also include tools where humans remain involved but where the potential consequences are serious.
In practice, the two frameworks often work side by side. The GDPR gives individuals rights to question a decision once it happens, while the AI Act requires organisations to prevent harm before it does. It is the difference between a candidate being able to challenge an automated rejection and a recruiter being equipped to notice and correct it before it is sent. Reading them together provides the most complete picture of how human oversight is meant to work in the age of AI.
What this means in practice
Human oversight only works when people can actually use it in real situations. Too often, oversight is treated as a checkbox exercise: a name on a form, a signature at the end of a process, or a last-minute “human review” that changes nothing. Real oversight requires people who are informed, empowered, and supported to question what an AI system produces.
For businesses
- Map your AI systems and identify where Article 22 GDPR and the AI Act may apply. Treat the mapping as living documentation, not a one-time inventory.
- Make oversight a real role. Train the people who interact with the system, give them clear decision power, and measure whether they use it.
- Tie documentation to action. Use logging and technical documentation to run regular checks, spot drift, and fix issues. Pair the GDPR’s data protection impact assessment (DPIA) with the AI Act’s fundamental rights impact assessment (FRIA) where required, so risks are handled both as data protection issues and as broader fundamental rights issues.
For individuals
- If you face an automated decision, ask for a clear explanation of how your data influenced the outcome and request human review. The right exists even when the system looks opaque. Connect your request to Article 22 safeguards and the access rights under the GDPR.
Conclusion
Think back to the job applicant who never heard back or the borrower whose loan was automatically declined. In both cases, people expect that a human can still step in with judgment and context, such as a job applicant being invited to an interview despite the algorithm saying “no”.
The direction of the European Union is clear. The GDPR protects you after a decision is made, while the AI Act aims to prevent the fall in the first place by building oversight into the climb. Yet human oversight is not inherently beneficial. It rests on the assumption that human judgment is always a safeguard, even though research increasingly challenges that belief.
Real oversight is not a label you add to an AI system. It is a responsibility that must be designed, exercised, and proven over time. It’s the rope that keeps humans safely connected when technology takes us higher.
Key Takeaways
- AI decisions still need a human voice. Even as automation becomes smarter, people still expect someone to step in when things go wrong, whether it’s a job rejection or a denied loan.
- The GDPR reacts. The AI Act prevents. GDPR gives individuals rights after an automated decision hits. The AI Act goes a step further, requiring built-in human oversight for high-risk AI before things go off track.
- Human oversight must be real, not ritual. A checkbox or rubber stamp isn’t enough. Oversight means giving people the training, tools, and authority to challenge what the system says.
- Two laws, one goal. The GDPR and the AI Act aren’t competing; they’re complementary. Together, they create layered protection: one for individuals, one for systems.