
LLMs in Healthcare: what’s helpful, what’s harmful, and what do you need to know?


The rise of generative AI is rewriting how we work.

ChatGPT, one of the most talked-about tools in this AI wave, is already being tested in hospitals, clinics and admin centres across Europe. Its ability to generate human-like text at speed feels revolutionary. But in a field that depends on trust, accuracy and accountability, we can’t afford to let novelty outrun caution.

So, what role should a tool like ChatGPT play in healthcare? What are the real risks? And how can healthcare providers use these tools responsibly, without breaching ethical or legal boundaries?

Let’s get clear on what ChatGPT is, what it isn’t, and what every healthcare organisation should know before plugging an LLM into the workflow.

 

What exactly is ChatGPT?

ChatGPT is a Large Language Model (LLM), a form of artificial intelligence trained on massive amounts of text. It mimics human conversation by predicting likely word sequences, generating everything from emails to code to essays in a matter of seconds.

But it’s not a search engine. It doesn’t ‘know’ facts. It doesn’t think. It doesn’t verify sources.

Instead, it generates output based on probabilities, which means it can sound confident and still be completely wrong. In a marketing context, that’s a typo. In a medical one, it could be dangerous.
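
To make that concrete: under the hood, a language model simply scores possible next words. The short Python sketch below is an illustration only, using the small, openly available GPT-2 model via the Hugging Face transformers library (ChatGPT itself cannot be inspected this way). It prints the five continuations the model rates as most likely; nothing in it checks whether any of them is true.

    # Illustrative sketch: uses the open GPT-2 model, not ChatGPT itself.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The recommended treatment for this condition is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only

    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k=5)

    # The model ranks plausible continuations; it never verifies them.
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.2%}")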

 

Why healthcare is interested, and rightly cautious

Healthcare is under strain. Staff shortages, administrative overload and increasing patient expectations mean that any technology promising efficiency deserves a second look. And here, ChatGPT can genuinely help.

Used carefully, LLMs can support healthcare staff with a range of non-clinical, administrative tasks. For example, they can help summarise documents, draft internal communications, or gather general information on a topic. They might even generate templates for policies or support code generation for internal tools.

These uses, when well-defined and monitored, can reduce time spent on repetitive work and free up resources for direct patient care.

But, and it’s a big but, this only works if we draw clear boundaries. Because the moment these tools creep into patient-facing decisions, the risks start to outweigh the gains.

 

Why ChatGPT is not fit for clinical use

Despite its capabilities, ChatGPT is not a medical tool. It is not certified, verified, or safe for diagnosis, prognosis or treatment. And yet, because its tone is fluent, authoritative and confident, it can easily lull users into trusting its outputs more than they should.

Here’s why that’s risky:

1. It can’t guarantee accuracy.

LLMs are trained on internet-scale data, which includes outdated, false or biased content. They don’t cite trustworthy medical sources. And they can hallucinate: generate plausible-sounding but incorrect answers, with no internal mechanism to catch the error.

2. There’s no explainability.

When an AI suggests a diagnosis or treatment path, we need to understand how it got there. With ChatGPT, we can’t. It’s a black box, even for its own developers.

3. Personal data is at risk.

Any data entered into ChatGPT may be stored or reused. We don’t always know where that data goes, how long it’s kept, or whether it could be re-identified. That’s a red flag under the GDPR, especially in the health sector.

4. It can reinforce bias.

Training data shapes output. If that data contains bias, based on gender, race, socio-economic background or location, the model can perpetuate it. In healthcare, that can lead to unequal treatment and poor patient outcomes.

5. It’s not legally allowed to make decisions alone.

The GDPR (Article 22) explicitly protects individuals from decisions based solely on automated processing that significantly affect them. This includes medical decisions. Human oversight is legally required.

In short, handing ChatGPT any clinical responsibility is premature, unsafe, and potentially unlawful.

 

So, what’s allowed and what’s smart?

The safest, and currently the only GDPR-compliant, way to use ChatGPT in healthcare is for internal, administrative support. This includes drafting non-sensitive text, creating internal summaries, or helping with early brainstorming for policies or content.

But even here, healthcare organisations should not proceed without a clear internal policy. This policy should define exactly what the tool can and cannot be used for, and ensure that staff are trained to spot hallucinations, validate results, and avoid sharing personal data.

For example, summarising a scientific paper on diabetes is fine. Asking ChatGPT to explain a patient’s lab results is not.

 

Build a safe framework for use

Any organisation looking to integrate LLMs like ChatGPT into healthcare processes needs to be proactive. That means:

1. Creating a formal AI usage policy

This AI usage policy should clearly outline permitted use cases, restrictions on personal data, expectations for review and validation, and consequences for misuse. It’s not enough to tell staff “use with caution”. Give them real guidance.

2. Training employees

Many people still don’t understand how LLMs work or what risks they pose. Invest in awareness training that explains how the technology functions, how bias shows up, and why blind trust is dangerous. AI literacy training and accountability are a must under the EU AI Act.

3. Ensuring GDPR compliance

No AI tool is exempt from Europe’s privacy laws. If personal data is being processed, directly or indirectly, the standard obligations apply. That includes purpose limitation, data minimisation, and safeguards around transfers and retention. For more complex use, a DPIA (Data Protection Impact Assessment) may be required.
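
If text does end up going to an external tool, data minimisation should happen before it ever leaves your systems. The sketch below is a deliberately simple, hypothetical illustration of that idea in Python; the patterns and the minimise helper are made up for this example, cover only a few obvious direct identifiers, and are no substitute for vetted de-identification tooling, legal review and, where required, a DPIA.

    # Illustrative only: a few hypothetical patterns, not a real de-identification tool.
    import re

    REDACTIONS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),     # e-mail addresses
        (re.compile(r"\b\d{2}[./-]\d{2}[./-]\d{2,4}\b"), "[DATE]"),  # simple date formats
        (re.compile(r"\+?\d[\d ]{7,}\d"), "[PHONE]"),                # phone-like numbers
    ]

    def minimise(text: str) -> str:
        """Replace obvious direct identifiers before text is shared with an external tool."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    print(minimise("Mail jan.peeters@example.org or call +32 470 12 34 56 before 03/05/2024."))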

 

Conclusion: Human-first, AI-assisted

The future of healthcare will absolutely include AI. But we need to keep our ethical compass in sight.

ChatGPT can ease workloads, speed up admin, and support staff in back-office tasks, as long as it’s used within a responsible, regulated framework. It is not a shortcut for diagnosis, treatment, or patient communication.

The lesson is simple: use AI as a tool, not a substitute. Keep humans in the loop. Put privacy first. And don’t let shiny tech distract from the trust that underpins every patient interaction.


Need help drafting your AI usage policy, or assessing LLM risks in your workflow?

Let’s build it together. CRANIUM offers practical, cross-disciplinary expertise to keep your healthcare organisation safe, smart, and compliant with relevant legislation and regulation.


Written by

Charlotte Bourguignon

Anse Boogaerts
