How to handle the use of Large Language Models in your organisation?

Large Language Models (LLMs) like OpenAI’s GPT-4 or Google’s Gemini are finding their way into the workforce. Whether encouraged by the organisation or used covertly, you can be sure that employees, especially white-collar employees, are using various LLMs in their day-to-day work. This is not a bad thing: AI can automate repetitive work and ignite creativity. However, its integration also comes with challenges.

This blog post will guide you through the essential steps and considerations for handling the growing presence of LLMs in your organisation. We will explore best practices for implementation, address common concerns related to data protection and security, and offer strategies for ensuring that LLMs complement human capabilities rather than replace them.

What are Large Language Models? 

Unless you’ve been living under a rock, you have probably already heard of ChatGPT or Gemini. These AI systems are called Large Language Models, or LLMs. Because they have been trained on extensive data sets, they are able to mimic human language by recognising patterns in it. LLMs produce coherent and contextually relevant responses by predicting the most likely sequence of words based on the input they receive. The result? They can generate coherent text, translate between languages, summarise documents and even answer questions, which makes them very useful in the workplace. Their applications are increasingly widespread, appearing in chatbots, virtual assistants, content creation tools, and even as aids in team meetings and brainstorming sessions.
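
For the technically curious, this is roughly what using an LLM from code looks like. Below is a minimal sketch assuming the official “openai” Node SDK; the model name, prompt and summarising task are illustrative choices, not a recommendation:

```typescript
// Minimal sketch of calling an LLM from code, using the official "openai"
// Node SDK. The API key is read from the OPENAI_API_KEY environment variable.
import OpenAI from "openai";

const client = new OpenAI();

async function summarise(text: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // illustrative; any chat-capable model works
    messages: [
      { role: "system", content: "Summarise the user's text in three sentences." },
      { role: "user", content: text },
    ],
  });
  // The model predicts the most likely continuation and returns it as a message.
  return response.choices[0].message.content ?? "";
}
```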

Putting Large Language Models into practice 

The practical implementation of LLMs in your organisation can go two ways:  

  • Official Implementation: Company-approved and managed LLMs. 
  • Shadow IT: Unofficial or unauthorised use of publicly available LLM tools by employees. 

We will discuss both scenarios, along with their best practices and pitfalls. 

Implementation 

Smooth implementation can be achieved by following these eight high level steps: 

  1. First, determine the most suitable applications for your organisation. Review your business processes to find areas where LLMs can be beneficial, such as customer service, content generation or data analysis. Identify specific tasks that can be assigned to LLMs. It is recommended to involve your employees and check whether they have any knowledge of, or already use, certain LLMs. 
  2. Next, choose the right model. Select an LLM that fits your specific requirements, considering the complexity of the tasks, the model’s capabilities, and the necessary resources. 
  3. Consider ethical and privacy issues. Stay mindful of ethical and privacy concerns related to AI use, and ensure your implementation complies with data protection regulations and promotes responsible AI practices. 
  • Inform users (employees, clients, etc.) about the fact that an LLM is used. 
  • If you are processing personal data, ensure you have a suitable legal basis such as consent or legitimate interest. 
  • Create an acceptable use policy for your employees, informing them on how to use the LLM and specifically which data not to put in the LLM (business-sensitive data, personal data, etc.). 
  4. If applicable, gather and refine the data. Collect and preprocess the relevant data needed to fine-tune the selected model, ensuring it matches your business context and delivers accurate, specialised results. 
  • If you use a third-party LLM, ensure its data collection and model align with your business needs and context. 
  5. Plan the integration carefully. Seamlessly incorporate the LLM into your existing business workflows and technology setup, minimising disruptions and ensuring smooth operation. Provide training or workshops for the employees involved in using the LLM, raising their AI literacy at the same time. 
  6. Regularly monitor and assess performance. Continuously evaluate the LLM’s effectiveness using metrics like accuracy, response time and user feedback to identify and address any areas needing improvement (a minimal monitoring sketch follows this list). 
  7. Focus on scalability and upkeep. Prepare for the ongoing maintenance and potential expansion of your LLM implementation, taking into account data storage, computational needs and routine updates. 
  8. Last but definitely not least, improve AI literacy. In line with the AI Act, promote a broad understanding and acceptance of AI technologies within your organisation by providing training and resources, helping employees to use LLMs effectively. Growing AI literacy in your company becomes mandatory as of 2 February 2025!
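
To make step 6 concrete, here is a minimal monitoring sketch. It assumes nothing beyond your own LLM client and telemetry pipeline: callLlm and logMetric are hypothetical placeholders you would wire up to your actual stack.

```typescript
// Wrap each LLM call to record latency; collect user feedback separately.
type Feedback = "helpful" | "not_helpful";

async function monitoredLlmCall(
  prompt: string,
  callLlm: (p: string) => Promise<string>,                    // your LLM client (placeholder)
  logMetric: (name: string, value: number | string) => void,  // your telemetry (placeholder)
): Promise<string> {
  const start = Date.now();
  const answer = await callLlm(prompt);
  logMetric("llm.response_time_ms", Date.now() - start);
  return answer;
}

function recordFeedback(
  logMetric: (name: string, value: number | string) => void,
  feedback: Feedback,
): void {
  logMetric("llm.user_feedback", feedback); // feeds the evaluation loop
}
```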

Unofficial usage 

It can be expected that employees will use publicly available LLMs such as ChatGPT for their work. The usage of such unapproved AI systems is often referred to as ‘shadow IT’. There are three ways to deal with this as an organisation: 

1. Accept the usage 

The benefit of this approach is that it creates an opportunity to grow AI literacy within your company. Take a moment to inform your employees about best practices, organise workshops on prompting, and let them share their work with each other. 

A pitfall is that your organisation loses control over which data ends up in the LLM, which may include customer data or business-sensitive data. It is therefore important to create an acceptable use policy for LLMs and inform your employees about which data they can (and cannot) use in them.
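
Such a policy can be backed by a lightweight technical control. The sketch below is illustrative only: a handful of regular-expression checks run before a prompt leaves the organisation. The patterns are assumptions for demonstration, nowhere near a full data loss prevention solution.

```typescript
// Illustrative pre-submission check: flag obvious sensitive patterns before
// a prompt is sent to an external LLM. Patterns are examples only.
const SENSITIVE_PATTERNS: RegExp[] = [
  /\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b/,     // IBAN-like account numbers
  /\b\d{2}\.\d{2}\.\d{2}-\d{3}\.\d{2}\b/, // national-number-style identifiers
  /confidential|internal only/i,          // document classification markers
];

function violatesAcceptableUse(prompt: string): boolean {
  return SENSITIVE_PATTERNS.some((pattern) => pattern.test(prompt));
}

// Example: warn the user instead of silently sending the prompt.
if (violatesAcceptableUse("Summarise this confidential report ...")) {
  console.warn("This prompt appears to contain data not allowed in external LLMs.");
}
```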

2. Formally prohibit the usage 

Apart from using the acceptable use policy to inform employees about LLMs, it can also be leveraged to prohibit the use of any non-approved or external LLM. This falls into the same category as prohibiting the use of work equipment for non-professional activities such as watching Netflix or doing your taxes. 

This practice might prove ineffective, since employees can simply access the LLMs anyway. Additionally, monitoring employees to enforce such a ban is only allowed under strict conditions under labour law and could be disproportionate under the GDPR.

3. Block the usage 

If your organisation works in a high-risk environment (financial, medical, etc.) where no data may leave your premises whatsoever, it is recommended to outright block access to the most-used LLMs. This can be done company-wide or for specific groups of employees.
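
One way to implement such a block is at the network level, for example through your web proxy. Below is a proxy auto-configuration (PAC) style sketch; note that real PAC files are plain JavaScript (the type annotations would be stripped), and the domain list is a small illustrative sample, not an exhaustive blocklist.

```typescript
// PAC-style sketch: route known LLM domains into a black hole, let the
// rest of the traffic pass. Domain list is illustrative, not exhaustive.
const BLOCKED_LLM_DOMAINS: string[] = [
  "chat.openai.com",
  "chatgpt.com",
  "gemini.google.com",
  "claude.ai",
];

function FindProxyForURL(url: string, host: string): string {
  for (const domain of BLOCKED_LLM_DOMAINS) {
    if (host === domain || host.endsWith("." + domain)) {
      return "PROXY 127.0.0.1:9"; // non-existent proxy: the connection fails
    }
  }
  return "DIRECT"; // all other traffic is unaffected
}
```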

Regulatory considerations when using LLMs at work 

Two specific pieces of legislation must be considered when implementing an LLM: the AI Act and the GDPR. Both mandate that users of the LLM (employees, clients, etc.) are informed that they are interacting with an AI system. This can be achieved in various ways, for example by adding an introductory message explaining that an AI system is being used.
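
As a sketch of what such an introductory message could look like in a chatbot, assuming a hypothetical ChatMessage structure (the wording and types are illustrative, not prescribed by either regulation):

```typescript
// Surface an AI disclosure before the first model response.
interface ChatMessage {
  role: "system" | "assistant" | "user";
  content: string;
}

const DISCLOSURE: ChatMessage = {
  role: "assistant",
  content: "You are chatting with an AI assistant (a large language model), not a human.",
};

function startConversation(history: ChatMessage[]): ChatMessage[] {
  // Make the disclosure the very first message the user sees.
  return [DISCLOSURE, ...history];
}
```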

The GDPR requires a legal basis, such as consent or legitimate interest, when processing personal data. Note that not all LLM deployments process personal data, e.g. when automating information retrieval from a database. Carefully consider which legal basis best suits the LLM’s processing activity.

A separate note must be made for LLM applications such as chatbots. These operate on users’ devices (browser, smartphone) and thus also fall under the much stricter ePrivacy Directive. Unless the operation of the LLM is strictly necessary for the requested service, the user’s consent is required before the LLM can be activated (similar to cookies).
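
In practice this works much like a cookie banner: the chatbot is only loaded once consent has been recorded. A minimal browser-side sketch, where hasConsent, the consent key and the widget URL are all hypothetical placeholders:

```typescript
// Gate the chatbot widget behind an explicit opt-in, cookie-banner style.
function hasConsent(purpose: string): boolean {
  // Read a previously stored consent decision; how consent is collected
  // (banner, settings page) is up to your consent management setup.
  return localStorage.getItem(`consent:${purpose}`) === "granted";
}

function loadChatWidget(): void {
  const script = document.createElement("script");
  script.src = "https://example.com/chat-widget.js"; // hypothetical widget URL
  document.head.appendChild(script);
}

// Only activate the LLM chatbot once the user has opted in.
if (hasConsent("llm-chatbot")) {
  loadChatWidget();
}
```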

Conclusion 

One thing is certain: Large Language Models are here to stay. One of the biggest challenges for companies wanting to automate processes and prepare for the future will be determining which LLM is best suited to each specific use case. It will be crucial to train and inform your employees to improve their AI literacy, in compliance with the AI Act.

Regardless of whether your organisation chooses to develop its own LLMs or use the publicly available ones, establishing clear acceptable use policies is a quick win. These policies help ensure employees understand what is permitted and what is not. 

LLMs will certainly face regulatory challenges, as they are subject to various pieces of legislation (such as the AI Act, the GDPR, and potentially the ePrivacy Directive). However, if implemented diligently, they can make work a lot less of a hassle.

Written by

Enzo Marquet
