


Artificial Intelligence and GDPR: 9 key takeaways


published on 25 June 2025 | reading time approx. 10 minutes


Artificial Intelligence (AI) is no longer a futuristic notion — it is already embedded in our daily business practices. Whether through in-house developments or third-party solutions, AI powers tools like predictive recruitment platforms, industrial maintenance systems, customer service chatbots and much more. These technologies hold great promise for boosting operational efficiency and competitiveness.

But this innovation comes with risks and responsibilities. Many AI systems process personal data, making them subject to the General Data Protection Regulation (GDPR), which sets strict rules for collecting, using, and reusing such data, and the European Regulation on Artificial Intelligence (AI Act), which introduces additional obligations based on risk levels.

To support companies in developing and deploying AI responsibly, the French Data Protection Authority (CNIL) has issued updated practical guidance — first in 2024 and more recently in early 2025. This guidance outlines the key steps and principles to follow when designing or implementing AI systems that process personal data.

1. PURPOSE LIMITATION: know what you are doing and why

The GDPR requires that the collection of personal data must have a specific, legitimate, and explicit purpose defined before any processing begins.

For AI projects, this means clearly stating: 
  • What the system is meant to do — for example: automatically screen CVs, categorize customer service emails, or detect recurring maintenance issues;
  • Why it is being done — such as improving recruitment efficiency or optimizing support operations.


Any change in the intended use, in particular reusing data for new purposes, must be carefully tracked, documented, and justified, and the data subjects concerned must receive updated information.


This rule applies whether the AI system is developed for a defined operational use (e.g. screening or analyzing job applications) or for a general-purpose system (such as a voice recognition model trained on call center recordings). In the latter case, even if the operational use is not yet fully defined, the purpose must still be described precisely, based on the type of system and its foreseeable capabilities (e.g., transcribing employee–customer interactions for quality monitoring or voice-commanded support tools).

2. Define YOUR ROLE under the GDPR: know your obligations

When developing AI systems using personal data, you must determine whether your company acts as a data controller or as a data processor: 
  • You are a controller if you decide and control the purpose and means of processing (i.e. why and how the data is used). This includes situations where you independently build training datasets or develop models for your own use. If decisions are made jointly with others, you are joint controllers, and shared responsibilities must be formalized (e.g., via contract);
  • You are a processor if you process data strictly on behalf of a client who defines both the purpose and means. A data processing agreement is then required, and you must follow your client's instructions, ensure data security, avoid any other use of the data, and generally comply with the obligations of a data processor under the GDPR.

Your role defines your obligations: controllers bear full responsibility for GDPR compliance, including establishing a legal basis, providing information to data subjects, and conducting impact assessments. Processors, on the other hand, must comply with instructions, maintain data security, and avoid any other use of the data. Processors also have their own responsibilities under the GDPR, including refusing to carry out non-compliant processing and advising the controller.

3. LEGAL BASIS and data reuse: avoid the pitfalls

Any AI project involving personal data must be grounded in a valid legal basis under the GDPR. Depending on whether data is collected directly, obtained from public sources, or repurposed from existing datasets, additional checks are often necessary.

For direct collection, you must clearly identify and document the legal basis supporting the processing (most often legitimate interest, or consent) and ensure transparency towards data subjects.

When using data obtained from public sources, it is essential to verify that the data was collected lawfully and that its reuse for AI respects the original context and the data subjects’ rights.

For repurposing existing datasets, a compatibility assessment is mandatory to confirm the new use aligns with the original purpose. If not, a new legal basis is necessary.

The CNIL stresses that evolving technical or operational needs alone cannot justify data reuse without a valid legal basis. In all cases, transparency with data subjects about the legal basis and data reuse (i.e. prior information) remains a strict requirement.

4. DATA MINIMIZATION: collect only what is strictly necessary

AI does not need all available data – only what is strictly required to meet your defined objective. 

The GDPR principle of data minimization means collecting and processing data that is adequate, relevant, and limited to what is necessary for the purpose you have set. In practice:
  • Developing an HR assistant to pre-screen candidates? You probably do not need their full personal address or unrelated personal details;
  • Training a predictive maintenance model? Individual operator IDs or personal performance data may not be necessary;
  • Building a chatbot for customer support? Voice recordings or photos are rarely essential; anonymized transcripts often suffice.

A practical obstacle is that it is not always easy to know exactly which data the AI tool will process and to whom it will be transmitted.

The CNIL recommends using relevance grids and regularly reviewing data collections to ensure each data element has a clear and justified purpose. We recommend being very demanding in your contract negotiations with your AI solution provider to ensure that they provide you with all the necessary information and guarantees regarding the compliance of their solution.

Always ask: is this data truly necessary for achieving the project’s objectives?
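As a purely illustrative sketch of the allow-list idea behind a relevance grid (the field names and the CV-screening scenario are hypothetical, not taken from the CNIL guidance), data minimization can be enforced in code before any record reaches an AI tool:

```python
# Illustrative sketch only: an allow-list filter applied before candidate
# data is sent to a hypothetical CV-screening tool. Field names are invented.

# Fields with a clear, justified purpose for the screening objective.
ALLOWED_FIELDS = {"skills", "experience_years", "education", "languages"}

def minimise(record: dict) -> dict:
    """Drop every field that is not strictly necessary for the purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

candidate = {
    "name": "A. Dupont",
    "home_address": "12 rue Exemple, Paris",  # not needed to screen a CV
    "skills": ["Python", "SQL"],
    "experience_years": 5,
}

print(minimise(candidate))  # only 'skills' and 'experience_years' survive
```

The value of such a filter is less the code itself than the discipline it imposes: every field must be explicitly added to the allow-list, which forces the "is this truly necessary?" question for each data element.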

5. RETENTION PERIODS: define, monitor and limit data storage

Under the GDPR, personal data cannot be kept indefinitely. You must define, in advance, how long personal data will be retained based on the purpose of processing. This is not always easy, as it often depends on your supplier providing you with the necessary and accurate information.

Applied to AI, this involves two separate phases, which your provider should also clarify:
  • During development, ensure that data retention is planned and monitored over time. Individuals must be informed about how long their data will be stored — typically through a privacy notice;
  • After deployment, once data is no longer needed for operational purposes, it should be deleted. However, it may be retained for system maintenance or improvement, provided strict safeguards are in place, such as restricted access and separate, secure environments.

In some cases, keeping training data may be justified to detect bias or conduct audits. If so, retention must remain limited to what is strictly necessary and supported by strong security measures. 

Where possible, storing metadata or statistical summaries may be a preferable alternative to keeping raw personal data.
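As a minimal, purely illustrative sketch (the purposes and durations are invented examples, not periods prescribed by the GDPR or the CNIL), a retention policy can be made operational by tagging each record with its purpose and purging records past their deadline:

```python
# Illustrative sketch only: purpose-based retention periods and a purge check.
# The purposes and durations below are invented for illustration.

from datetime import date, timedelta

RETENTION = {
    "model_training": timedelta(days=365),  # example: 1 year
    "bias_audit": timedelta(days=730),      # example: 2 years, with safeguards
}

def is_expired(collected_on: date, purpose: str, today: date) -> bool:
    """True if the record has outlived the retention period for its purpose."""
    return today > collected_on + RETENTION[purpose]

records = [
    {"id": 1, "collected_on": date(2023, 1, 10), "purpose": "model_training"},
    {"id": 2, "collected_on": date(2025, 3, 1), "purpose": "bias_audit"},
]

today = date(2025, 6, 25)
kept = [r for r in records if not is_expired(r["collected_on"], r["purpose"], today)]
print([r["id"] for r in kept])  # record 1 is past its one-year period
```

Defining the periods per purpose in one place also makes it straightforward to document them in a privacy notice and to demonstrate them to a supervisory authority.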

6. DATA TRANSFERS outside the EU: no AI without safeguards

AI systems often rely on cloud services or platforms hosted outside the European Union, exposing personal data to jurisdictions with lower privacy protections.

For example, an HR software provider may use a natural language processing service hosted in the United States, or a predictive maintenance tool might operate on servers located in Asia.

Any transfer of personal data beyond the EU must comply with GDPR requirements by relying on a valid legal mechanism, such as Standard Contractual Clauses or Binding Corporate Rules. Additionally, the transferred data must benefit from protections equivalent to those under the GDPR, including encryption, data segregation, and regular audits. A written data transfer agreement (or a clause in a broader contract) should be concluded to clarify these steps.

Organizations should carefully verify their providers’ commitments to GDPR compliance and secure appropriate contractual guarantees before transferring data internationally.

7. DPIA (Data Protection Impact Assessment): a must for high-risk AI

A DPIA helps identify and assess privacy risks related to data processing, enabling you to implement measures that reduce those risks to an acceptable level, including defining security safeguards.

A DPIA is mandatory if the processing is likely to result in a high risk to individuals' rights and freedoms. This is often the case in AI projects, since AI is generally a ‘new or emerging technology’ and also generally meets a second criterion that qualifies it for a DPIA, such as:
  • Processing sensitive or large-scale personal data;
  • Involving vulnerable individuals (e.g. minors, elderly, persons with disabilities);
  • Combining multiple data sets or performing data cross-matching;
  • Raising risks such as data misuse, discrimination, surveillance, or ethical concerns. 

For AI systems with a clearly defined use, the DPIA should cover the entire lifecycle: from design to deployment and operation. For general-purpose AI, the DPIA should focus on the development phase and be shared with downstream users so they can perform their own risk assessments.

If such a DPIA is not conducted and compliance steps are not anticipated, the risks to consider include: data confidentiality breaches, misuse by employees or hackers, automated discrimination, false or misleading outputs, lack of human oversight, loss of user control over data, AI-specific attacks, reputational damage and serious ethical concerns.

Based on DPIA results, implement measures such as encryption, data minimization, anonymization, privacy-by-design techniques (like federated learning), tools supporting data subject rights, audits, organizational controls, ethical governance, and thorough documentation.

8. INFORMING individuals: key obligations and best practices

Organizations must inform individuals (or update the existing information) when their personal data is used by an AI tool, especially to train AI systems. This information needs to be clear, accessible, and tailored to the specifics of AI.

If data is collected directly, individuals must be informed at the time of collection. For indirect collection, information must be provided as soon as possible and no later than the first contact or within one month.

Notices must specify the identity of the data controller, purpose of processing, data types, recipients, transfers, retention periods, and individuals’ rights. In the context of AI, it is essential to explain how data trains the model and distinguish between datasets, models, and outputs.

The CNIL stresses that information should be provided directly (via email, forms, or letters) when possible. If individual notice is not feasible, a general public notice (e.g., on a website) may be acceptable. Notices must be accessible, clear, and prioritized by relevance.

Individual notice is, however, not required if individuals have already been informed or if contacting them would be disproportionately difficult. In such cases, a general notice is acceptable, provided appropriate safeguards are in place (e.g., pseudonymization, limited retention, impact assessments).

9. Exercising INDIVIDUAL RIGHTS: anticipate potential requests

Under the GDPR, individuals have core rights over their personal data, including access, rectification, erasure, restriction, and objection. These rights apply whenever personal data is processed, including its use in AI training datasets or potential retention within AI models.

However, applying these rights in the AI context raises specific challenges:
  • Data is often not stored in an easily retrievable format;
  • It is often difficult to identify whether a model contains personal data linked to a specific individual;
  • Technical and financial burdens may be significant.

To address these issues, the CNIL encourages all reasonable efforts to integrate privacy protections at the model design stage, particularly regarding training data. This includes:
  • Anonymizing models, where doing so does not conflict with the model’s intended purpose;
  • Developing technical safeguards to prevent confidential personal data from being revealed by the model.
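One common technical safeguard of this kind is pseudonymization of direct identifiers before data enters a training set. As a purely illustrative sketch (the keyed-hash approach and field names are our own example, not a technique mandated by the CNIL), and bearing in mind that pseudonymized data remains personal data under the GDPR:

```python
# Illustrative sketch only: replacing direct identifiers with keyed-hash
# tokens before records enter a training set, so raw names never reach
# the model. Pseudonymised data is still personal data under the GDPR.

import hashlib
import hmac

# Placeholder key: in practice, kept secret and outside the training environment.
SECRET_KEY = b"keep-this-out-of-the-training-environment"

def pseudonymise(identifier: str) -> str:
    """Map an identifier to a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

row = {"customer": "Jane Doe", "transcript": "My order arrived late."}
training_row = {**row, "customer": pseudonymise(row["customer"])}
print(training_row["customer"] != row["customer"])  # True: raw name removed
```

Because the same identifier always maps to the same token, records belonging to one individual can still be located and deleted on request, which is precisely what makes rights requests easier to honour than with raw, scattered identifiers.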

While cost, technical limitations, or practical difficulties may justify restricting the exercise of certain rights, such refusals must be justified and proportionate. 

The CNIL acknowledges that it will assess whether reasonable efforts were made and may allow extended timeframes for compliance in complex cases, but also stresses the need to stay up to date with scientific and technical developments, as the state of the art is evolving rapidly.

Ensuring GDPR compliance for an AI project is not just a legal formality: it’s a strategic approach that protects your business and builds user trust. It is all about adopting the right habits:
  • Define a clear framework (purpose, legal basis, retention);
  • Equip yourself with simple tools (data grids, records, retention policies);
  • And most importantly, do not neglect individual rights and transparency.

And, of course, in addition to this information relating to personal data, it is increasingly common to provide 'AI Charters' of a technical and ethical nature, which supplement or update an existing IT charter.

DATA PROTECTION BITES

Authors


Frédéric Bourguet

Attorney at law (France)

Associate Partner

+33 1 8621 9274



Raphaëlle Donnet

Attorney at law (France)

Associate

+33 1 7935 2542


RÖDL & PARTNER FRANCE

Discover more about our offices in France.