


Data protection throughout the AI lifecycle – The German Data Protection Conference (DSK) provides guidance for AI systems

published on 23 October 2025 | reading time approx. 4 minutes


The German Data Protection Conference (DSK), the association of Germany's independent federal and state data protection supervisory authorities, set out specific requirements for the data-protection-compliant development and operation of AI systems back in June 2025. The focus is on technical and organizational measures (TOMs) that must be taken throughout the entire life cycle of an AI system, from planning and development to productive use.

This is intended to provide assistance, particularly to developers and manufacturers of AI systems, in implementing the General Data Protection Regulation (GDPR) in the context of technical innovation.


Initial situation

Responsibility under data protection law arises not only when an "operational" AI system is used, but already in the upstream phases:


Developers and manufacturers are generally already considered controllers within the meaning of Art. 4 No. 7 GDPR during the design and development phase, e.g. when they use personal data for training, validation, or testing purposes. Anyone who deploys and operates AI systems likewise bears responsibility under data protection law, in particular for the lawfulness of processing and the implementation of data subjects' rights.


At the same time, the DSK points out how risk-prone AI systems are: automated decisions, large volumes of data, and model logic that is difficult to trace require heightened precautions to protect the rights and freedoms of natural persons.


GDPR-compliant design of an AI system throughout its life cycle

The life cycle of an AI system typically includes several consecutive phases, which can be divided into design, development, implementation, and operation & monitoring. Each of these phases has specific requirements, which are derived from the data protection principles set out in Art. 5 GDPR on the one hand and the appropriate level of protection under Art. 32 GDPR on the other.


The basis for this is the so-called Standard Data Protection Model (SDM), which formulates the following guarantee objectives: data minimization, availability, confidentiality, integrity, transparency, intervenability, and unlinkability. In its guidance, the DSK translates these objectives into concrete TOMs, which are to be adapted to and integrated into the respective life cycle phase of the AI system.


Practical implementation of technical and organizational measures

Consequently, data protection-compliant AI use requires more than just selective measures. It is crucial to implement technical and organizational requirements early on and in a phase-appropriate manner, from the initial idea to ongoing operation:


Design phase: Consider data protection requirements at an early stage

A key issue in the design phase is data selection. Here, purpose limitation, data minimization, and transparency must already be ensured, for example through documented data sheets ("Datasheets for Datasets") and by examining alternative data types such as synthetic or anonymized data. The subsequent implementation of data subject rights must also be considered at an early stage.
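Documented data sheets of the kind the DSK refers to can be kept as simple structured records. The following is a minimal sketch; the field names and the minimization check are illustrative assumptions, not a format prescribed by the DSK guidance.

```python
from dataclasses import dataclass

# Hypothetical, minimal "Datasheet for Datasets" record. Field names and the
# check below are illustrative assumptions, not prescribed by the DSK.
@dataclass
class DatasetDatasheet:
    name: str
    purpose: str                   # purpose limitation, Art. 5(1)(b) GDPR
    legal_basis: str               # e.g. consent, performance of a contract
    contains_personal_data: bool
    anonymization: str             # "none", "pseudonymized", "anonymized", "synthetic"
    retention_period_days: int

    def minimization_check(self) -> list:
        """Flag entries that suggest avoidable use of personal data."""
        findings = []
        if self.contains_personal_data and self.anonymization == "none":
            findings.append("consider anonymized or synthetic alternatives")
        if not self.purpose:
            findings.append("document the purpose before selecting data")
        return findings

sheet = DatasetDatasheet(
    name="loan-applications-2024",
    purpose="train a credit-risk scoring model",
    legal_basis="Art. 6(1)(b) GDPR",
    contains_personal_data=True,
    anonymization="none",
    retention_period_days=365,
)
```

Keeping such a record per dataset makes the examination of alternative data types (synthetic or anonymized data) an explicit, auditable step rather than an afterthought.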


Development phase: Consistent implementation of data minimization

The focus here is on the preparation, training, and validation of AI models. Protective measures against manipulation (e.g., data poisoning) are required, as well as technical precautions that enable the subsequent deletion or correction of data in the AI model, e.g., through modular model architectures or machine unlearning. The DSK makes it clear that the traceability of model decisions and the management of training data are also relevant issues under data protection law.
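One way to make later deletion technically feasible is a shard-based model architecture (as in "SISA"-style machine unlearning approaches), where each shard is trained on a disjoint slice of the data, so erasing one record only requires retraining that record's shard. The toy sketch below stands in for a real model with a per-shard average; all names are illustrative assumptions, not the DSK's prescription.

```python
# Toy sketch of a shard-based architecture that makes deletion cheap:
# each shard "trains" (here: averages) on a disjoint slice of the records,
# so forgetting one record retrains only the affected shard.
class ShardedModel:
    def __init__(self, records, n_shards=4):
        self.shards = [dict() for _ in range(n_shards)]
        for rid, value in records.items():
            self.shards[hash(rid) % n_shards][rid] = value
        self.shard_params = [self._train(s) for s in self.shards]

    @staticmethod
    def _train(shard):
        # Stand-in for real per-shard training.
        return sum(shard.values()) / len(shard) if shard else 0.0

    def predict(self):
        # Aggregate only shards that still hold data.
        trained = [p for p, s in zip(self.shard_params, self.shards) if s]
        return sum(trained) / len(trained)

    def forget(self, rid):
        """Erase one record (e.g. after an Art. 17 GDPR request) and
        retrain only the affected shard instead of the whole model."""
        idx = hash(rid) % len(self.shards)
        self.shards[idx].pop(rid, None)
        self.shard_params[idx] = self._train(self.shards[idx])

model = ShardedModel({"user-1": 1.0, "user-2": 2.0, "user-3": 3.0, "user-4": 4.0})
model.forget("user-2")
```

The design choice is the point: because training artifacts are partitioned per shard, a deletion request does not force full retraining, which is what makes the DSK's demand for subsequent correction and deletion practicable.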


Implementation phase: Privacy by Default

The AI system should be provided with privacy-friendly default settings ("Privacy by Default"). Controllers must ensure that no unnecessary data is transmitted or stored and that data subjects are informed in an understandable manner about how automated decisions work and their possible effects.
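In code, privacy by default means the most protective setting is the baseline and any relaxation is an explicit, documented opt-in. A minimal sketch, with hypothetical option names:

```python
# Illustrative privacy-by-default configuration for an AI system deployment.
# Option names are hypothetical; the pattern is that every default is the
# most protective choice and relaxations require an explicit override.
DEFAULT_SETTINGS = {
    "store_user_prompts": False,        # no retention unless opted in
    "use_inputs_for_training": False,   # no secondary use by default
    "telemetry_level": "minimal",
    "log_retention_days": 7,
}

def effective_settings(user_overrides=None):
    """Apply explicit user choices on top of the protective defaults."""
    settings = dict(DEFAULT_SETTINGS)
    settings.update(user_overrides or {})
    return settings
```

A user who changes nothing gets the protective baseline; every deviation is visible as a recorded override.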


Operational phase and monitoring: Continuous control

For the operation of AI systems, the DSK emphasizes, among other things, the obligation to carry out continuous quality control and to implement the rights of data subjects, in particular technical precautions for the correction and deletion of personal data in AI models and outputs.

In the case of AI systems with decision-making functions, such as credit granting, applicant selection, or risk assessment, it must also be ensured that data subjects are not subjected exclusively to automated decisions within the meaning of Art. 22 GDPR. This can be achieved, for example, through technical options for human control ("human in the loop"), such as hold states pending review and uncertainty indicators.
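A human-in-the-loop gate of this kind can be sketched as a simple routing rule: outputs whose confidence falls below a threshold are placed in a hold state for human review rather than taking effect automatically. Threshold value, field names, and labels below are illustrative assumptions.

```python
# Minimal "human in the loop" gate: low-confidence model outputs are routed
# to a hold state for human review instead of being applied automatically
# (cf. Art. 22 GDPR). The threshold and labels are illustrative assumptions.
REVIEW_THRESHOLD = 0.85

def route_decision(score: float, confidence: float) -> dict:
    """Route a model output either to automatic effect or to human review."""
    proposed = "approve" if score >= 0.5 else "reject"
    if confidence < REVIEW_THRESHOLD:
        return {"status": "pending_human_review",
                "proposed": proposed,
                "confidence": confidence}
    return {"status": "auto_decided",
            "decision": proposed,
            "confidence": confidence}
```

The uncertainty indicator (here, `confidence`) is surfaced alongside the proposal, so the human reviewer sees why the case was held rather than receiving a bare yes/no.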


Classification under the AI Act

Although the DSK guidance is primarily based on the provisions of the GDPR, there are numerous intersections with the requirements of the AI Act: the requirements it describes regarding transparency, risk assessment, and documentation largely correspond to the AI Act's provisions. The DSK thus builds a bridge between data protection law and AI regulation.


Conclusion: Data protection as an integral part of AI compliance

The DSK guidance not only provides technical advice, but also creates legal certainty and structure for companies that use or develop AI systems. It is clear that many of the measures recommended today will become regulatory standards tomorrow, not only under the GDPR but also under the AI Act. Those who integrate the recommended measures into their development and compliance processes at an early stage will not only reduce legal risks but also strengthen trust, traceability, and transparency in their use of AI.​

DATA PROTECTION BITES

AUTHORS

Prishila Hanelli, LL.M., Business lawyer

Sabine Schmitt, Attorney at law (Germany), Manager