
The New Guidelines of the Italian Ministry of Labour and the Strategic Role of AI Governance

published on 26 January 2026 | reading time approx. 7 minutes


On 17 December 2025, the Italian Ministry of Labour and Social Policies (“MLPS”) published the Guidelines for the implementation of Artificial Intelligence in the world of work, marking a particularly significant step in the national path toward the regulation and governance of AI. The document fits within a now well-established European and national regulatory framework, shaped by the European Artificial Intelligence Regulation (“AI Act”), the Italian Strategy for Artificial Intelligence 2024–2026, and the recent Italian Law No. 132/2025 on artificial intelligence. Taken together, these initiatives reflect a shared objective: ensuring that the growing adoption of AI systems takes place in a lawful, transparent, human-centric and socially sustainable manner.

The Guidelines of the Italian Ministry of Labour and Social Policies on Artificial Intelligence

The MLPS Guidelines do not introduce new binding legal obligations. Rather, they provide an operational framework for companies intending to integrate artificial intelligence into organizational contexts, while at the same time safeguarding workers’ rights and ensuring compliance with existing legislation.
The value of the document lies precisely in this pragmatic approach: AI is not treated merely as a technological innovation, but as a structural transformation destined to affect decision-making processes, work organization, corporate culture and internal accountability mechanisms.

A key message emerging from the Guidelines is that artificial intelligence can no longer be implemented through isolated experiments or disconnected pilot projects. Although many organizations have already introduced AI tools to automate repetitive activities or support data analysis, the Ministry stresses that sustainable adoption requires an overarching strategic vision. AI systems increasingly affect sensitive areas such as recruitment, performance evaluation, workforce planning and operational management. In the absence of coordination and supervision, the risk is not limited to technical inefficiency, but extends to legal exposure, reputational damage and loss of employee trust.

For this reason, the Guidelines strongly encourage the integration of AI into business processes through a structured roadmap. This path begins with an assessment of organizational maturity and data quality, continues with strategic planning and the definition of governance models, and only subsequently leads to experimentation, implementation and continuous monitoring. This approach fully reflects the logic of the AI Act, which requires organizations — particularly when high-risk AI systems are used in employment contexts — to manage risks throughout the entire lifecycle of the technology.

Integration of AI into Business Processes and Organizational Governance

One aspect of particular importance highlighted by the MLPS Guidelines concerns the need to carefully assess the organizational impact of artificial intelligence before and during its implementation. AI systems may alter workflows, redistribute tasks and affect decision-making hierarchies, sometimes in ways that are not immediately apparent. For this reason, companies are encouraged to evaluate not only the technical performance of adopted solutions, but also their effects on professional roles, required skills and working conditions.

The integration of AI into business processes should therefore be accompanied by an organizational analysis and, where necessary, by a review of internal procedures, in order to ensure clarity, fairness and proportionality.

Particular attention is devoted to the protection of workers’ rights in contexts where algorithmic systems support or influence employment-related decisions. The Guidelines emphasize the importance of transparency toward employees, especially when AI is used in recruitment, performance assessment, shift planning or monitoring activities. Workers should be informed of the existence of AI systems, the purposes pursued, the types of data processed and the presence of effective human oversight. Such transparency is not only a legal requirement under European legislation, but also an essential element for preserving trust and social acceptance within the workplace.

The Ministry also highlights the importance of continuous monitoring once AI systems are operational. Artificial intelligence is not static: models evolve, data changes and business needs transform over time. Without constant supervision, systems that were initially compliant and effective may progressively generate unintended bias or inaccurate outcomes. Periodic audits, impact assessments and feedback mechanisms enable organizations to promptly identify emerging risks and intervene in a targeted manner. From this perspective, governance is not a one-off compliance exercise, but an ongoing process accompanying AI throughout its entire lifecycle.

Within this governance framework, increasing attention is being paid to the role of the Chief AI Officer (“CAIO” or AI Officer). Although relatively recent, this role is gaining growing importance, as it enables the centralized coordination of artificial intelligence initiatives. The CAIO is generally tasked with translating corporate strategy into AI projects, ensuring regulatory compliance, overseeing ethical issues and facilitating dialogue among IT, legal, HR and compliance functions. The presence of a central point of responsibility helps avoid fragmented approaches and ensures consistency in the design, implementation and supervision of AI systems.

The MLPS Guidelines explicitly acknowledge the value of this role, noting that the introduction of a function dedicated to AI governance can significantly strengthen internal control. At the same time, the document adopts a flexible approach: depending on the size and structure of the organization, CAIO responsibilities may also be assigned to existing functions. In this context, particular importance is attributed to the potential evolutionary role of the Data Protection Officer (“DPO”).

The DPO already operates within a risk-based governance model and possesses well-established expertise in areas closely connected to artificial intelligence, such as data protection, automated decision-making, transparency, impact assessments and accountability mechanisms. Many of the risks addressed by the AI Act — including bias, explainability and data quality — clearly intersect with the principles of the GDPR. For this reason, the Guidelines recognize that, where appropriate and in compliance with independence requirements, the DPO may also perform coordination functions typical of AI governance.

This does not imply a formal overlap of roles nor the transformation of the DPO into a technical AI manager. Rather, it represents a pragmatic solution that leverages existing compliance structures. In practice, hybrid models may emerge in which the DPO works closely with IT and business functions, contributing legal and risk-management expertise to the organization’s overall AI strategy. This approach is particularly relevant for small and medium-sized enterprises.

The importance of internal governance is further confirmed by the position adopted by the European Commission regarding the role of the AI Officer. In its FAQs on the AI Act, the Commission clarifies that the Regulation does not impose a specific organizational model nor an obligation to appoint an AI Officer. However, pursuant to Article 17(1)(m) of the AI Act, providers of high-risk AI systems must implement a quality management system that includes a responsibility framework defining the duties of management and staff. Moreover, although the appointment of an AI Officer is not mandatory, the AI Act requires both providers and deployers to ensure, as far as possible, an adequate level of AI literacy among personnel involved in the use and operation of AI systems.

This clarification is particularly significant: while the designation of a CAIO is not compulsory, accountability and the internal allocation of responsibilities are mandatory in substance. Organizations remain free to define their own governance models, but they cannot avoid the obligation to demonstrate control, supervision and competence in the use of artificial intelligence.

AI literacy therefore becomes an essential element of responsible adoption. Governance cannot rely solely on policies or organizational charts: individuals involved in the design, implementation and use of AI systems must understand their functioning, limitations and risks, so that human oversight is effective and not merely formal.

The publication of the MLPS Guidelines confirms a profound shift in perspective. Artificial intelligence is no longer regarded as an ancillary technological option, but as a strategic factor directly affecting business competitiveness, employment relations and regulatory exposure. Consequently, organizations are required to govern AI with the same level of attention traditionally devoted to financial, legal and operational risks.

In this scenario, the adoption of structured AI governance models and the establishment of a dedicated AI governance role, whether entrusted to a Chief AI Officer or integrated into the role of the DPO, represent not only a path toward regulatory compliance, but also an opportunity to strengthen trust, resilience and long-term value creation. Companies capable of integrating artificial intelligence within a clear strategic vision, grounded in accountability and human oversight, will be better positioned to face the future of work in a sustainable manner.

Tech & data bites

authors

Silvio Mario Cucciarrè, LL.M.
Attorney at law (Italy)
Associate

Nadia Martini
Attorney at law (Italy)
Partner
