


Deploying Generative AI in the Workplace: Anticipating legal risks, implementing solutions


published on 27 November 2023 | reading time approx. 9 minutes


In under a year, generative artificial intelligence has evolved from a tool embraced by the general public into an established fixture of the business world.

AI serves as a versatile editorial assistant, adept at tasks such as drafting emails, managing social media posts, analyzing documents, and translating. Other AI tools can generate images, videos, music, databases and even computer code within minutes.

The manifold capabilities of AI not only enhance productivity and streamline processes but also result in time and cost savings, facilitate the analysis of extensive datasets, and contribute to the improvement of customer service.

However, it is crucial for companies and their employees to recognize that the widespread use of AI also introduces risks, including potential infringements of legitimate rights and of public, corporate and private interests.

These risks notably include heightened threats to privacy rights, trade secret protection and intellectual property rights, as well as a dependency on “profiling” shaped by external factors embedded in the algorithms, such as developers’ strategies, biases or nationality.

This holds particularly true given that current AI is largely rooted in an American or Asian worldview, which may not necessarily align with that of Europe or of other regions globally. Consequently, the cultural assumptions embedded in AI may not reflect our ethical, moral, or even legal standpoints.

Acknowledging this, it becomes increasingly crucial for enterprises and their employees to exercise caution and approach the utilization of AI tools with a thoughtful and well-defined framework.
By doing so, they can navigate potential legal and financial risks, ensuring that the incorporation of AI aligns with responsible practices while leveraging the full potential of these tools.

Anticipating risks

The use of AI, especially within corporate settings, gives rise to various legal risks, particularly concerning issues related to:
  • Image reproduction rights (noting that an “image” is also personal data);
  • Rights pertaining to data privacy;
  • Intellectual property rights, such as copyright or database rights.

These risks are no longer merely hypothetical, as current events abound with illustrative examples. In the United States, Scarlett Johansson is pursuing legal action against an artificial intelligence entity for utilizing and modifying her image without permission. The U.S. Copyright Office has recently rendered a decision addressing the intriguing question of whether a comic strip titled “Zarya of the Dawn”, entirely generated with the Midjourney AI system, qualifies for copyright protection as an original work. Recently, a new complaint has been filed against OpenAI and Microsoft, concerning the training of the chatbot ChatGPT, which allegedly involved the personal data of hundreds of millions of internet users.

These real-world instances underscore the tangible legal challenges emerging from the intersection of AI, personal data law and intellectual property law, prompting a closer examination of the legal implications and current framework surrounding the use of AI-generated content.

Intellectual Property violations

Unmonitored use of generative AIs may result in violations of intellectual property rights, exposing a company to legal disputes. Using AI to generate works may indeed entail risks and raise questions, notably regarding authorship and reproduction and distribution rights: do data collection and its integration into a generative AI tool constitute acts of infringement? Who is liable when works generated by an AI tool allegedly infringe existing intellectual property rights, and how can the risks be limited?

French intellectual property law already provides tools to address these questions, but there are ongoing proposals to better regulate copyright in AI-generated content. A text is currently under discussion at the European level, the AI Act, a proposed regulation aimed at regulating the use and commercialization of artificial intelligence within the EU. This project was approved on June 14, 2023, by the European Parliament, and its adoption is scheduled for late 2023 to early 2024, with deferred application 18 to 24 months after its entry into force. It notably proposes to require AI platforms specifically designed to generate content such as complex text, images, sound, and videos to (i) design the model to prevent it from generating illegal content and (ii) provide the public with the sources of the generated content.

These draft regulations are in addition to the European and national texts already passed in the EU, relating in particular to the obligations of Internet and IT players in terms of control, moderation and active management of the online tools they offer to the general public and professionals alike.

Some sophisticated AI tools are even capable of reproducing voices and facial expressions from a few images and audio files (deepfakes), meaning that the use of AI can easily not only lead to infringements of intellectual property rights and personal rights, but may even extend to identity theft and privacy violations.

Data Privacy violations

With regard to risks concerning personal data, it should be remembered that AI gives rise to various types of data processing, including automated processing and profiling of personal data. Under the GDPR and similar regulations, organizing such processing may require a much stricter framework than conventional human or material processing, including, for example, Privacy Impact Assessments.

In addition to the risks generally associated with the processing of personal data, processing based on AI systems presents specific risks that should be taken into account:
  • risks for individuals related to misuse of data contained in the learning database, especially in the event of data breaches;
  • the risk of automated discrimination caused by bias in the AI system introduced during development, leading to lower performance for certain categories of people;
  • the risk of generating incorrect fictional content about a real person, particularly significant in the case of generative AI systems, and potentially impacting their reputation;
  • the risk of automated decision-making caused by automation bias or confirmation bias, where the necessary explainability measures were not taken during development of the solution or where a user of the AI system cannot make a contrary decision without prejudice;
  • the risk of users losing control over their publicly available and freely accessible online personal data, as large-scale collection is often necessary for AI system learning, especially when collected through harvesting or “web scraping”;
  • risks related to known attacks specific to AI systems; 
  • risks related to the confidentiality of data that can be extracted from the AI system.

In the complex and sensitive domain of artificial intelligence, the French CNIL and the European Union are actively addressing the need for regulation.

The French CNIL has published a self-assessment guide along with guidelines and frameworks to help carry out a data protection impact assessment. Most AI solutions used by a company as an internal working tool should be regarded as a potential processing of personal data, carrying specific and significant risks relating to misuse or breach of such data. As such, their use should comply with the GDPR and follow the corresponding 13 or 14 compliance steps.

Besides, the abovementioned proposed EU regulation on AI seeks to establish a legal framework for the development and deployment of AI systems. This regulation categorizes AI systems based on their level of risk and introduces specific requirements accordingly. Building on this regulatory initiative, two additional projects are also in progress, namely two directives focused on addressing the issue of liability concerning artificial intelligence.

Implementing solutions

While the use of AI presents intriguing possibilities and tangible benefits (without forgetting, however, that it raises major philosophical and sociological questions), ranging from enhanced productivity to innovative problem-solving, it is equally imperative to address the inherent financial and legal risks and complexities associated with its deployment in the workplace.

Navigating the use of generative AI requires a multifaceted approach, combining internal best practices with the implementation of a thorough process of prohibitions and authorizations, developed in collaboration with legal experts.

As to a company’s internal processes and organization, it is essential to adhere to common-sense guidelines, such as:
  • avoiding mentioning or submitting confidential information or materials in AI tools;
  • implementing robust internal review processes and policies before dissemination;
  • standardizing the critical evaluation of the results obtained before their dissemination.
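To illustrate the first guideline, a company could place a simple pre-submission screen in front of any external AI tool. The sketch below is purely illustrative: the pattern list, function names and thresholds are assumptions for the example, and a regex screen is in no way a substitute for a proper confidentiality policy defined with legal and security teams.

```python
import re

# Illustrative patterns only; a real policy would be far more extensive
# and would not rely on regexes alone.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # confidentiality markers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses (personal data)
    re.compile(r"\b\d{13,19}\b"),                    # long digit runs (e.g. card numbers)
]

def screen_prompt(prompt: str) -> list[str]:
    """Return all sensitive markers found in a prompt destined for an external AI tool."""
    hits: list[str] = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(prompt))
    return hits

def is_safe_to_submit(prompt: str) -> bool:
    """A prompt is cleared for submission only if no sensitive marker is detected."""
    return not screen_prompt(prompt)
```

Such a screen would block a prompt like “Summarize this CONFIDENTIAL merger memo” while letting a request about a public press release through; a blocked prompt would then be routed to human review rather than to the AI tool.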

Nonetheless, good practices alone may not suffice to mitigate the abovementioned risks.
Indeed, processing data via AI inherently involves a very specific form of data processing, requiring data protection impact assessments to be conducted and formalized when defining and organizing the corresponding processing.

A number of specific measures and legal documents will be needed to correctly frame the use of this new technology in light of data privacy regulations.

Therefore, as always when implementing new technologies in a professional context, the risks must be carefully anticipated by putting organization, security measures and precise rules in place upstream.
To this end, it is strongly recommended to have an audit conducted by a legal expert.

An audit of AI usage within the company will then notably allow for:
  • mapping out the landscape of AI applications;
  • conducting a risk analysis in relation to anticipated risks and contractual frameworks, balanced with the expected benefits of AI use;
  • implementing an “AI Charter” or updating your existing IT Charter, in order to raise awareness among employees and other stakeholders in the company and compel them to adhere to new rules, under the threat of sanctions, but above all by making them aware of the risks incurred by the careless use of these tools.
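The first audit step, mapping the landscape of AI applications, can start from a structured internal register of the tools in use. The sketch below is a hypothetical illustration: the record fields and function names are assumptions chosen for the example, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a hypothetical internal register of AI tools in use."""
    name: str                      # e.g. the product name of the tool
    vendor: str                    # supplier, relevant for contractual review
    purpose: str                   # business use within the company
    processes_personal_data: bool  # triggers GDPR obligations if True
    dpia_completed: bool           # whether an impact assessment is documented
    risk_level: str                # e.g. "low", "medium", "high"
    approved_uses: list[str] = field(default_factory=list)

def tools_requiring_dpia(register: list[AIToolRecord]) -> list[AIToolRecord]:
    """Flag tools that process personal data but lack a documented impact assessment."""
    return [t for t in register if t.processes_personal_data and not t.dpia_completed]
```

A register like this gives the legal expert conducting the audit a concrete starting point for the risk analysis and for deciding which tools the AI Charter should restrict.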

Subsequently, for scenarios involving the creation or implementation of AI solutions as an internal tool for your employees or a product for your clients, the focus will need to be on:
  • determining the legal qualifications of the tool or product;
  • establishing a compliance roadmap based on legal qualifications and identified risk levels;
  • conducting an in-depth analysis of the AI system's compliance, with particular attention to CNIL guidelines, including with respect to image rights.

In wrapping up our look into how AI affects workplaces, a key idea stands out: businesses need to be more socially responsible. Rather than solely considering immediate impacts, it is advisable for companies to contemplate the extent to which they want AI to be involved in the workplace. This means taking a close look at the ethical and legal boundaries when it comes to mixing technology and jobs. Beyond just looking for quick benefits, businesses will need to think carefully about the extent to which they are prepared to let technology influence the way they work and to assume a certain level of risk.

It is a call for enterprises to actively shape how AI impacts workplaces, making sure it lines up with long-lasting ethical, social, and human values.

DATA PROTECTION BITES

authors


Frédéric Bourguet

Avocat

Associate Partner

+33 1 8621 9274


Raphaëlle Donnet

Avocate

Junior Associate

+33 1 7935 2542


RÖDL & PARTNER FRANCE

Discover more about our offices in France.