Artificial Intelligence: The Action Plan of the French CNIL for Regulating AI Systems
Considering recent developments in the field of Artificial Intelligence (AI), particularly generative AI systems such as ChatGPT, the Commission Nationale de l’Informatique et des Libertés (CNIL) has released an action plan aimed at ensuring that AI systems are deployed in a way that respects individuals’ privacy. Generative AI is an emerging area of AI that enables the creation of texts, images, and other content based on user instructions. In this article, we delve into the details of this action plan and discuss its significance for privacy protection.
The Role of CNIL
In recent years, the CNIL has already undertaken extensive work to anticipate and address the challenges associated with AI. In 2023, it will continue its efforts to regulate augmented cameras and expand its work to encompass generative AI, large language models, and applications derived from them, such as chatbots. The CNIL’s action plan focuses on four main areas:
- Understanding the functioning of AI systems and their impact on individuals.
- Promoting and regulating the development of privacy-friendly AI.
- Supporting and collaborating with innovative actors in the AI ecosystem in France and Europe.
- Auditing and controlling AI systems to protect individuals.
These measures are also intended to prepare for the entry into application of the European AI Act, which is currently under discussion.
Protecting Personal Data as the Central Challenge
As AI advances, challenges related to data privacy and the protection of individual freedoms are becoming increasingly pressing. The CNIL has been addressing the questions raised by these technologies since 2017, beginning with its report on the ethical challenges of algorithms and artificial intelligence.
Generative AI systems have made significant strides, particularly in the domain of text and conversation, in recent months. Through the use of large language models like GPT-3, BLOOM, or Megatron NLG, and derived chatbots like ChatGPT or Bard, these systems can generate texts that closely resemble human creations. There are also significant developments in the field of image and speech generation. While these models and technologies are already being applied in various industries, their functioning, capabilities, limitations, and the associated legal, ethical, and technical challenges are still subjects of intense discussions.
The CNIL publishes its action plan for the regulation of artificial intelligence to highlight the importance of protecting personal data in the development and use of these tools, with a particular focus on generative AI.
What are Generative AI Systems?
Generative AI systems are capable of creating texts, images, and other content (music, videos, speech, etc.) based on instructions from human users. They generate new content derived from the data used for their training, and thanks to extensive training datasets they can achieve results that closely resemble human productions. However, users must formulate their requests precisely to obtain the desired outcomes, and a specific expertise is therefore developing around the formulation of user queries, known as prompt engineering.
The CNIL’s Action Plan in Four Steps
Building on this work, which covers a wide range of AI systems and their use cases, the action plan pursues four main objectives:
- Understanding the functioning and impact of AI on privacy and individuals’ rights.
- Establishing a legal framework for the use of AI systems that ensures data protection.
- Supporting and collaborating with AI innovators in France and Europe.
- Auditing and controlling AI systems to ensure the rights and protection of individuals.
“I appreciate that the action plan aims to first develop an understanding of how AI systems work and their implications for individuals,” commented Marcus Belke, CEO of the 2B Advice Group, a company group that has been providing data protection solutions for 20 years. “However, what is missing is an agenda point addressing the opportunities that AI offers for data protection,” Marcus Belke added.
The CNIL will strengthen its control measures, particularly by examining the use of generative AI, to verify that companies developing, training, or deploying AI systems have taken appropriate measures to protect personal data. Its goal is to establish clear and protective rules for the handling of personal data in AI systems.
Through these comprehensive measures, the CNIL aims to contribute to the development of privacy-friendly AI systems while safeguarding the privacy and rights of European citizens.
With its action plan, the CNIL also hopes to support Data Protection Officers (DPOs) in tackling the challenges associated with AI. Clear guidelines and measures for ensuring data protection in AI systems will help DPOs in companies and organizations prepare for the impact of these technologies. The action plan thus provides a foundation for a privacy-friendly framework for the use of AI, and a valuable tool for DPOs seeking to implement effective data protection measures and safeguard the rights and freedoms of individuals.