This “novelty” within the CNIL had been in the works for almost two years, but it resonates particularly with the current excitement around ChatGPT. In July 2021, the Commission Nationale de l’Informatique et des Libertés, like its European counterparts, welcomed the forthcoming European legislation on artificial intelligence.
A long-awaited announcement
The CNIL therefore applied to be designated as the national AI supervisory authority. Noting “the very strong overlap between the [forthcoming] regulation of AI systems and that of data, in particular personal data”, the Council of State supported, on August 30, a “profound transformation of the CNIL” so that it can play this role.
The creation of the Artificial Intelligence Service (SIA) probably does not amount to a real transformation, but it reaffirms the Commission’s position. The service will bring together five people, “lawyers and specialized engineers”. The SIA will be placed under the authority of the technology and innovation department, headed by Bertrand Pailhès, former coordinator of the artificial intelligence strategy within DINSIC.
The CNIL’s Artificial Intelligence Service is expected to promote understanding of AI systems internally, but also among the general public and professionals. Its role will be to strengthen the Commission’s expertise “in the knowledge and prevention of privacy risks associated with the implementation of these systems”.
The CNIL states that the SIA will have a “cross-functional role” within the organization. It will be called upon to collaborate with the department responsible for legal support to produce reference frameworks and recommendations at the request of the government, and to disseminate this knowledge to public and private actors.
It is already foreseen that the artificial intelligence service will assist “in the investigation of complaints and in the adoption of corrective measures” when “violations” of French and European regulations concern the use of artificial intelligence systems.
Just as some companies are not waiting for an AI law to come into force before anticipating the points on which the authorities could, in turn, challenge them, the CNIL wishes to publicly demonstrate its appetite for this sector.
During a round table of the AI France Summit, Bertrand Pailhès said that the Commission had already taken “doctrinal positions” regarding some AI projects carried out by French public authorities.
The SIA will therefore have to develop “relationships with ecosystem players” in preparation for the entry into force of the legislation on AI (commonly known as the AI Act), currently being developed at the European level.
AI Act: “grey areas” to be cleared up
As a reminder, the AI Act aims to authorize, prohibit and regulate all AI systems based on risk levels.
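The tiered logic of that risk-based approach can be sketched roughly as follows. This is a purely illustrative sketch: the tier names reflect the draft regulation, but the example systems and their classification below are hypothetical, not an official mapping.

```python
# Illustrative sketch of the AI Act's draft risk-based approach.
# Tier names follow the draft text; the example systems and mapping
# are hypothetical, NOT an official classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "authorized subject to strict obligations (conformity assessment, logging)"
    LIMITED = "allowed with transparency duties (e.g. disclosing a chatbot is a bot)"
    MINIMAL = "no specific obligations under the draft"

# Hypothetical mapping, used only to illustrate the tiered logic.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return the draft-regime consequence for a (hypothetical) system."""
    tier = EXAMPLE_CLASSIFICATION.get(system, RiskTier.MINIMAL)
    return f"{system}: {tier.name} -> {tier.value}"

for name in EXAMPLE_CLASSIFICATION:
    print(obligations(name))
```

The point of the structure is that the obligation attaches to the risk tier, not to the technology itself: two systems built on the same model can fall into different tiers depending on their use.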
During the same round table, the CNIL’s director of technology and innovation also anticipated outreach to solution providers and discussions with the “actors” concerned. “The notion of actors introduced by the European AI legislation covers suppliers, distributors, importers, and even users”, explains Yann Bilissor, CTO Data & AI at Cellenza, an IT consultancy specializing in Microsoft technologies. “Users here are not end users, but direct or indirect customers of technology providers such as Microsoft, Google or AWS”.
However, the texts currently available present “grey areas”, observes the CTO, who in turn cites the opinion of the Smalt law firm, a partner of Cellenza. “We rely on solutions from a cloud service provider to help our customers develop AI projects. However, the level of responsibility between the supplier and the integrator is not yet clearly defined in the texts. For the time being, we could be seen as AI service providers,” notes Yann Bilissor.
Determining the different levels of responsibility is a crucial issue for all companies: the draft regulation already provides for fines of up to 30 million euros or 6% of a company’s total turnover, the CTO insists.
Another question for Yann Bilissor concerns the compatibility of the future European regulation with the current General Data Protection Regulation (GDPR). “The new text requires the input and output data of an artificial intelligence system to be recorded,” he explains. “Let’s imagine that the input data are photographs of faces from which a computer vision algorithm detects skin conditions (acne, psoriasis, eczema, etc.) in order to suggest appropriate treatment. In itself, the use seems harmless, but under the GDPR we don’t necessarily have the right to keep these photos, which are personal data. How can we keep input data without violating the GDPR?” he wonders.
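One approach sometimes discussed for reconciling a logging obligation with data minimization is to record a pseudonymous reference to the input together with the model’s output, rather than the raw photograph. The sketch below is purely illustrative (and not legal guidance): the key name and record fields are assumptions, not anything prescribed by the texts.

```python
# Illustrative sketch, NOT legal guidance: log a keyed hash of the input
# plus the model output, so the audit trail keeps input/output traceability
# without storing the photograph itself.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key, stored separately

def pseudonym(image_bytes: bytes) -> str:
    """Keyed hash of the input: a stable reference, not reversible to the photo."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def audit_record(image_bytes: bytes, prediction: str) -> str:
    """Build a log entry; the raw image is never written to the log."""
    return json.dumps({
        "input_ref": pseudonym(image_bytes),  # reference to the input, not the input
        "output": prediction,
    })

record = audit_record(b"...jpeg bytes...", "psoriasis: 0.87")
print(record)
```

Whether such pseudonymization actually satisfies both texts is precisely the kind of grey area the CTO points to: a keyed hash may still count as personal data if the photo can be re-linked to it.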
Work to guide public and private actors
It is precisely to respond to this type of question that the CNIL is launching, in parallel with the creation of the SIA, work on “learning databases”, more commonly known as training data sets. The Commission says it has received requests for clarification from companies. It wants to “promote good practices in line with the requirements of the GDPR” and prepare the ground for the future European regulation.
This work concerns machine learning and deep learning systems, whose training requires collecting data from “all kinds of sources”. The CNIL wants to advise private and public actors on the constitution of training sets, the stages of development, and the use of artificial intelligence models. During 2023, it will publish data collection tools and practical guides “to respond to the most common situations”. However, the authority has not yet launched any projects regarding the dissemination and re-use of AI datasets and models, even though this is one of the most popular use cases in enterprises and research labs. These questions “will be the subject of a separate piece of work”, promises the CNIL.