“Science yourself!” organized by CEA and CENTQUATRE-PARIS.
What is your definition of artificial intelligence?
François Terrier: According to the OECD, artificial intelligence (AI) is a set of techniques that allow a machine to perform tasks usually reserved for humans. The term “usually” is interesting because it implies that the perception of AI can evolve over time. This definition introduces the notion of specific AI (or weak AI), which addresses a problem on which we try to surpass human capabilities in terms of speed and endurance. It leads to another concept, that of generalist AI (strong AI), based on the myth of a system endowed with human functional and emotional qualities. To return to the concrete, one of the core AI technologies is machine learning (including deep learning), which consists in correlating the data fed into a system with the tasks it will have to perform. If I said “automatic correlation” instead of “AI”, you would ask yourself fewer questions!
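Terrier’s “automatic correlation” can be illustrated with a deliberately simple learner: it takes annotated examples (features plus a human-supplied label) and correlates new inputs with the labels it has seen. Everything below — the data, the “cat”/“dog” labels and the nearest-centroid rule — is an illustrative assumption for this sketch, not anything described in the interview.

```python
# Minimal sketch of "automatic correlation": the learner pairs input data
# with human-supplied labels and generalizes from those pairings.

def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=dist)

# Annotated training data: the human, not the machine, supplies the concept.
data = [([1.0, 1.0], "cat"), ([1.2, 0.8], "cat"),
        ([4.0, 4.2], "dog"), ([3.8, 4.0], "dog")]
model = train(data)
print(predict(model, [1.1, 0.9]))  # a point near the "cat" cluster → cat
```

The point of the sketch is Terrier’s: the system never invents the concept “cat”; it only correlates new inputs with labels a human attached to the data.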
Raphaël Granier de Cassagnac: In my novels I make the same distinction, calling weak AI “artificial intelligence” and strong AI “artificial consciousness”. The latter mirrors human behavior in all its variety and complexity. “My” scientists develop this consciousness by reproducing the functioning of the brain in silicon, while AI is rather software designed for specific tasks that assists humanity. I also really like the temporal relativity François mentioned because, at one time, we could have considered a calculator an AI, but no longer!
In your opinion, will AI be able to match humans, or even seize power as in many science fiction scenarios?
Raphaël Granier de Cassagnac: In science fiction, strong AI tends to turn against its designer, who projects fantasies of fear onto it. In Stanley Kubrick’s 2001: A Space Odyssey or, even earlier, Jean-Luc Godard’s Alphaville, the machine is afraid of being disconnected, like humans and their fear of death. Today I notice an optimistic shift: in Spike Jonze’s film Her, the AI works harmoniously with the human. It is multiple, redundant and, since it lives in the cloud and no longer in a single computer, it no longer fears death! But that remains fiction, because I doubt that within fifteen years we will be able to develop an artificial consciousness.
François Terrier: I totally agree, because I find it difficult to consider that intelligence is only calculation and rationality. What about the cognitive, emotional and psychic aspects? Of course, technology can give the illusion of a human machine, as long as you stay in a videoconference with encrypted sound! Even the best chatbots (conversational robots on the internet) do not hold up for long: if the discussion continues, we realize that the AI has not really understood the first questions, that it does not grasp their semantics or their meaning.
Why does AI interest you?
François Terrier: AI developed for the Internet doesn’t interest me much. But systems designed for industry or for rare-disease research are much more motivating: applying AI to complicated problems, which large groups have not tackled precisely because they are complicated, is an exciting scientific challenge.
Why is bias in AI crucial?
François Terrier: In a learning-based AI, an algorithm is programmed to correlate the input data with an action by the system. But the data is not fed in raw or by accident. It is formatted and annotated, meaning that a human describes what is there. The machine does not invent the concept of a cat found in an image if it is not told that there is a cat. This annotation step carries the risk of introducing prejudices. For example, the Dutch state had set up a system to detect fraud in social assistance. After a few months, it had to backtrack under an avalanche of lawsuits, because the AI had created a statistical bias against foreigners, having emphasized one particularity of the data rather than all the criteria envisaged. This is why it is essential to qualify systems before deploying them. That involves checking all the records, analyzing what has been learned, and detecting unexpected phenomena and major trends in order to decide whether the system is fit for use. It is a cutting-edge science.
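One elementary qualification check of the kind Terrier describes is to measure, before deployment, whether a model’s decisions fall disproportionately on one group. The model, the records and the group labels below are invented for illustration; this is not the Dutch system, just a sketch of how a learned proxy bias can be surfaced.

```python
# Illustrative pre-deployment check: compare flag rates across groups
# to reveal a model that latched onto a proxy feature (here, nationality)
# instead of the full set of intended criteria. All data is made up.

def flag_rate(records, model, group):
    """Fraction of records in `group` that the model flags as fraud."""
    members = [r for r in records if r["group"] == group]
    flagged = [r for r in members if model(r)]
    return len(flagged) / len(members)

# A biased model: it only ever flags the "foreign" group.
biased_model = lambda r: r["group"] == "foreign" and r["claims"] > 1

records = [
    {"group": "national", "claims": 3},
    {"group": "national", "claims": 2},
    {"group": "foreign",  "claims": 3},
    {"group": "foreign",  "claims": 2},
]

rate_national = flag_rate(records, biased_model, "national")
rate_foreign = flag_rate(records, biased_model, "foreign")
disparity = rate_foreign - rate_national  # a large gap should block deployment
print(disparity)
```

Identical claim histories get opposite treatment depending on the group field alone, which is exactly the kind of “unexpected phenomenon” a qualification cycle is meant to catch before the system is exploited.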
Raphaël Granier de Cassagnac: These testing cycles, which continuously ensure that observed biases remain within ethical bounds, are truly crucial. Another key point is to know how much autonomy we grant the AI, and to provide a big red button that allows the human to stop the machine at any time. Take the case of the autonomous vehicle and the decision its AI would make in the event of an accident: turn left and die against a plane tree, or turn right and mow down a cyclist? Who will be responsible?
How to frame AI ethically and legally? What are the risks in the absence of a framework?
François Terrier: We are fortunate that Europe has faced this problem. It started with an ethical reflection and today results in regulation, the AI Act, according to which responsibility lies with producers and users. And this even though, at times, large groups fought for the AI alone to be responsible (in other words, nobody!). Parliament considers it necessary to qualify the technology and the algorithm, but also the uses and potential risks, which fall to humans.
At CEA, trust in AI is a key issue. Already in 2017 we understood that, beyond its use in research, AI would end up in industrial systems and require safeguards. That is why we launched a major program in this area, just as we have a digital ethics committee. It is true that Europe lags behind the United States and China in terms of the volume of data acquired and available. But it is at the forefront of these ethical and trust issues. In particular, it bars from the European market any AI not qualified for high-risk uses. With, in subtext, the obvious interests of…
Raphaël Granier de Cassagnac: What could become troubling is if big AI-engineering companies, with colossal volumes of personal data and enormous financial power, were to seize political power. In one of my novels, I put forward the idea that “corporations” have their own country, their own militia… Shortly after writing it, I was surprised to read that Larry Page, co-founder of Google, had asked for a “territory to experiment with new forms of governance”. But I remain optimistic when I see that citizens are taking hold of this debate and can influence it, as the European position, and even a reawakening of consciences, shows!
Article excerpted from CEA Challenges no. 250.