At each plenary session of the European Parliament in Strasbourg, Romain L'Hostis follows the debates of the 705 MEPs from the 27 countries of the Union. On June 17, he interviewed Italian MEP Brando Benifei of the Progressive Alliance of Socialists and Democrats in the European Parliament, on the adoption of the European Parliament's official position on the future rules that will apply to artificial intelligence (AI) systems and uses.
Brando Benifei, you were rapporteur on this Artificial Intelligence Act. The result is quite impressive: a large majority, with almost five hundred votes in favor. What is your impression following this vote?
I think it was really a historic vote for the European Parliament, because for the first time the European Union is promoting horizontal legislation, the first in the world, that deals with the risks that the introduction of artificial intelligence brings to our societies. So the objective of this regulation is to sustain a strong uptake of AI, the diffusion of AI, the use of AI, but with a risk management system that can mitigate risk and in this way build trust in these technologies. And so people can feel that the institutions are taking care of unnecessary risks; that is the objective of the regulation: they can feel protected, they can feel that their rights are not endangered. To do this, we have chosen to identify risky AI systems, starting from the areas we think are high-risk. We identify which systems can be defined as high-risk and will need to undergo a conformity assessment: data governance, human oversight, technical requirements, etc.
Can you give us some examples of areas of AI risk?
The areas that have been identified as high-risk are many, but I can give you some examples: the development of children, human growth, health, employment and workplaces, critical infrastructure, democracy, and so on. There are various areas where we identified a potential AI risk.
On what criteria is this scale of high-risk and lower-risk sectors based?
We identified areas where the risk is probably higher because of the goods and values that are at stake. For example, health is clearly a fundamental issue for the well-being of people, so AI in health could create huge disasters. The workplace is also a very delicate setting, where people could be discriminated against by AI trained on discriminatory data, and so on. So we identified areas of potential AI risk, and then developers will have to assess their own systems: if they think their system, although it falls under one of the high-risk categories, does not pose a significant risk, they can ask for an exception from the national authorities. Otherwise, they will need to go through a conformity assessment of the kind I mentioned. So every effort is in place to reduce the risks to health, safety and fundamental rights that a system could pose.
How do you avoid restricting technological progress? How do you avoid handicapping innovation in Europe?
I think we have done a very precise job, in the sense that we have identified the areas of potential AI risk where we see a need for a conformity assessment. Then we also identified prohibited uses, uses that are too risky and that we do not want in place in Europe, such as predictive policing, real-time biometric identification in public spaces, and emotion recognition in the workplace, in schools and at the borders. Everything that is not in these categories will be low-risk AI and will be very lightly regulated, with general principles only. This is because we really want a strong regulation for a specific set of AI systems that carry a high risk, or even a risk so excessive that they should be prohibited. This is only a limited part of the AI ecosystem, but we want people to know that when an AI system interacts with their lives, whether through a business, a local authority or an institution using AI, all the procedures have been followed to mitigate the risks as much as possible.
These new technologies are still developing, and some of them do not exist yet. But already, many citizens fear that AI might be a source of manipulation in our next democratic elections in Europe. What is your opinion on this?
I think the risk is absolutely there, but we are acting so that we can build a situation of trust among people. We want people not to be scared of AI, but assured that we can govern the phenomenon, that we can set rules. That is also why, in this text that the European Parliament has approved and that will now enter negotiation with the governments, we have put clear requirements for transparency regarding AI. We have put clear requirements for labelling all content produced by generative AI, including deepfakes, fake images, audio, etc., so that it is clear they are not real and that they depict people saying or doing things they have not done. Because if we cannot control this, if we cannot put rules on this, we will create a situation where true and false will be impossible to distinguish. And lastly, we have put in place a transparency requirement for copyrighted material used to train AI. We want this to be public, so that authors, if they feel that copyright rules have been infringed, can ask for compensation.
Given that these new rules will not be operational before the next European elections, is that a problem? What can we do in the meantime?
I think it will be crucial to enact the so-called AI Pact proposed by the European Commission, which asks businesses operating in Europe to gradually apply the provisions of the AI Act even before it comes into force. This means that within this framework we could push AI developers to anticipate some measures before they become mandatory. Through this we can push for more transparency and reduce the risk of disinformation even before the law enters into force. Compliance can also be encouraged by pointing out that these provisions will become mandatory anyway. It could be very important to anticipate the Act's effects.
In its current state, can the legislative text still evolve?
In my opinion, the negotiations will be tough, because there are differences. For example, on the bans. The European Parliament has strengthened the bans on security-driven technologies that we think do not in fact deliver more security: the governments want real-time biometric identification cameras, and they want to do emotion recognition. We do not think this will increase security, and the Parliament has a strong majority on this. But the governments are divided, so on the bans we will have a big debate. Also, on the transparency requirements I mentioned, the governments did not address them, because they concluded their text in December, before the issue of generative AI became so prominent in the debate. So these issues will need to be confronted on the basis of the European Parliament's position. But I did not find the governments very problematic, because many are supportive of the European Parliament's constructive approach to transparency on generative AI. But the stakes are huge, so I cannot exclude strong lobbying by various interests on the governments, which might change their attitude; on these generative AI rules too, there will be significant negotiations to come.
[...]
Interview conducted by Romain L'Hostis.