The European project to regulate artificial intelligence passed a crucial milestone on Thursday, receiving its first green light from members of the European Parliament, who called for new restrictions and better coverage of ChatGPT.
The European Union seeks to be the first in the world to adopt a comprehensive legal framework to curb the excesses of artificial intelligence (AI) while ensuring innovation.

Brussels proposed an ambitious bill two years ago, but its examination has dragged on, delayed in recent months by controversy over the dangers of generative artificial intelligence capable of creating text or images.

EU member states set their position only at the end of 2022. MEPs adopted theirs in a committee vote on Thursday morning in Strasbourg, to be confirmed during a plenary session in June.

Tough negotiations will then begin between the different institutions. “We received more than 3,000 amendments,” said Dragoş Tudorache, co-rapporteur of the text. “You only have to turn on the TV: every day we see how important this file is for citizens.”

“Europe wants a human-centred, ethical approach,” said Brando Benifei, co-rapporteur. AI systems are as fascinating as they are concerning because of their extraordinarily complex technology. While they can save lives by enabling a quantum leap in disease diagnosis, they are also exploited by authoritarian regimes to carry out mass surveillance of citizens.

Since its launch at the end of last year, ChatGPT has sparked worldwide interest in generative artificial intelligence thanks to its ability to create elaborate text, from emails, articles and poems to computer programs and translations, in just seconds.

But the circulation on social media of fake images, more convincing than real ones, generated by apps such as Midjourney has drawn attention to the danger of manipulating public opinion.

Prominent scientists have also called for a pause in the development of the most powerful systems until a law is passed to better regulate them. In its broad outlines, the Parliament’s position confirms the Commission’s approach.

The text builds on existing product-safety rules and will rely chiefly on controls carried out by the companies themselves. At the heart of the project is a list of rules imposed only on applications that the companies themselves deem “high risk”, based on criteria set by the legislator.

For the European Commission, this would cover all systems used in sensitive areas such as critical infrastructure, education, human resources, law enforcement or immigration management… Among the obligations: ensuring human control over the machine, producing technical documentation, and setting up a risk management system.

Compliance with these rules will be monitored by supervisory authorities designated in each member state. MEPs want these obligations to be limited to products that could threaten security, health or fundamental rights.

The European Parliament also intends to take better account of generative AI systems such as ChatGPT by imposing on them a specific regime of obligations similar to those applied to high-risk systems.

MEPs also want to compel providers to put safeguards in place against illegal content and to disclose the copyrighted material (scientific texts, music, images, etc.) used to develop their algorithms.

The Commission’s proposal, unveiled in April 2021, already provides a framework for AI systems that interact with humans: it would require them to inform users that they are dealing with a machine, and would oblige image-generating applications to label their output as artificially created.

Bans would be rare, applying only to applications contrary to European values, such as the citizen-scoring and mass surveillance systems used in China.

MEPs want to add a ban on emotion-recognition systems and to remove the exceptions allowing security forces to carry out remote biometric identification of people in public places. They also intend to prohibit the indiscriminate collection of images from the internet to build algorithms without the consent of the people concerned.
