As the formal end of the AI Act negotiations approaches, dozens of digital rights organizations across Europe are concerned that their requests to EU lawmakers will not be addressed due to time constraints and a failure to convince political opponents.
The final round of trilogues (negotiations) is scheduled to take place on December 6 in Brussels. That’s when the European Parliament, the EU Commission and the Council will finally agree on some of the most crucial regulatory provisions, including partial bans on biometric recognition and the regulation of foundation models.
Biometric Recognition: one last appeal
Biometric recognition, or RBI (Remote Biometric Identification), is an AI-powered technology that allows its operators to identify individuals (name, surname, other personal information) whose face appears in a photograph or in surveillance camera footage, or whose voice, fingerprint, or any other kind of biometric data has been collected and scanned.
While it may sound like pure dystopia, RBI is currently in use in several countries around the world, including the United States, China, South Korea, and Iran. The aim of the Don’t Spy EU campaign is to discourage the use of this invasive and often inaccurate technology, which can lead to bias, discrimination, and human rights violations, as demonstrated by past cases of RBI abuse.
Now, here is the position currently adopted by the EU, as per the latest updates. As reported by Euractiv, the co-rapporteurs’ latest proposals would allow law enforcement agencies, in limited and well-defined cases, to use RBI in public spaces (squares, crossroads, etc.) both in real time and retrospectively (on previously recorded footage), with the retrospective use being a recent addition.
The co-rapporteurs also propose wording intended to advance the debate on national security, which the Council wishes to exclude from the regulation’s scope, by specifying that the regulation is «without prejudice to Member States’ competences» in military, defence or national security matters.
Lastly, with regard to Article 5, Brando Benifei (S&D) proposes retaining the Parliament’s approach to the ban on biometric categorization, while excluding commercial services from the scope.
As can be seen, there is no mention of a full ban on RBI, which many organizations in the field, ours included, had initially called for. Time is running out, and it is very unlikely that this partial ban will change.
Fifteen organizations, including Amnesty International and EDRi (the European Digital Rights coalition), have signed an open letter to legislators condemning the partial ban on RBI. Here is the full text.
Academics against «codes of conduct» for foundation models
Foundation models are large neural networks trained on massive datasets to handle a wide variety of tasks (examples include GPT-4, PaLM 2, and LaMDA). The AI Act was also meant to provide a detailed regulatory framework for this type of technology, often described as the backbone of modern Artificial Intelligence.
However, not all European countries are comfortable with the EU interfering with corporate interests, especially companies that are often closely tied to national governments. This is the case for France, Germany and Italy, whose governments have proposed «codes of conduct» to regulate foundation models. In other words, they would rather rely on companies’ self-regulation.
In this case as well, an open letter was sent to legislators to ensure they understand the risks that corporate self-regulation of AI foundation models would imply. The letter has been signed by 16 professors and academics. Here’s the full text.
Upcoming dates
The first part of a new compromise document on foundation models and other outstanding issues was submitted to the Member States’ deputy ambassadors (Coreper) on November 29.
The remaining framework will be addressed in a second compromise document, to be discussed by Coreper on December 1.