While the Hermes Center for Transparency and Digital Human Rights, together with its partners The Good Lobby and info.nodes, welcomes the news that the European Parliament, Commission and Council have finally reached an agreement on the AI Act, it believes that some important aspects still need to be properly addressed by the legislation.
The deal was reached on December 8 in Brussels, after 36 hours of negotiations (trilogues). Some EU politicians and mediators who took part in the negotiations on the AI Act – the very first law aimed at regulating AI at the continental level – have stated that there is nothing to worry about: the law will ensure fundamental rights are protected from the potential harms of artificial intelligence.
However, judging by the outcome of this last trilogue, that does not appear to be the case. Here are a few bureaucratic loopholes, identified by EDRi (the European Digital Rights network), that could pose a major threat to the human rights of EU (and non-EU) citizens:
- RBI (remote biometric identification). The deal does not include a full ban on biometric recognition in public spaces, despite the remarkable efforts of several campaigns run by civil rights organizations over the past years – from Reclaim Your Face to Don’t Spy EU. Exceptions are allowed for searches for certain victims and suspects, and for the prevention of terror attacks. These are all very broad scenarios, and as of now there is no way to predict how law enforcement will actually employ these AI systems (which should have been banned in the first place). The use of retrospective RBI is limited to cases of “serious crimes”, although the definition of “serious” is not clear.
- AI systems employed in law enforcement and migration contexts. There are no specifics as to how “high-risk” systems will be used in these contexts.
- Partial bans on dangerous practices. We know some practices are prohibited – such as emotion recognition, banned in the workplace and in educational contexts – but their bans do not always apply to all contexts (law enforcement and migration, in this specific case, are exempt from the ban). Among the practices only partially banned are also predictive policing, biometric categorization, and social scoring.
- Self-assessment. Giving Big Tech companies the freedom (even if partial, in some cases) to decide whether the AI technology they develop is “high-risk” or not represents a failure of the public scrutiny system.
While it is true that a deal has been reached, the AI Act’s text is yet to be finalized. That means we can still push for a drafting that is compliant with democratic principles and values.
We, as civil society nonprofit organizations, will continue to fight for digital rights (including the rights to privacy and transparency), and against mass surveillance, discrimination, and the manipulation and abuse of AI technology. The future does not look too grim, after all.