AI systems are increasingly developed and deployed for harmful and discriminatory forms of state surveillance. Such systems disproportionately target already marginalised communities, undermine legal and procedural rights, and contribute to mass surveillance. When AI systems are deployed in the contexts of law enforcement, security and migration control, there is an even greater risk of harm and of violations of fundamental rights and the rule of law. To maintain public oversight and prevent harm, the EU AI Act must include:
- A full ban on real-time and post remote biometric identification in publicly accessible spaces, by all actors, without exception;
- A prohibition on all forms of predictive and profiling systems in law enforcement and criminal justice (including systems that target individuals, groups, and locations or areas);
- Prohibitions on the use of AI in migration contexts to make individual risk assessments and profiles based on personal and sensitive data, and on predictive analytics systems used to interdict, curtail and prevent migration;
- A prohibition on biometric categorisation systems that categorise natural persons according to sensitive or protected attributes, as well as the use of any biometric categorisation and automated behavioural detection systems in publicly accessible spaces;
- A ban on the use of emotion recognition systems to infer people’s emotions and mental states;
- A rejection of the Council’s addition of a blanket exemption from the AI Act for AI systems developed or used for national security purposes;
- The removal of the exceptions and loopholes for law enforcement and migration control introduced by the Council;
- Public transparency as to what, when and how public actors deploy high-risk AI in areas of law enforcement and migration control, with no exemption from the obligation to register high-risk uses in the EU AI database.
Read the full policy recommendation document (EDRi ’23).