Don't Spy EU


Biden’s AI Executive Order and what it says about face recognition

It’s the first attempt of the United States at regulating AI at the federal level. Some of its key points seem to go in the right direction, but there are still concerns.

On October 30, the Biden administration issued the AI Executive Order, a federal attempt at regulating AI in (temporary) absence of a dedicated law.

This is the first time the US government has openly addressed AI risks and opportunities at the national level, even though AI-related laws have already been proposed, passed, or come into effect in several states (State-level legislation), starting with the California Privacy Rights Act, passed in November 2020.

The AI Executive Order was issued just months before the tentative deadline for the European AI Act's negotiations, expected to conclude in December 2023.

What are the AI Executive Order’s key points?

According to the White House, key aspects of the Order include:

As can be noted, there is definitely a strong focus on innovation and advancement in the field of AI. However, some critics believe the Order does not do enough to protect US citizens’ personal data. That worry sounds credible, given historical precedents (remember the NSA mass surveillance scandal of 2013?).

The main concerns

The MIT Technology Review has commented that «although the Order advances the voluntary requirements for AI policy that the White House set back in August, it lacks specifics on how the rules will be enforced».

Most concerns naturally stem from potential collaboration between tech giants and the US government in the research and development of AI systems (such as foundation models, which need constant access to large amounts of data in order to function properly).

According to this article that recently appeared on The Markup, the biggest concerns about the AI Executive Order are:

What about Face Recognition?

The agency that will be in charge of certifying and testing new AI tools is the National Institute of Standards and Technology (NIST).

The Markup remarks that «there is precedent for NIST involvement with emerging software technology. The agency maintains several tools to evaluate facial recognition applications, including NIST’s “Face Recognition Vendor Testing Program,” established in 2000. NIST also publishes training datasets for facial recognition, including one consisting of mugshots that contained 175 photos of minors».

That does NOT sound reassuring at all. Let’s also keep in mind that the legality of biometric face recognition in public spaces in the US is a complex issue.

A number of US cities have also banned the use of face recognition technology in public spaces: San Francisco and Oakland, California, for example, have both banned its use by city agencies.
