On October 30, 2023, the Biden administration issued the AI Executive Order, a federal attempt at regulating AI in the (temporary) absence of a dedicated law.
This is the first time the US government has openly addressed AI risks and opportunities at the national level, even though AI laws have already been proposed, passed, or taken effect in several states (State-level legislation), starting with the California Privacy Rights Act, passed in November 2020.
The AI Executive Order was issued just months before the tentative deadline for the completion of the European AI Act’s negotiations, set to conclude in December 2023.
What are the AI Executive Order’s key points?
According to the White House, key aspects of the Order include:
- Testing for companies. It mandates safety assessments for AI systems that pose significant risks to public safety, national security, or economic stability.
- Labeling. It directs new watermarking tools to clearly label AI-generated content.
- Anti-discrimination (?). It directs agencies to develop guidelines for preventing and mitigating bias and discrimination in AI systems.
- Investing in AI Workforce. It calls for supporting programs to equip workers with the skills needed for the age of AI and attracting global AI talent to the United States.
- Fostering AI Research and Development. It supports research on AI’s impact on society and the economy, including its potential effects on jobs and labor markets.
As can be noted, there is a strong focus on innovation and advancement in the field of AI. However, some critics believe the Order does not go far enough in protecting US citizens’ personal data, a credible concern given historical precedents (remember the NSA mass surveillance scandal in 2013?).
The main concerns
The MIT Technology Review has commented that «although the Order advances the voluntary requirements for AI policy that the White House set back in August, it lacks specifics on how the rules will be enforced».
Most concerns naturally stem from potential collaboration between tech giants and the US government on the research and development of AI systems (such as foundation models, which need constant access to large amounts of data in order to function properly).
According to an article that recently appeared on The Markup, the biggest concerns about the AI Executive Order are:
- Lack of transparency and accountability. The text doesn’t do enough to ensure that AI systems are transparent and accountable.
- Insufficient focus on equity and civil rights. It doesn’t adequately address the potential for AI to exacerbate existing inequities and civil rights violations.
- Weak safeguards against misuse. The order does not provide strong enough safeguards against the misuse of AI. This could lead to AI being used for harmful purposes such as surveillance, social control, and warfare.
- Lack of coordination across government. The order does not do enough to coordinate AI policy across different government agencies. This could lead to duplication of effort and a lack of coherence in AI policy.
What about Face Recognition?
The agency that will be in charge of certifying and testing new AI tools is the National Institute of Standards and Technology (NIST).
The Markup remarks that «there is precedent for NIST involvement with emerging software technology. The agency maintains several tools to evaluate facial recognition applications, including NIST’s “Face Recognition Vendor Testing Program,” established in 2000. NIST also publishes training datasets for facial recognition, including one consisting of mugshots that contained 175 photos of minors».
That does NOT sound reassuring at all. Let’s also keep in mind that the legality of biometric face recognition in public spaces in the United States is a complex issue.
- There is no federal law that explicitly prohibits or permits the use of biometric face recognition in public spaces. However, there are a number of state and local laws that regulate the use of this technology. Some states, such as California and Illinois, have passed laws that prohibit the use of biometric face recognition without the consent of the individual. Other states, such as Texas and Florida, have passed laws that allow the use of biometric face recognition but require law enforcement agencies to obtain a warrant before using the technology.
- There is no federal law that explicitly prohibits the collection or use of biometric data without consent. However, there are a number of state laws that address this issue. For example, Illinois’ Biometric Information Privacy Act (BIPA) requires companies to obtain written consent from individuals before collecting their biometric data, such as facial scans or fingerprints. BIPA also prohibits companies from selling or profiting from biometric data without consent.
There are also a number of cities in the United States that have banned the use of face recognition technology in public spaces. For example, San Francisco and Oakland, California, have both banned the use of face recognition technology by city agencies.