Don't Spy EU

How the AI Act can prevent the spread of deepfakes

Two scenarios in which deepfakes play a pivotal role

Credits: Cover picture via Nicolas Livanis on X.

Scenario #1 – The AI Act, the first European body of legislation of its kind, will regulate, among other AI technologies, remote biometric identification (RBI), including facial biometric recognition. By imposing maximum restrictions on RBI, the law would consequently curb deepfake production, since deepfakes are built on the same facial biometric software.

Scenario #2 – If the AI Act does allow facial biometric recognition, not only will deepfakes remain untouched by the law, but in certain cases creating them could even become a necessary act of defense of our digital profiles.



What should be protected and why

Facial biometry is more than just a technological advancement. It is every individual’s distinctive fingerprint. Unlike “traditional” prints that require physical contact, a face can be captured, stored, and processed from a distance, often without the person’s consent or even knowledge. As AI systems evolve, so do their capabilities, and combined with facial biometry they pave the way for misuse and surveillance dystopias.

The dangers aren’t just hypothetical. With the rise of Remote Biometric Identification (RBI), public authorities and private tech companies can remotely query vast digital databases to identify individuals in real time, as seen with Clearview and PimEyes. The concern, however, stretches beyond surveillance. Because the very essence of a person’s identity is at stake, this data becomes a goldmine for malicious actors aiming to create deepfakes: hyper-realistic videos or images in which the facial biometric data of the ‘victim’ is transposed onto another body, blurring the line between reality and fiction.
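To make concrete what “remote identification” means technically, below is a minimal sketch of the matching step at the heart of such systems: a captured face is reduced to an embedding vector and compared against every enrolled vector in a database. The gallery size, the 128-dimensional embeddings, and the random data are illustrative assumptions, not any vendor’s actual pipeline.

```python
import numpy as np

# Hypothetical setup: a scraped "gallery" of face embeddings, one per
# enrolled identity. Real systems derive these vectors from photos with
# a face-encoder network; here random unit vectors stand in for them.
rng = np.random.default_rng(0)
EMBEDDING_DIM = 128  # a typical size for face encoders
gallery = rng.normal(size=(10_000, EMBEDDING_DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
identities = [f"person_{i}" for i in range(len(gallery))]

# A "probe": the embedding of a face captured by a camera. We simulate
# a new photo of a known person by perturbing an enrolled vector.
probe = gallery[42] + 0.05 * rng.normal(size=EMBEDDING_DIM)
probe /= np.linalg.norm(probe)

# Identification is a nearest-neighbour search: cosine similarity
# against every enrolled vector, then take the best match.
similarities = gallery @ probe
best = int(np.argmax(similarities))
print(identities[best], round(float(similarities[best]), 2))
# -> person_42, with a far higher score than any stranger (~0.87 here)
```

The asymmetry is the point: enrolling someone takes a single scraped photo, after which every camera frame becomes a database query against their identity.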

Given that the creation of deepfakes heavily relies on extracting facial biometric imprints, any unregulated spread of facial biometric processing tools puts personal identities at risk. These tools, initially developed for potentially beneficial purposes, can easily be appropriated and misused, driving harmful practices such as disinformation campaigns and personal attacks.

Furthermore, when such technologies become widely accessible and loosely regulated, even in the hands of public actors, the potential for abuse skyrockets. Consider a scenario where state surveillance isn’t just about tracking movement, but also about profiling emotions, predicting behaviors, and more. It’s a world where personal privacy is drastically eroded and the sense of self is persistently under scrutiny.

To safeguard individual identities and maintain public trust, it’s crucial to heavily limit, if not fully ban, the use of facial biometry in AI systems, especially in publicly accessible spaces. This is not just about preventing the misuse of technology; it’s about preserving the very essence of individuality and freedom in a digital age. Only through rigorous regulations and prohibitions can we ensure that facial biometry is used responsibly and does not become a tool for harm.

What policy-based protection looks like

While it’s true that policies might not completely eradicate deepfakes from the internet due to rogue actors operating in the shadows, the right regulatory frameworks can drastically curtail the proliferation and misuse of this technology.

By establishing stringent policies, we can ensure several layers of protection:

  1. Transparency and Accountability: With robust regulations in place, only those AI systems that comply with ethical standards and are transparent about their usage of facial biometry will be permitted. This demands that organizations and developers declare their intentions, methodologies, and data sources, making them accountable for any misuse.

  2. Licensing and Certification: By introducing a mandatory licensing system for AI tools that use facial biometrics, we can ensure that only vetted and responsible entities have access to such potent technologies. This would prevent the mass spread of these tools into unqualified hands.

  3. Technical Barriers: Establishing standards for how biometric data should be stored, processed, and accessed would create technical hurdles for malicious reuse. Data encryption, secure databases, and controlled access can minimize the risk of data breaches or unauthorized access; a minimal sketch of encryption at rest follows this list.

  4. Auditing and Monitoring: Regular audits of licensed AI systems can ensure ongoing compliance with established norms. Any deviations or misuses can be quickly identified and rectified.

  5. Public Awareness: Policies can also mandate that AI developers and deployers educate the public about the risks and benefits of biometric systems. This can empower individuals to make informed decisions about their data and the technologies they interact with.

  6. Legal Repercussions: Strong policies will be backed by legal frameworks that penalize misuse, holding malicious actors accountable and deterring potential wrongdoers from exploiting the technology.
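As a concrete illustration of point 3 above, here is a minimal sketch of encryption at rest for biometric templates, using the Fernet recipe from the Python cryptography package. The key-handling comment and the template format are simplifying assumptions, not a prescribed standard.

```python
import numpy as np
from cryptography.fernet import Fernet

# Hypothetical policy requirement: biometric templates must never be
# stored in plaintext. A baseline is symmetric encryption at rest, with
# the key held in a separate key-management service, never beside the data.
key = Fernet.generate_key()  # in production: fetched from a KMS
vault = Fernet(key)

# A face embedding as it might come out of an encoder (dummy values).
template = np.random.default_rng(1).normal(size=128).astype(np.float32)

# Encrypt before the template ever touches the database ...
ciphertext = vault.encrypt(template.tobytes())

# ... and decrypt only inside an audited, access-controlled code path.
restored = np.frombuffer(vault.decrypt(ciphertext), dtype=np.float32)
assert np.array_equal(template, restored)
```

Encryption alone is not sufficient; access control and auditing (points 3 and 4) determine who may ever call the decrypt path. But it does turn a database leak from an identity catastrophe into a contained incident.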

By taking a policy-driven approach, the goal isn’t to stifle innovation but to direct it responsibly. While it might be a tall order to eliminate every deepfake from the internet, a robust policy framework can ensure that the AI tools using facial biometry operate within a controlled, ethical, and transparent environment. This proactive stance might just be the best defense in preserving trust, privacy, and individual rights in an interconnected world.

What the surveillance lobby isn’t considering

The landscape of surveillance technology is overwhelmingly driven by business interests and, more often than not, devoid of ethical considerations. Companies producing surveillance technology primarily operate with one mandate: to sell their products. Unfortunately, this means that the integrity of information ecosystems often takes a backseat. With no inherent incentive to protect or promote ethical use, it’s naive to expect an unbiased assessment of their technologies.

In the midst of such a landscape, upholding ethical values becomes even more crucial. The six safeguards we identified earlier — transparency and accountability, licensing and certification, technical barriers, auditing and monitoring, public awareness, and legal repercussions — aren’t just best practices; they’re necessities. Left unchecked, the surveillance tech industry could pave the way for an unprecedented erosion of individual privacy and civil liberties.

Another troubling narrative propagated by surveillance enthusiasts is that tools like RBI are essential for solving crimes. This claim doesn’t hold water. The available evidence shows that while RBI and other surveillance mechanisms may aid individual investigations, they are not the silver bullet they’re often made out to be. Instead, they exploit human vulnerabilities, capitalizing on fear to push for ever more intrusive measures.

Rather than succumbing to these narratives, our collective focus should be on fostering a future where technology serves humanity, not the other way around. It’s a future where personal privacy isn’t traded for perceived security, where tech companies are held to account, and where society doesn’t sleepwalk into a surveillance dystopia. Setting and imposing ethical standards isn’t just about keeping surveillance companies in check, but about preserving the very essence of a free, democratic society.

Infographic

The scheme below shows that if biometric identification is legitimized by the AI Act, implicitly weakening the current GDPR provisions, we immediately become more vulnerable to tools that profile people.

A sad but necessary countermeasure would be to “pollute” the Internet with deepfakes of ourselves, so that our profiles become associated with behaviors that are not typically ours, reducing the credibility of these services and anonymizing our actual interests and identities.
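As a rough numerical illustration of this idea (a toy model under stated assumptions, not an endorsement of any specific tool): if many decoy embeddings carry your biometric signature but are attached to fabricated behaviors, a probe photo of you matches all of them about equally well, and whatever profile a service builds on top is diluted into noise.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 128  # illustrative embedding size, as in the earlier sketch

def unit(v):
    return v / np.linalg.norm(v)

# Your true face embedding, as a profiling service would store it.
me = unit(rng.normal(size=DIM))

# "Pollution": deepfaked media of you, each carrying your biometric
# signature but paired with interests and behaviors that are not yours.
decoys = np.stack([unit(me + 0.05 * rng.normal(size=DIM)) for _ in range(50)])

# A probe photo of you, captured later by the same kind of service.
probe = unit(me + 0.05 * rng.normal(size=DIM))

# Every decoy matches the probe about as well as the real you would,
# so the "profile" behind the match averages over 50 fabrications.
scores = decoys @ probe
print(f"{(scores > 0.5).sum()} of {len(decoys)} decoys match the probe")
# -> 50 of 50 decoys match the probe
```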


