As the negotiations surrounding the AI Act draw to a close, dozens of nonprofit organizations across Europe – including us behind Don’t Spy EU – maintain that several ethical issues around certain types of AI systems and technologies remain, to this day, unresolved.
These issues include, among others, the exemptions granted to law enforcement for the use of AI-powered live Remote Biometric Identification (RBI), and corporate interference in the regulation of foundation models.
While the next political trilogue (a round of negotiations between the three institutions) is scheduled for December 6, the next technical meeting with representatives of those institutions (European Parliament, Commission and Council) takes place today, Friday, November 24.
If the issues mentioned above are not adequately tackled in the next couple of weeks, European citizens may face the approval of a dystopian draft of the AI Act.
Unregulated foundation models: an emerging threat
Foundation models are the backbone of artificial intelligence. They are machine learning models that, unlike earlier models, are not designed and trained to solve one specific problem or perform one specific task but can be used for several applications (general-purpose AI).
This is possible only through collecting extensive amounts of data, which are then “fed” into the foundation model.
Examples of foundation models include OpenAI's GPT-4, which powers ChatGPT, and Google's PaLM 2 and LaMDA, which have powered Bard. They are among the most powerful neural networks, because they are trained on some of the largest and most diverse datasets in the world.
But they are not the only ones, and many other companies are looking into creating their own. OpenAI's feature for building custom models further complicates the landscape, as it empowers users to create their own, democratizing the technology to an unprecedented degree.
Providing a regulatory framework for these ever-evolving artificial neural networks is crucial to prevent, or at least limit, their exploitation and manipulation, often driven by profit or control motives. The AI Act is meant to serve exactly this purpose.
However, negotiators have not agreed on a definitive plan yet. MEP Dragoş Tudorache has recently reiterated the need for a comprehensive approach that addresses the unique characteristics of these models, and MEP Brando Benifei has emphasized the importance of maintaining parliamentary oversight to ensure accountability – rather than delegating key decision-making processes to the EU’s AI Office.
France, Germany and Italy’s proposal
Meanwhile, France, Germany and Italy have proposed an alternative approach to regulating foundation models. According to a document leaked by Politico, they advocate self-regulation through "codes of conduct" aligned with international principles ("at the G7 level").
This proposal very much reflects the ongoing debate between prescriptive regulations and industry-led initiatives. Who else approves of the proposal?
Business federations Medef (France) and BDI (Germany) have supported the proposal, calling for a focus on general-purpose AI systems and their high-risk applications.
In a letter sent to several legislators, they argue that foundation model providers should be subject only to basic transparency obligations, warning that overly strict rules could stifle innovation and make compliance burdensome.
They are also reluctant to reveal how protected training data has been used, arguing that such disclosure could jeopardize their trade secrets.
Let’s persuade the Italian government to drop the proposal!
Upon learning of Italy's involvement in the foundation models proposal, a few questions came to mind. Why would the Italian government align with the position of Germany and France, two countries intent on protecting their national interests and big-tech corporations?
Many open-source software companies are based and operate in Italy, and the government should protect them by championing a model of sustainable, ethical and democratic technological development instead.
Along with fellow nonprofit organizations The Good Lobby, Privacy Network, StraLI, Period Think Tank and Gender & Policy Insights, we have sent a letter to the representatives of the Italian government and Parliament, as well as to the Permanent Representatives of Italy within the EU.
All of these political figures can influence the position against regulating foundation models that the Italian government has taken at the EU Council. We demand that our politicians take concrete action against digital injustice, so that our country is not complicit in this mistake.
In addition, we contacted the entire Italian Artificial Intelligence Network, a movement comprising university professors, professional associations, and other key figures in the AI field, asking them to join us in spreading the latest updates and raising awareness among Italian citizens.