European institutions have reached an agreement on the regulation of artificial intelligence (AI), including generative AI. Celebrated as a historic milestone of great importance for European society and the economy, it is an unprecedented step towards fostering the development of safe and reliable artificial intelligence by all actors, public and private. “This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in this field,” said the European Parliament.
At the same time, agreements are also being reached elsewhere to advance the governance of artificial intelligence, whether through principles or codes of conduct: the Hiroshima Process, the Biden Executive Order, the OECD update, UNESCO’s Recommendation on the Ethics of AI, the Bletchley Declaration, and the Council of Europe’s AI Convention due to be unveiled in 2024. All of these initiatives seek to mitigate the risks inherent in the design and development of AI, including generative AI.
What does the regulation include?
The AI Act is based on a risk-based approach: the higher the risk of the AI system, the stricter the obligations. Some use cases are prohibited outright. Systems classified as high-risk, predefined as such in the law, must comply with a set of obligations, the so-called “minimum requirements”, both before being placed on the market and while they are being commercialised. For other AI systems, not classified as high-risk, voluntary measures apply. The agreement reached does not seem to apply to AI systems provided under free and open-source licences, unless they are high-risk AI systems or fall under other exceptions. Likewise, the regulation does not apply to AI models made accessible to the public under a free and open-source licence whose parameters are made publicly available, except for certain obligations.
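To make the tiered structure concrete, here is a minimal Python sketch of the decision logic just described. The tier names and the obligations they map to are illustrative assumptions; the actual classification depends on the use cases and annexes of the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the Act's tiered approach."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    OTHER = "other"

def obligations(tier: RiskTier) -> str:
    """Map a risk tier to the kind of obligation described in the text above."""
    return {
        RiskTier.PROHIBITED: "cannot be placed on the EU market",
        RiskTier.HIGH_RISK: "must meet the 'minimum requirements' before and after market placement",
        RiskTier.OTHER: "voluntary measures and codes of conduct apply",
    }[tier]

print(obligations(RiskTier.HIGH_RISK))
```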
Different types of actors are distinguished along the value chain: providers of AI systems, importers/distributors, and users of the systems, each with different responsibilities. The most onerous obligations fall on AI providers.
One of the most significant developments since the adoption of the European Commission’s proposal has been the emergence of foundation models and general-purpose AI systems capable of generating new content (e.g. GPT), which caused considerable public alarm. The AI Act now covers general-purpose AI (GPAI), including generative AI, which sparked a heated debate on whether the technology itself should be regulated, regardless of its risk. The compromise reached was to regulate GPAI models and systems, but in principle to impose transparency and cooperation obligations on their providers, so that downstream users have sufficient information to comply with the requirements of the regulation.
General-purpose AI models are classified as posing systemic risk if they have high-impact capabilities, or if the European AI Office decides, on its own initiative or following a qualified alert from a scientific panel, that the model has equivalent capabilities or impact. A model is presumed to have high-impact capabilities if, among other things, the cumulative amount of computation used to train it, measured in floating-point operations (FLOPs), is greater than 10^25, or if it is used by a certain number of clients. The European Commission will have the power to adjust these thresholds and to add indicators and benchmarks in line with technological developments.
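As a back-of-the-envelope illustration of the 10^25 FLOP presumption, the sketch below estimates training compute with the common “6 × parameters × tokens” heuristic. That heuristic, and the example figures, are assumptions for illustration only; they are not part of the Act.

```python
# The Act presumes "high-impact capabilities" above 10^25 training FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token (common heuristic, not in the Act)."""
    return 6.0 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if estimated training compute crosses the 10^25 FLOP presumption."""
    return estimated_training_flops(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: 10^12 parameters trained on 10^13 tokens -> 6e25 FLOPs.
print(presumed_systemic_risk(1e12, 1e13))  # True
```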
Providers of general-purpose AI models must maintain the model’s technical documentation, provide information to downstream providers who integrate the model, respect EU copyright law, publish a sufficiently detailed summary of the content used for training, and cooperate with the Commission and national authorities.
Providers of models with systemic risk must also conduct model evaluations according to standardised protocols, assess and mitigate systemic risks at EU level, keep track of and report serious incidents, conduct adversarial testing, and ensure an adequate level of cybersecurity protection. The development of codes of practice at EU level is encouraged and facilitated to support the proper implementation of the regulation.
Prohibited uses and exemptions for AI
Another major debate has concerned the expansion of prohibited uses, which now include emotion recognition in the workplace and in education, and the prediction of whether an individual will commit a crime. However, exceptions have been made for specific situations, such as searching for victims of crime.
In addition, the fines for non-compliance have been relaxed: 7% of total worldwide annual turnover or €35 million (whichever is higher) for placing prohibited AI systems on the market; 3% or €15 million for breaching other obligations; and 1.5% or €7.5 million for supplying incorrect information. SMEs will be given special treatment.
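The “whichever is higher” rule can be illustrated with a short sketch. The tier names here are hypothetical labels; the percentages and fixed amounts follow the figures above.

```python
# Fine ceilings: the higher of a percentage of worldwide annual turnover and a fixed amount.
FINE_TIERS = {
    "prohibited_practices": (0.07, 35_000_000),    # 7% or EUR 35 million
    "other_obligations": (0.03, 15_000_000),       # 3% or EUR 15 million
    "incorrect_information": (0.015, 7_500_000),   # 1.5% or EUR 7.5 million
}

def max_fine(annual_turnover_eur: float, tier: str) -> float:
    """Return the applicable ceiling: whichever of the two amounts is higher."""
    pct, fixed = FINE_TIERS[tier]
    return max(pct * annual_turnover_eur, fixed)

# Hypothetical example: EUR 2 billion turnover, prohibited practice -> EUR 140 million cap.
print(f"EUR {max_fine(2_000_000_000, 'prohibited_practices'):,.0f}")
```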
On the other hand, matters relating to national security, military and defence applications, AI systems used exclusively for research and innovation, and the non-professional use of AI are excluded from the regulation and governed by specific rules.
Impact on AI innovation
Some believe that the EU’s new artificial intelligence regulation is a case of over-regulation that could stifle technological innovation, particularly as regards the treatment of generative AI. Many of these voices argue that there is no “Big Tech” in Europe, only in the US, where (self-)regulation is largely left to the companies themselves, and that the new rules could put European companies at a competitive disadvantage.
Others, however, argue that it is a myth that AI regulation works against innovation, and there are elements in the Act that support this view. First, the inclusion of regulatory sandboxes, real-world testing provisions and open-source exemptions can facilitate innovation. Second, the regulation provides a clear standard and a defined governance model, ensuring legal certainty, which is extremely important for businesses.
In this context, it remains crucial to promote innovation through public policies that encourage investment in innovative projects and ecosystems, foster a culture of entrepreneurship, and attract talent and research projects with a vocation for real-world application.
It is not about striking a balance between innovation and regulation, but about responsible innovation by design. The regulation obliges actors involved in high-risk AI systems to assess potential negative impacts in advance rather than after market launch. It is far easier and cheaper to prevent or mitigate potential negative impacts at the outset, before major investments are made, than to “break and fix” after launch.