Generative artificial intelligence is the technology that will have the greatest impact on society and the economy in the coming years. The big difference between generative AI and more established types of AI is that it makes the leap from cognitive capabilities into the realm of creative capabilities. With generative AI, the machine produces new information rather than simply recognising, analysing or classifying existing content. In other words, it has the capacity to create new content.
Some studies estimate that it has the potential to add between $2.6 trillion and $4.4 trillion annually to the global economy, which would increase the overall economic impact of artificial intelligence by 15 to 40 per cent. Its use will extend to all industrial sectors and will have a particular impact on banking, high tech and research. It will also have a very significant impact on the labour market, improving productivity and contributing to annual productivity growth of between 0.1 and 0.6 per cent through 2040.
Although it is still at an early stage of development and, above all, of application, it is one of the technologies around which the private sector has built the greatest expectations, given its potential to improve productivity, growth and efficiency.
Hence, in recent months there has been a proliferation of proposals at global and regional level to improve the governance framework for artificial intelligence, including generative AI. The rapid evolution of this technology has shown that such frameworks must be flexible and quick to adapt to innovation, while remaining compatible with individual rights and offering legal certainty to all parties involved.
In this post, we will look at the European Commission’s proposed AI Pact and at the Hiroshima principles and code of conduct to which companies would voluntarily subscribe.
European Pact for Artificial Intelligence
In the final phase of the negotiations on the Artificial Intelligence Act, which covers generative AI models, the European Commission presented the AI Pact on 16 November in Madrid, in the framework of the AI Alliance event “Leading Trustworthy AI Globally”.
The AI Pact is a call on companies to commit voluntarily to implementing the AI Act before it enters into force. In doing so, the EU aims to speed up institutional processes, which are out of step with the pace of innovation and technology adoption, in order to establish a regulatory “standard” for AI on a global scale, including generative AI.
It is certainly crucial to strengthen cooperation with companies that continue to demonstrate their commitment to responsible AI; but for such an initiative to work, institutions need to agree on a law that creates legal certainty and properly balances innovation with people-centred AI. The launch of the AI Pact in 2024 will be a very important step towards accountable and transparent governance of AI. And, for this to happen, open dialogue with businesses that draws on their expertise will be key.
The Hiroshima Principles of Artificial Intelligence
In May 2023, G7 countries agreed to prioritise collaboration for inclusive AI governance, seeking to develop secure and reliable systems while maximising the benefits of the technology, including for developing and emerging economies, by bridging digital divides.
The Hiroshima principles for advanced AI models, including foundation models and generative AI systems, were endorsed by the G7 last October. They are a set of 11 international guiding principles intended to apply to all AI actors and to cover the design, development, deployment and use of advanced AI systems.
These principles have in turn served as the basis for the creation of a code of conduct for AI developers. Adherence by companies to these principles is voluntary and would logically have to be adapted to the specificities of each jurisdiction.
It is important to remember that these principles build on the OECD’s AI principles and can therefore be seen as an evolution of them and a rapid response to recent developments in advanced artificial intelligence systems. In addition, they call for greater specificity in the actions to be taken by organisations that choose to subscribe to the code of conduct, “in accordance with a risk-based approach”.
In this context, the OECD is also reviewing its AI principles and its definition of artificial intelligence in response to the rapid evolution of generative AI models.
This is an important step forward in the search for a global governance model, which should seek interoperability of regulatory frameworks, to provide certainty and reliability for the development and adoption of this technology.
The role of business in the global governance of advanced artificial intelligence models
These two proposals give concrete form to the aspiration of implementing regulatory and principle-based schemes for trustworthy, secure and people-centred artificial intelligence. We are now entering a new phase: artificial intelligence does not stop at national borders, so its governance requires global solutions and approaches.
Businesses have already come a long way by adopting self-regulatory principles in favour of responsible artificial intelligence, consistent with fundamental human rights, democracy and the rule of law. All of this is very much in line with the proposals of international organisations such as the OECD and UNESCO, and with those of the European Union.
European Pact for AI
The European proposal seeks to move forward through accelerated implementation of the regulation on the basis of voluntary adherence by business. The AI Pact will be a success if this final leg of the negotiations results in a regulation that clears up some of the doubts raised by businesses and by some Member States. It is very important that the definitions of generative AI systems and foundation models are adapted to how they are actually applied; there cannot be a dissonance between theory and reality. In addition, a risk-based approach must be preserved in which each actor assumes obligations commensurate with its role and capabilities. It is critical that the regulation guarantees people’s rights, safety and health while at the same time fostering innovation and business competitiveness. That is the key.
The G7 principles
The Hiroshima process, for its part, is a broad guide to conduct that provides, within a flexible framework, a global understanding of the limits on the development and use of artificial intelligence. Hence, the process of consultation with business will be critical if it is to become, in practice, the reference framework for reliable and safe advanced AI systems.
In conclusion, technology is changing the processes, forms and timing of regulation, as well as its principles. Only substantive and iterative public-private collaboration will make it possible to move towards the governance of artificial intelligence, including generative AI. The complexity of artificial intelligence systems and the speed of innovation require flexible schemes that adapt rapidly to innovation, as well as constant dialogue between the parties.