Towards Global Governance: Progress on AI Regulation in Europe

Recent developments in the regulation of AI in Europe mark an important milestone in the governance of this technology. It is essential to develop public policies and regulations that promote the responsible development of AI and the protection of human rights worldwide through international cooperation and dialogue.

The latest milestones in the regulation of Artificial Intelligence (AI) at the Council of Europe (CoE) and in the EU mark a new stage in the governance of AI. These developments reflect the European vision of how to ensure an ethical approach, safeguard human rights and guarantee transparency in the development and deployment of AI, laying the foundations for responsible and comprehensive governance of this emerging technology.

First international Treaty on Artificial Intelligence and Human Rights

This treaty, the first of its kind, will ensure that the rise of Artificial Intelligence respects the Council of Europe’s legal standards on human rights, democracy and the rule of law. Its finalisation is an extraordinary achievement.

On 14 March, the Council of Europe’s Committee on Artificial Intelligence (CAI) finalised the draft Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Although the agreed text will ultimately not be binding on the private sector, it will require governments to assess the potential risks posed by companies’ use of AI. Indeed, the latest draft largely leaves it to each country to determine how it will ensure that the private sector complies with the treaty.

The agreed draft also excludes AI systems developed or used for national security purposes or for research. However, such systems will still have to incorporate the measures necessary to ensure compliance with international law.

Once the Council of Europe’s Committee of Ministers adopts the treaty in the coming weeks, countries around the world will be able to join it and uphold the high ethical standards it sets.

A pioneering model of cooperation in the international arena

It is important to applaud the vision, openness and inclusiveness shown by the CoE throughout the drafting of the Convention. This international treaty is the result of a collaborative framework in which non-member states of the CoE (the USA, Japan and Canada, among others) contributed as observers on the basis of shared values and objectives. This collaboration has not always been easy, but it has certainly been crucial. The Convention strikes the right policy balance precisely because it has benefited from the input of governments and experts, as well as industry and civil society.

The Convention will undoubtedly be a global instrument, open to the world, consistent with the principles and codes of conduct adopted in other bodies (OECD, UNESCO) and international processes (the Hiroshima Process), and aligned with the EU AI Act.

Shaping the future of artificial intelligence in the EU

Meanwhile, in the European Union, the European Parliament formally approved the proposed Artificial Intelligence Act on 13 March, with final adoption by the Council expected in April. This legislation is a pioneering global effort to establish a risk- and impact-based framework of obligations to ensure the safety, fundamental rights and sustainability of AI systems in the EU. It bans specific applications such as biometric categorisation based on sensitive characteristics and emotion recognition in the workplace. It allows the use of remote biometric identification systems in public spaces for law enforcement purposes, subject to judicial authorisation and strict restrictions. General-purpose AI models will be subject to strict obligations and oversight.

As the first jurisdiction to adopt specific legislation on AI, the EU is positioning itself as a leader in the responsible governance of this technology. Telecom companies must adapt to the new regulation, adopt a culture of ethical AI deployment and prepare for the additional legislation expected after the EU elections in June. Adapting to the new rules and ensuring the ethical use of AI are becoming strategic priorities for companies in the sector.

Self-regulation in AI governance: Telefónica’s model as a benchmark

In this context, self-regulation in Artificial Intelligence (AI) plays a crucial role, as institutional timelines are not keeping pace with innovation. Public and private entities have a responsibility to establish internal standards and oversight processes to ensure that AI systems are designed and deployed in an ethical manner, respecting human rights and avoiding potential harm. Self-regulation allows for greater flexibility and adaptability as technology advances, enabling organisations to keep up with the latest developments and ethical challenges in the field of AI.

Telefónica established its governance model for Artificial Intelligence in December 2023, in line with the European regulatory changes and building on the ethical principles and trustworthiness assessments it has applied since 2018. This model, which emphasises business involvement, functional coordination and risk orientation, will be key to the company’s technological progress, as it regards AI as an opportunity for well-being, economic growth and positive social impact.

While the adoption of the AI Act is an important step, it also marks the beginning of a nuanced implementation phase: the complexity of some of its legislative mandates should lead to an ongoing dialogue between the public and private sectors, especially in the setting of standards. Moreover, the possibility of additional AI legislation following the EU elections in June has already been anticipated, underlining the commitment to continuously refine and expand the regulatory landscape around AI technologies.

International cooperation and dialogue as a model for addressing AI challenges

In an increasingly interconnected and technology-dependent world, it is essential that we address the challenges of AI on a global scale. The Council of Europe’s AI Convention and the EU’s AI Act are important steps towards stronger and more ethical regulation of this technology in Europe. However, to maximise their impact and ensure effective implementation, it is crucial that these initiatives do not remain confined to European borders.

It is imperative that we seek global approaches that transcend geographical and cultural barriers. The principles and codes of conduct established by international bodies such as the OECD and UNESCO, and by processes such as the Hiroshima Process, are important steps in this direction. We must work together to develop norms and ethical standards to guide the development and use of AI worldwide, ensuring that this technology is used for the benefit of humanity and in full respect of fundamental rights.

In this regard, governments should continue to work with industry, academia and civil society to develop policies and regulations that promote the responsible development and use of AI. This can include the establishment of AI ethics and advisory committees, the implementation of transparency and accountability standards, and the promotion of education and digital literacy to enable people to understand and participate in the debate on AI regulation.

Only through international cooperation and dialogue can we effectively and equitably address the ethical, social and regulatory challenges posed by AI. Now more than ever, it is time to join forces and work towards a global regulation of AI that promotes responsible innovation and protects the interests of all people around the world.
