Artificial Intelligence: innovation, ethics, and regulation

Artificial Intelligence (AI) is in the spotlight of companies and policy makers. It is an emerging technology with great potential, a key lever for industrial competitiveness and societal good. But AI presents not only opportunities, it also presents challenges. How should society address them?

Reading time: 8 min

An unprecedented step was taken this week. The European Parliament has approved the first-ever Artificial Intelligence Regulation proposal. This is not the end of the line: the EU interinstitutional negotiations will now begin in order to reach agreement on the final text.

This Regulation will set a global standard. Its purpose is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy, the rule of law and the environment from the potentially harmful effects of artificial intelligence systems.

However, the approach is controversial in that it regulates not only the impact or use of the technology but also the technology itself, including foundation models and generative AI, potentially hindering innovation.

The advent of generative AI tools and their impact on current governance models

The rapid adoption of ChatGPT and the controversy it has generated may be one of the underlying reasons for the swift reaction from the EU side on its AI Act proposal. These AI tools will increasingly be used for a range of business functions and their societal impact has raised growing concerns.

In the private sector, some top representatives from the AI industry, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, declared that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war”.

Others question this view. Marc Andreessen, innovator and co-creator of the first widely used web browser, Mosaic, who claimed more than a decade ago that “software is eating the world”, recently responded to the growing debate about the risks of AI with a post asking “Will AI save the world?”.

There is broad agreement on the need to diminish the potential risks of AI. However, at Telefónica we do not favour a culture of prohibiting or regulating the technologies themselves. Regulatory change does not keep pace with rapid technological evolution, and any regulation focused on the technology will soon become obsolete, with potentially negative consequences for society and the economy.

Instead, we believe that responsible digitalisation must underpin the way we design and use technology, including generative AI. Telefónica therefore includes ethical principles and transparency requirements in its AI governance model.

The next steps of the AI Act for a positive societal and economic impact

In their negotiations, the EU institutions cannot afford to lose sight of the fact that artificial intelligence is a technology that will have a very positive transformative effect on our economy and society. It is a game-changing technology, and a driver of regional and business competitiveness. The resulting AI Act should also support innovation and improve the functioning of the internal market.

Indeed, in the age of data, Artificial Intelligence has become a key lever for industrial competitiveness, creating competitive advantages and taking on a geostrategic dimension for countries. AI can promote innovation in services and new business models, generate efficiencies, and bring positive social impact.

The key challenge is to deliver innovation with a human-centric and trustworthy approach. Telefónica’s public positioning sets out our experience and commitment to improving prosperity while safeguarding people’s rights and our model of society.

The three pillars of governance: global guidelines, self-regulation, and a regulatory framework

In our view, the three pillars of AI governance are global guidelines, self-regulation, and a suitable regulatory framework. We believe that the scope of the EU AI Act should be limited and complemented by regional guidelines in line with global agreements, and by responsible behaviour from public and private actors based on ethical principles by design.

Regional and Global guidelines

Artificial Intelligence is not confined to national borders and therefore requires global solutions and approaches to achieve greater legal certainty and protection of people’s rights. We need guidelines and cooperation to foster a global convergence of ethical principles and practice, and we welcome new developments in this respect.

In Europe, the announcement of the AI Pact, a voluntary commitment by industry to anticipate the AI Act, should serve to provide certainty in the implementation of this regulation, support innovation and improve the functioning of the internal market.

In addition, international initiatives have been approved to address the challenges and promote the ethical use of Artificial Intelligence. In May 2023, G7 countries agreed to prioritise collaboration for an inclusive AI governance. Governments emphasised the importance of forward-looking, risk-based approaches to trustworthy AI, in line with shared democratic values.

In the same vein, during the fourth EU-US Trade and Technology Council (TTC) ministerial meeting held on 31 May 2023, the TTC agreed to explicitly include generative artificial intelligence systems, such as ChatGPT, within the scope of the Joint Roadmap on evaluation and measurement tools for trustworthy AI and risk management. Margrethe Vestager, the European Commission Executive Vice-President, announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems and to encourage companies to voluntarily sign up.

Self-regulation

Additionally, self-regulation presents significant opportunities. First, the pace of AI development and innovation far exceeds the speed at which norms are adopted, which often takes years. Second, the technology’s complexity makes it difficult to set general a priori regulations applicable to different situations, which could inhibit innovation. Third, for uses that are not considered high risk, self-regulation is more efficient from a financial and administrative point of view. Finally, under no circumstances does it undermine the protection of people’s rights, health, democracy and safety; on the contrary, it contributes to improving digital services and broadening individual and collective opportunities.

Telefónica has approved ethical principles for AI that apply throughout the company. These principles apply from the design and development stage through to the use of products and services, and extend to company employees as well as suppliers and third parties. Their application is based on a “Responsibility by Design” approach, which allows us to incorporate ethical and sustainable criteria throughout the value chain.

Regulatory framework

The debate on the regulation of Artificial Intelligence requires a holistic vision that combines international cooperation, self-regulation, appropriate public policies and a risk-based regulatory approach. All this serves the dual objective of mitigating risks and building a human-centric and trustworthy framework for innovation and economic growth. In turn, this approach would favour the ethical use of technology and its uptake.
