Resilience in the age of artificial intelligence

Artificial intelligence (AI) is transforming the world as we know it at breakneck speed, and it is often described as the fastest-adopted technology in history. But what role does cybersecurity play in this new world?


As we come to rely more and more on these tools, the institutions, companies and individuals who work in cybersecurity every day are increasingly concerned: AI, like any other technology, has vulnerabilities.

Moreover, the more powerful and widespread a digital tool becomes, the more attractive a target it is for malicious actors. In this context, securing AI systems is not optional but an urgent necessity, one that any organisation must address before and during the implementation of any process governed by artificial intelligence.

Cybersecurity in AI, an ideal marriage

AI needs protection, focused above all on preventing data theft and the manipulation of the models and processes behind each use case.

Every AI-based application is complex in structure and design, and that complexity can leave many doors open simply through ignorance. Moreover, AI is increasingly used in critical company processes, so great care must be taken to ensure that an attack on it cannot interrupt an activity and leave the core of a business or institution inoperable.

Many attacks are already well known, such as deepfake-based impersonation and fraud, or model theft through extraction attacks, in which attackers reconstruct a trained model by querying its public API and then recreate its logic for malicious purposes.

What are the main cybersecurity threats in AI?

In general terms, and bearing in mind that new types of attack appear with each passing day, we can currently talk about the following threats:

  • Attacks on the infrastructure, plug-ins or agents behind AI: the classic cybersecurity scenario. Like any other connected infrastructure, AI systems are susceptible to attack (unauthorised access, DDoS attacks, malware, etc.).
  • Data theft and manipulation: AI models consult large amounts of data, and data has always been a prized target for cybercriminals. In addition, tampering with that data can manipulate the model's results.
  • Theft of AI models: knowing how an AI is implemented gives a cybercriminal a way in to manipulate it towards their own objective: extracting data, evading its controls, modifying its behaviour, etc.
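To make the extraction threat above concrete, here is a deliberately simplified sketch: a hidden linear classifier stands in for a real model behind a public API, and the "attacker" recovers an equivalent model from predictions alone. Everything here is a toy assumption for illustration, not a real attack tool.

```python
# Minimal sketch of a model-extraction attack, for illustration only.
# The attacker never sees SECRET_W: only inputs in, labels out.
import numpy as np

rng = np.random.default_rng(0)

SECRET_W = rng.normal(size=5)  # the victim model's hidden weights


def public_api(queries):
    """All the attacker observes: send inputs, receive predicted labels."""
    return (queries @ SECRET_W > 0).astype(int)


# The attacker probes the API with synthetic queries...
queries = rng.normal(size=(3000, 5))
stolen_labels = public_api(queries)

# ...then fits a surrogate on the stolen (input, label) pairs
# (plain least squares on +/-1 targets, good enough for a toy).
targets = 2 * stolen_labels - 1
w_surrogate, *_ = np.linalg.lstsq(queries, targets, rcond=None)

# The surrogate now mimics the victim on inputs it never queried.
fresh = rng.normal(size=(1000, 5))
agreement = ((fresh @ w_surrogate > 0).astype(int) == public_api(fresh)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```

The point is that nothing internal to the model leaked: enough query/answer pairs were sufficient to recreate its logic, which is why rate limiting and monitoring of API queries matter.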

Furthermore, OWASP (the Open Worldwide Application Security Project) is already publishing the most common vulnerabilities, and the mitigations needed to develop and protect generative AI applications and language models, through its Top 10 for LLM Applications.

What can we do to protect AI?

What is clear today is that we must start protecting these types of implementations from the very early stages; the path to AI must be resilient by default.

Furthermore, we must be very aware that, within organisations, there are two very different worlds to protect: the users of generative AI and the departments that are creating their own AI applications.

If we focus on the world of AI developed internally by organisations to improve and automate business and corporate processes, it is important to protect the development and deployment stages, as well as the data we feed these AIs. In this world, covering the basics of AI protection means:

  • keeping an inventory of the different AIs in use (ideally with automatic discovery);
  • knowing which users use them and which data they access;
  • protecting the models against known attacks;
  • continuously monitoring model behaviour in real time to detect malicious patterns.
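As a rough idea of what those basics look like in practice, the sketch below keeps a tiny inventory of AI systems and runs a simple traffic check against it. All names, fields and thresholds are hypothetical assumptions, not a specific product or standard.

```python
# Hypothetical sketch: an inventory of internal AI systems plus a
# simple real-time check that flags shadow AI and query bursts.
from dataclasses import dataclass


@dataclass
class AIAsset:
    name: str
    owner_team: str
    data_accessed: list   # data classifications the model reads
    baseline_qps: float   # normal query rate (queries per second)


# The inventory: every sanctioned AI system should be registered here.
registry = {
    "invoice-classifier": AIAsset(
        "invoice-classifier", "finance-it", ["internal"], baseline_qps=5.0
    ),
}


def check_traffic(model_name: str, observed_qps: float) -> list:
    """Flag unregistered models and query bursts that may signal extraction."""
    alerts = []
    asset = registry.get(model_name)
    if asset is None:
        alerts.append(f"shadow AI: '{model_name}' is not in the inventory")
    elif observed_qps > 10 * asset.baseline_qps:
        alerts.append(
            f"'{model_name}': {observed_qps}/s is >10x its baseline "
            f"({asset.baseline_qps}/s), possible extraction attempt"
        )
    return alerts


print(check_traffic("invoice-classifier", 120.0))  # burst alert
print(check_traffic("shadow-chatbot", 2.0))        # shadow-AI alert
```

Real platforms discover models automatically and correlate far richer signals, but the principle is the same: you cannot protect an AI you do not know you have.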

If we focus on the world of users of generative AI applications, we must pay special attention to knowing which apps are used and by whom, what confidential data may be shared with them, and what authorisation policies the corporation will apply to this type of application. Nor can we forget the main pillar of this side of cybersecurity: awareness, awareness and awareness.
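A corporate authorisation policy of the kind just described could start as simply as the following sketch: an allow-list of approved apps plus a crude pattern scan before a prompt leaves the organisation. The app names and data patterns are hypothetical, and real data-loss-prevention tooling is far more sophisticated.

```python
# Illustrative policy check for generative-AI use: approved apps
# plus a basic scan for confidential data in outgoing prompts.
import re

APPROVED_APPS = {"corp-copilot"}  # assumption: the sanctioned tools

SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}


def review_prompt(app: str, prompt: str) -> list:
    """Return the policy violations for a prompt sent to a generative-AI app."""
    issues = []
    if app not in APPROVED_APPS:
        issues.append(f"app '{app}' is not on the approved list")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            issues.append(f"prompt may contain a {label}")
    return issues


print(review_prompt("corp-copilot", "Summarise this meeting"))
print(review_prompt("random-chatbot", "Pay IBAN ES7921000813610123456789"))
```

Even a minimal check like this makes the policy explicit; combined with awareness training, it is the user-side counterpart of protecting the models themselves.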

Conclusion

Artificial intelligence is unstoppable and is rapidly transforming the world, but its vulnerabilities increase the risk of attack, so protecting it must be addressed before an organisation embarks on its use or implementation. Both models and data must be safeguarded: establishing AI usage policies, protecting AI from the moment of its development and monitoring its use continuously are the keys to a resilient path to AI.
