Telefónica’s challenge for the responsible use of AI

Telefónica highlights its commitment to the responsible use of Artificial Intelligence in its new challenge.

Emerging technologies such as Machine Learning and Artificial Intelligence (AI) are increasingly used across society, and their application is expected to grow exponentially in the coming years.

AI technologies trained on massive amounts of data can learn patterns and eventually make autonomous decisions. However, the development of advanced AI systems would not be possible without access to huge data sets (such as voice recordings, photos, etc.). Since data determines how AI and Machine Learning operate, how do we ensure that the data is not biased and does not end up replicating unfair behaviours observed in real life? The debate about the ethical use of data is only just beginning.

At Telefónica, we believe that the use of Artificial Intelligence and algorithms should be human-centric, ethical and free of undue discrimination. As we stated in our Digital Manifesto, in the same way that the environmental impact of production is today seen as a corporate responsibility, businesses will be held accountable for the impact of AI on societies.

Therefore, with the purpose of advancing from AI principles to responsible AI, we have organized a Challenge for the responsible use of AI. The objective is twofold:

1. To find out whether the concerns are limited to a few highly visible cases, or whether they are potentially happening on a much larger scale. Some topics of interest for this objective include, but are not limited to:

  • Detect, explain and visualize cases of unfair discrimination due to improper use or implementation of AI systems.
  • Identify and visualize Open Data sets that contain undesired bias potentially affecting protected groups.
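As a rough illustration of what such a check could involve, the sketch below computes the positive-outcome rate per group of a sensitive attribute in a tabular data set and reports the gap between groups. The column names and data are hypothetical, and a large gap is only a signal worth investigating, not proof of unfair discrimination.

```python
import pandas as pd

# Hypothetical tabular data set with a sensitive attribute and a binary outcome.
df = pd.DataFrame({
    "gender":  ["F", "F", "F", "F", "M", "M", "M", "M"],
    "outcome": [0,   1,   0,   0,   1,   1,   1,   0],
})

# Positive-outcome rate per group of the sensitive attribute.
rates = df.groupby("gender")["outcome"].mean()
print(rates)

# A simple demographic-parity gap; a large value flags the data set
# for closer inspection by a human reviewer.
print("parity gap:", rates.max() - rates.min())
```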

 

2. To develop tools and/or algorithms that help detect and mitigate the concerns, for instance:

  • Tools for explaining the conclusions reached by an AI algorithm, helping to mitigate the fear of “unexplainable” AI.
  • Tools to detect bias in data sets related to sensitive data (impacting protected groups).
  • Tools to detect correlations in data sets between normal variables and sensitive variables.
  • Tools to re-identify anonymized data in public data sets.
  • Tools to detect unbalanced outcomes of algorithms within subgroups of the population regarding false positives and false negatives (see the sketch after this list).
  • Methods & tools for providing an “ethical” score of data sets.
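To make the point about unbalanced outcomes concrete, here is a minimal sketch of the subgroup check mentioned above; the labels, predictions and group assignments are hypothetical. It compares false positive and false negative rates per subgroup, which is the kind of imbalance such a tool would surface.

```python
import numpy as np

# Hypothetical ground-truth labels, model predictions and subgroup membership.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for binary labels."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    negatives = np.sum(y_true == 0)
    positives = np.sum(y_true == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Report error rates per subgroup; a persistent gap between groups
# indicates an unbalanced outcome that deserves investigation.
for g in np.unique(group):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```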

 

The challenge is open from 14 November to 15 December 2018, and winners will be notified before 31 December.

Register now and do not miss the opportunity to share your ideas on how to achieve a more responsible use of AI!

 

 
