
Can AI replace human emotional intelligence?

There are aspects that have not been taken into account in the advance of AI. The inexorable logic of algorithms is supposed to guarantee us a life without errors, but programmers themselves are beginning to sound the alarm: artificial intelligence is apparently unable to deliver everything it promises us. Is it really as intelligent as we think? What are the limits of artificial intelligence?

Whether or not Artificial Intelligence will be able to replace Human Emotional Intelligence

Graciela Ares

Artificial intelligence is considered the key technology of the future. It facilitates the work of doctors, psychologists and police officers, and could replace car drivers as well as players of any kind of networked game, reaching into every area of everyday life. Artificial intelligence could help us make the best decisions, whether in a game of chess or when we are driving and looking for a specific address. It will also be able to guide us along a shorter route with less traffic, suggest one dating candidate or another, and so on.

A DW Documentary video (DW, the multimedia voice of Germany) studies in depth the positive aspects of AI for humanity and also those that work against our development as humans, along with differing views on technological progress, its moral parameters and the methods AI uses to pursue an error-free model of the human. From that analysis, the following conclusions emerge:

Its origin

According to Antonio Casilli, sociologist at the Telecom Paris School of Technology, “we were initially conquered by automation, which sought to reduce physical effort, i.e. the amount of force used”. Mills, for example, were considered an automatic process for centuries, but over time this logic began to be applied to immaterial or intellectual work as well. Today, we are facing a new form of automation, which we loosely call artificial intelligence.

Development began to accelerate at a dizzying pace in the 1950s, and artificial intelligence is now expected to optimise our lives: to drive cars, improve education, guide us towards healthy nutrition, make the best medical diagnoses, or find, in an intelligent chat, the best way to comfort us.

These advances remained limited until the early 2000s, when powerful new computers began to process huge amounts of data.

According to Eric Sadin, a philosopher, “these systems aim to digitise all of reality with the goal of knowing everything in real time”. We were captivated by this aim as it had something of the “Divine” about it, regardless of the fact that much of reality escapes digital reduction.

Cynthia Rudin, a researcher in computer science at Duke University, says that there have been great advances in image recognition thanks to deep learning: multi-layer learning. It sparked a fever in the field of artificial intelligence. Image recognition systems used to make a mistake roughly one time in three, but in 2012 a machine-learning-based technology was able to reduce the error rate to 15 percent.

In classical artificial intelligence based on symbolic representations, the machine had to be fed with knowledge. Then it turned out that deep learning works much better, because the computer is left to work out for itself how to process the information, rather than being told.

Deep learning has its origins in cybernetics, an area of information science in which computer engineers draw inspiration from neuroscience. With this new method, programmers no longer describe to the machine what a face looks like; the machine learns it for itself.

The system resembles a network of connections modelled on the neurons in our brain. This artificial neural network can take on a variety of configurations, strengthening or attenuating the signals between the connections in order to arrive at an output signal. This provides a final answer to a question such as: is there a face in the picture? In other words, from a series of photos of a person’s face and its parts, such as the forehead, nose, chin, lips or eyebrows, the system finally interprets that it is a face; chained together, these intelligent systems can already identify, for example, a “human face”. Of course, the model imitates the examples given to it by humans.
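To make this layered flow of signals a little more concrete, here is a minimal sketch in Python with NumPy. The weights are random placeholders and the “image” is invented, so it only illustrates the structure described above: pixel values pass through a hidden layer of strengthened or attenuated connections and end in a single yes/no output.

```python
import numpy as np

# A toy feed-forward network: pixels -> hidden layer -> one "face / no face" score.
# The weights here are random placeholders; a real system learns them from labelled images.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Pretend the input is a tiny 8x8 grey-scale image flattened into 64 pixel values.
image = rng.random(64)

# One hidden layer of 16 units and a single output unit.
w_hidden = rng.normal(size=(64, 16))
w_output = rng.normal(size=(16, 1))

hidden = sigmoid(image @ w_hidden)        # signals strengthened or attenuated by the weights
face_score = sigmoid(hidden @ w_output)   # final output signal between 0 and 1

print("Is there a face in the picture?", "yes" if face_score.item() > 0.5 else "no")
```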

Today, this model can be found in cameras and mobile phones that focus directly on faces, as well as in video-surveillance rooms and in postcode or number-plate readers. Researchers at the University of Michigan set out to study how effective these systems remain when the appearance of what is being recognised is changed slightly.

In radiology clinics, where automatic reader technology is being tested, the artificial intelligence does not yet assist the doctors; rather, it must be supervised and further trained by them. These are very fragile systems, useful only with images that are very close to the training data.

If the artificial intelligence system was trained on a specific group of patients, i.e. on very specific data from a certain group under study, it may not work in a different environment. Humans think more broadly: we are able to take into account things that are not necessarily in a database, we can think about how a specific model works and decide whether or not to trust it. Artificial intelligence cannot do that.
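A toy sketch of that fragility, using Python with scikit-learn and entirely invented numbers: a model fitted on one synthetic “patient group” scores well on data that resembles its training set, but poorly on a second population whose measurements are shifted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# All values are synthetic and for illustration only.
rng = np.random.default_rng(42)

def make_group(offset, n=500):
    """Two classes separated along one synthetic measurement, shifted by `offset`."""
    healthy = rng.normal(loc=0.0 + offset, scale=1.0, size=(n, 1))
    ill = rng.normal(loc=3.0 + offset, scale=1.0, size=(n, 1))
    X = np.vstack([healthy, ill])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_group(offset=0.0)   # the group the model was trained on
X_other, y_other = make_group(offset=4.0)   # a different population, same labels

model = LogisticRegression().fit(X_train, y_train)

print("accuracy on the training population:", model.score(X_train, y_train))
print("accuracy on the unseen population:  ", model.score(X_other, y_other))
```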

One controversy is that we have started to humanise this deep learning system on which AI is based. It may describe what happens in an image, but we forget that the model understands only what it interprets in an image or text, and that it does not feel emotionally what it can see there. The way in which the model relates an image to a text is a completely different procedure from a human looking at an image and describing it with emotional words or with some memory attached to it.

The world knowledge of these systems is incomplete by definition, lacking bodily experience, visual experience, word associations and what words refer to in the real world: the purely human interpretations. As long as this facet cannot be included, these systems will remain deficient.

In human beings, experience is what gives things meaning: the effort we make when we bite into an apple, the wrinkling of our nose when we taste an astringent citrus juice, the sensation we feel when our brain makes an association or searches its memory. When we eat a slice of lemon, or merely imagine doing so without tasting any lemon at all, our brain still retrieves from memory what we would feel in our mouth, and it is instantly reflected as a sensation of that moment.

For a computer system, on the other hand, all of this is just a sequence of pixels linked to textual information.

What I want to emphasise, looking at the neurophysiology of the brain and how it works, is that our exaggerated desire for perfection freezes our thinking and stops us seeing the potential of learning from mistakes. There is another view, centred on a fundamental value that defines us as human beings: freedom of thought. It is precisely there that we diminish ourselves, to the extent that we delegate our decisions to a computer system. The ultimate goal is to eradicate any error, and to do so these systems must take control.

If our brains were to switch to a purely logical system of thought, the way AI’s interpretative systems work, we would lose our mental flexibility. It would be very boring if we were only logical, calculating and totally free of error. After all, we do not live in a static world. Change is what drives us forward, and we adapt to it constantly.

As soon as the brain makes a mistake, it not only tries to correct it but also uses it productively. It is precisely because a mistake offers the potential for improvement that this flawed system has prevailed in our evolution as humans. This is the price we pay for our ability to remain flexible. The art is not to avoid mistakes: anyone who tries to avoid mistakes will become as dull as a computer and, what is worse, replaceable. Sooner or later, algorithms will be able to avoid mistakes and perform an action efficiently and without error. But recognising that an error can have a purpose is an ability only humans have. We now understand that the brain almost systematically incorporates errors in order to examine them and thus alter its behaviour. And this is a very important lesson to learn: to err is human and, for the brain, extremely useful.

If we never made a mistake, we could never change. We would not only be incapable of learning and very boring; sooner or later, we would also be replaced by more efficient computers. The unique feature of human thinking lies precisely in the fact that it is neither accurate nor perfect. Our error-prone thinking is the only thing that makes us superior to computers. Essentially, our “weakness” in thinking is really our greatest mental secret weapon.

From the above analysis, and speaking of emotions, our “human weakness”, we might also ask: how does Artificial Intelligence interpret the emotions in an image or a message, given that they give humans away in the micro-gestures that show on our faces?

Despite their rudimentary perception systems, advances in automatic image recognition and AI interpretative systems have deepened the dream that machines can develop emotions and even help us to recognise the most hidden feelings and emotions of our fellow humans. They can already distinguish more than twenty emotional intensities.

Hence these questions: will Artificial Intelligence finally be able to tell us how the Mona Lisa felt about Da Vinci, from an image that represents the work?

How does one develop such an automatic emotion detector?

The first step is to make a list of emotions from the endless variety of our moods. Following research in New Guinea by Paul Ekman, an American psychologist and pioneer in the study of emotions and their facial expression, it became clear that humanity shares six universal feelings that can inevitably be read on our faces: joy and sadness, disgust and anger, surprise and fear.

Ekman’s theory studies emotions through their micro-expressions, affirming that what people truly feel is evidenced on their faces.

Although this classification is very controversial among scientists, it often serves as the basis computer scientists use for emotion recognition, precisely because of its simplicity.

These six universal emotions are the starting point. The second step is to get human classifiers to assign thousands of faces to these six categories, thus creating the training data for the machines. Finally, machine learning begins and continues until the computer system produces the same results as the human classifiers. Once the best configuration is found, these systems become a tool that programmers use as universal emotion detectors.
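As a rough illustration of that pipeline, the sketch below uses Python with scikit-learn and purely synthetic “face descriptors” in place of real photographs: each of Ekman’s six categories gets its own cluster of feature vectors labelled as if by human annotators, and a small neural network is trained until it largely reproduces those labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["joy", "sadness", "disgust", "anger", "surprise", "fear"]

rng = np.random.default_rng(7)

# Invented "face descriptors": one cluster of feature vectors per emotion,
# as if thousands of labelled photos had been reduced to numbers.
X = np.vstack([rng.normal(loc=i, scale=0.6, size=(200, 8)) for i in range(len(EMOTIONS))])
y = np.repeat(np.arange(len(EMOTIONS)), 200)   # the human classifiers' labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small neural network trained to reproduce the human labels.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print("agreement with the human labels on held-out faces:", model.score(X_test, y_test))
print("predicted emotion for one unseen face:", EMOTIONS[model.predict(X_test[:1])[0]])
```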

These advances have fed the idea of building lie detectors that could evaluate a suspect and determine whether or not he is lying, which in turn would help decide whether he is released or imprisoned.

In artificial intelligence, emotion analysis is done by combining micro-expressions with tone of voice. And in studies of images and faces in works of art, the conclusion reached was: “They don’t feel anything”.
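The article does not detail how the two signals are combined; the fragment below is just one very simplified way to picture it, a late fusion of per-emotion scores, with all probability values invented for the example.

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "disgust", "anger", "surprise", "fear"]

# Hypothetical per-emotion scores from the two channels mentioned above.
face_scores = np.array([0.10, 0.05, 0.05, 0.55, 0.15, 0.10])   # from micro-expressions
voice_scores = np.array([0.05, 0.10, 0.05, 0.60, 0.10, 0.10])  # from tone of voice

# Combine the two estimates by simple averaging (one possible fusion strategy).
combined = (face_scores + voice_scores) / 2.0
print("combined estimate:", EMOTIONS[int(np.argmax(combined))])
```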

The AI has no taste buds, so it has no idea of the delicious taste of a lemon cake that revives the memory of an aunt or grandmother. Nor has it felt adrenaline in the body and the sensation it causes in a game well played. It does not know what it means to get teary-eyed with emotion at a film, or to feel our nose running from sheer excitement. It is not afraid of anything, its skin does not crawl, it knows neither physical pain nor pleasure. It has no opinion of abstract art and no repressed trauma; therefore, it has no emotion that it wants to express.

We can see that the computer industry dreams of highlighting in humanoid robots two aspects that distinguish human capacities: human logic versus computer logic, the latter of which still has undeveloped facets.

The real talent of these neural networks is not so much to mimic human qualities as to assimilate huge databases, classifying, analysing and deducing correlations. In this way they help humans to understand complex problems, such as quantum magnetic fields, or play an important role in the pharmaceutical field by sifting through near-infinite combinations of chemicals for a given disease.

A major disadvantage, from a social point of view, comes from social psychology: it probes our stereotypical behaviours, which then become the basis of computer systems intended to guide us in personal and collective decisions.

In the kinds of systems AI works with, there are always biased results. These biases may be due, for example, to biases in the training data: that data was assembled by human beings, who may bring biased judgement to the analysis of the data they provide. The bias therefore leaves its mark on the result.
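Here is a small, entirely synthetic sketch in Python with scikit-learn of how a bias baked into the training labels resurfaces in the model’s decisions: two groups with identical underlying “merit”, but historical labels that favoured one of them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Everything below is invented data for illustration.
rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, size=n)   # a protected attribute (0 or 1)
merit = rng.normal(size=n)           # the quantity we would like to predict, same for both groups

# Biased historical decisions: group 1 needed a higher merit score to get a "yes".
label = (merit > np.where(group == 0, 0.0, 1.0)).astype(int)

X = np.column_stack([merit, group])  # the attribute leaks into the features
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The model reproduces the historical bias: far fewer positive decisions for group 1.
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"share of positive decisions for group {g}: {rate:.2f}")
```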

The social, psychological and moral context will remain incomprehensible to computers. And since these systems lack fuller criteria, humans often ask themselves: what is Artificial Intelligence actually basing its decisions on?

Statistics show that such systems lead to racial and gender discrimination. If the penal system is racist, the data will be racially biased too.

The advance of Artificial Intelligence fits perfectly with our fundamental laziness, because it offers us the convenience of taking over part of our daily tasks. Yet, seen from the opposite angle, one of humanity’s greatest challenges today is to take charge of its individual and collective destiny, and the advance of AI guides us to do precisely the opposite, in many social spheres.

Artificial intelligences never reach a level of total performance that makes humans dispensable, giving rise to a new profession: that of human assistant to machines in distress. This new form of human labour behind so-called intelligent systems was invented by the technology giant Amazon, where people do by hand, so to speak, the work necessary for the algorithms to function. The risk posed by algorithmic decision-making is that it is not known how the AI’s decisions were made; only their results are visible.

We are facing a paradoxical situation: on the one hand, people are asked to do what robots or automatic processes are not capable of doing; on the other, employees face jobs whose margin of action and autonomy is smaller than before, controlled by an artificial intelligence that tracks objective parameters and metrics.

These interpretative systems serve to drive optimisation and productivity without allowing those caught up in the dynamic to negotiate. People’s work is subjected to the control of machines that optimise the flow of products and set the rhythm of the processes, and people are reduced to functioning as flesh-and-blood robots.

How far will AI advance, and up to what point will people want to collaborate with it, submitting to these algorithms and performing as human machines?
