Affective Computing II – teaching computers human emotions
Posted 28 Mar 2019
Lifestyle apps, fitness trackers, or pedometers – self-tracking is part of many people’s lives. As self-evident as this seems today, one should not forget that it was completely different just a few years ago. What is commonplace now was an innovative trend five years ago – one that many regarded with skepticism. Self-tracking means keeping an eye on yourself and conscientiously transferring all of that data into apps. Wouldn’t the next logical step be to hand this task over to computers – or robots? Which brings us to affective computing.
However, there is still one hurdle to overcome. As humans, we tend to confide in others only if we really trust them – and trust is built through certain experiences and emotions. We once needed emotions to survive; will artificial intelligence (AI) need them to survive, too? And if so, what exactly does this mean?
Gestures and facial expressions are an essential part of our communication: our countenance reveals whether we are happy or sad. But it is not only children who learn to match certain expressions to corresponding emotions – artificial intelligence is meant to do the same in the future. According to estimates, around 65 to 90 percent of our communication is non-verbal. Most of the time, we are not even aware of these signals, and we cannot control them. This is especially true for micro-expressions – fleeting movements of the face that appear for only a fraction of a second. This kind of facial expression is beyond conscious control and can hardly be faked – a fact that TV series like Lie to Me revolve around, where lies are exposed by recognizing and interpreting micro-expressions. Through affective computing, artificial intelligence is learning to interpret such emotions concretely.
While untrained people often miss such signals, cameras can capture every detail of them. Beyond that, they can interpret them – if they are paired with a self-learning artificial intelligence. Affective computing means training computers to scan and analyze faces for emotional expressions. The machines are initially taught six basic emotions: sadness, happiness, surprise, fear, anger, and disgust. Over time, more than 20 additional measured variables will be added.
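To make the idea concrete, here is a toy sketch of scoring the six basic emotions from a handful of facial-geometry features. The feature names and weights are purely hypothetical illustrations; a real system would learn them from millions of labeled face images rather than hand-set values.

```python
# Toy sketch: scoring six basic emotions from hypothetical facial features.
# Weights are illustrative only, not a trained model.

WEIGHTS = {
    "happiness": {"mouth_curve":  1.0, "brow_raise":  0.1, "eye_open":  0.2},
    "sadness":   {"mouth_curve": -0.8, "brow_raise":  0.2, "eye_open": -0.3},
    "surprise":  {"mouth_curve":  0.1, "brow_raise":  1.0, "eye_open":  0.9},
    "fear":      {"mouth_curve": -0.3, "brow_raise":  0.8, "eye_open":  0.7},
    "anger":     {"mouth_curve": -0.5, "brow_raise": -0.9, "eye_open":  0.4},
    "disgust":   {"mouth_curve": -0.6, "brow_raise": -0.4, "eye_open": -0.2},
}

def classify(features):
    """Return the best-scoring basic emotion for a feature vector."""
    scores = {
        emotion: sum(w * features.get(name, 0.0) for name, w in ws.items())
        for emotion, ws in WEIGHTS.items()
    }
    return max(scores, key=scores.get)

# A strong upward mouth curve scores highest for happiness here.
print(classify({"mouth_curve": 0.9, "brow_raise": 0.1, "eye_open": 0.3}))
```

The point of the sketch is the shape of the problem – measurable facial signals in, a discrete emotion label out – not the particular numbers.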
As we explained in our article "When computers learn to love", researchers have been training artificial intelligence to interpret facial expressions and their meaning for some time. Researchers at MIT have also been working on affective computing for several years, resulting in software that makes the relationship between man and machine feel more human. To train the AI, the facial expressions of more than five million people have already been recorded and evaluated.
For affective computing to work, the computer must be able to interpret facial expressions correctly. A smile is usually read as joy, but it can also express glee or shame – or it may simply be an artificial, insincere smile. To draw the right conclusions, other factors are therefore taken into account as well.
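A small sketch of this disambiguation step, under assumed inputs: whether the eyes crinkle along with the mouth (a marker of genuine, so-called Duchenne smiles) and how positive the speaker's voice sounds. Both signals and all thresholds are hypothetical stand-ins for what a real multimodal system would measure.

```python
# Sketch: disambiguating a detected smile with extra cues.
# eye_crinkle and voice_valence are hypothetical normalized inputs.

def interpret_smile(mouth_smile, eye_crinkle, voice_valence):
    """Classify a smile as genuine joy, a social smile, or a mismatch
    (e.g. smiling while sounding negative). All inputs range roughly
    -1.0..1.0 for valence and 0.0..1.0 for the face features."""
    if mouth_smile < 0.5:
        return "no smile"
    if voice_valence < 0.0:
        return "mismatch"       # smiling face, negative-sounding voice
    if eye_crinkle > 0.5:
        return "genuine joy"    # the smile reaches the eyes
    return "social smile"       # mouth only, likely polite or artificial

print(interpret_smile(0.9, 0.8, 0.6))   # → genuine joy
print(interpret_smile(0.9, 0.1, 0.4))   # → social smile
```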
The voice, too, carries many characteristics that allow conclusions about our emotions. The platform Beyond Verbal can sort emotions into different categories – including anger, sadness, happiness, and friendliness – based on the intonation of the voice. Knowing that, it is not surprising that systems like Siri, Alexa, or Cortana may soon be able to read our feelings from our voice and learn to react accordingly.
Recognizing and interpreting facial expressions holds great appeal for the advertising industry and for market research. Knowing which emotions a commercial triggers, whether it raises or lowers the willingness to buy a product, or whether viewers recognize a brand at all is valuable information for advertisers.
But there are also interesting developments in the home network industry. The more Internet-enabled devices are installed in our homes, the more our interaction with them will change. The next step might be computers that use affective computing to respond to our mood – for example, by suggesting suitable music or films to help us end the day in a relaxed way.