AI, NLG, and Machine Learning

Emotion Recognition Bots Are the Future of AI

The advent of bots that can analyze facial expressions, along with popular applications that potentially threaten user privacy, leads to new conversations around the ethics of utilizing faces in artificial intelligence (AI) technology.

By Adrienne Morgan
November 25, 2019

Seeing the faces behind FaceApp

Earlier this year, an app called FaceApp started making headlines. Allegedly, it could access private metadata on users' phones without their knowledge and could even upload entire camera rolls to remote servers. The story caused quite a stir and opened a wider conversation about what users actually agree to when they click Agree on the Terms & Conditions while downloading a new app.

FaceApp has received hundreds of thousands of downloads.

FaceApp was mainly a photo-editing app with the ability to take a photo of a user and make it look older or younger. Although this concept certainly isn’t new, FaceApp seemed to be taking it to another level, with the photos looking so accurate and true to life that the internet was briefly swept up in a viral trend of doctored photo posts across social media platforms. Eventually, the controversy around potential privacy breaches put a damper on the trend, although further research determined that FaceApp wasn’t stealing user information, as originally feared. However, the story led many to start considering a bigger question: In the new age of facial recognition and artificial intelligence, what exactly can our faces be used for?

Recognizing that the future is now

Facial recognition technology is already in use in countries all over the world. You might not have known it, but a security camera may have recorded you during your morning commute, adding your face to a database of tens of thousands of people who likely didn't know they were being recorded either. Even though this might sound a bit 1984-esque, it's important to remember that the technology is still in its early stages.

There was a time when posting statuses online to describe your current mood seemed odd and off-putting; now, it’s part of our daily lives. That being said, there have not yet been any major rules or regulations applied to the facial recognition technology industry. This has led to an outcry from groups that believe the information being captured could be used for nefarious purposes.

Identifying criminals

Facial recognition technology has a number of potential benefits. Imagine, for example, that a security camera captures footage of a crime taking place. Using facial recognition technology, authorities are able to identify the criminal, an undertaking that might previously have been impossible because of a lack of evidence or other limitations. Applications like these lead the argument for widespread implementation of the technology.

Facial recognition technology can be used to stop crime before it happens.

Facial recognition can also add an extra layer of security to everything from mobile phones to bank accounts, especially since faces are much harder to hack than passwords. In addition, authorities have argued that facial recognition technology can be used to stop potential terrorist attacks before they happen by cross-referencing the faces of offenders caught on camera against databases of known cell members. Interpol, for example, has implemented facial recognition technology to identify terrorists who use fake names to cross national borders and infiltrate foreign communities. The undertaking, labeled Project First (for Facial, Imaging, Recognition, Searching, and Tracking), has opened an information pipeline between countries that uses biometric data, such as faces or fingerprints, to identify terrorists and apprehend them before an act of terrorism is carried out.
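At a high level, cross-referencing a face against a database usually means comparing numeric "embeddings" (feature vectors produced by a trained model) rather than raw images. The sketch below illustrates that matching step only; the three-dimensional vectors, the watchlist names, and the similarity threshold are all invented for illustration and bear no relation to Interpol's actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return the best-scoring watchlist identity above the threshold, or None.

    `probe` is a face embedding; `watchlist` maps identity -> embedding.
    The threshold is an illustrative value, not an operational one.
    """
    best_name, best_score = None, threshold
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of "enrolled" embeddings. Real systems use 128- to 512-dim
# vectors produced by a neural network; these 3-dim vectors are stand-ins.
watchlist = {
    "suspect_a": [0.9, 0.1, 0.2],
    "suspect_b": [0.1, 0.8, 0.5],
}
```

The threshold is the policy lever here: set it too low and innocent people are flagged as matches; set it too high and genuine matches are missed. That trade-off is one reason critics call for regulation of these systems.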

Working with emotion recognition bots

There has also been the advent of emotion recognition bots, which can churn out an analysis of an individual's mental and emotional state based on their facial expression at any given moment. The process relies on two moving pieces that come together for the end result: computer vision techniques that detect faces and identify specific expressions, and machine learning models that analyze those expressions to reach a conclusion about the individual in question.
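The two moving pieces can be sketched as a toy pipeline. Everything below is invented for illustration: the "features," the emotion prototypes, and the `extract_features` stub stand in for a real vision model and a trained classifier.

```python
# Illustrative two-stage emotion recognition pipeline (not a real system).

EMOTION_PROTOTYPES = {
    # (brow_raise, lip_corner_pull, eye_openness) -- hypothetical features
    "happy":     (0.2, 0.9, 0.6),
    "sad":       (0.1, 0.1, 0.3),
    "surprised": (0.9, 0.3, 0.9),
}

def extract_features(image):
    """Stage 1, the computer vision stand-in: turn an 'image' into features.

    Here `image` is already a feature tuple; a real system would run a
    face detector and a facial-landmark model on actual pixels.
    """
    return image

def classify_emotion(features):
    """Stage 2, the machine learning stand-in: nearest-prototype classification."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(EMOTION_PROTOTYPES,
               key=lambda e: distance(features, EMOTION_PROTOTYPES[e]))

def analyze(image):
    """Full pipeline: image -> features -> emotion label."""
    return classify_emotion(extract_features(image))
```

Even in this toy form, the design exposes the core weakness critics point to: the system can only ever output one of its predefined labels, however ambiguous the face in front of it actually is.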

To say the least, this technology paints in broad strokes. Emotion recognition bots can call upon a variety of factors to reach a final analysis of their subjects, including body language, vocal tone, facial tics, and even the direction in which an individual happens to be looking while being analyzed. Authorities argue that this technology could be used to identify potential criminals, but research has shown that a person's behavior or facial expression does not correlate directly with any penchant for social deviance.

For now, the most necessary step for the growth and acceptance of emotion recognition bots and facial scanning technology as a whole is the implementation of rules. A lack of proper legislation on the issue can lead to widespread abuse, and as our world enters an era of rapid development in the AI industry, it is more important than ever to ensure that standards are put in place to prevent infringement upon civil liberties.