AI vs. AI: The New Software Helping Protect against Facial Recognition
AI facial recognition is a powerful tool to identify people—for individuals, businesses, and government organizations. If you’re worried about firms getting unwanted access to your online photos, a new AI tool might be able to help.
By Adam Westin
April 8, 2021
AI facial recognition in everyday life once seemed like far-out science fiction, but today these tools are advancing faster than ever. As AI face detectors have become more widespread, their applications have grown far-reaching. Apple, for example, developed its proprietary Face ID technology for recent iPhones, allowing millions of users to lock their sensitive information behind their unique biometric signature. The same class of tools has also been used to keep society safe and secure, from tracking wanted criminals to helping stop human traffickers.
Although this software has proven valuable for security and convenience, it can also be a dangerous instrument in the wrong hands, particularly for anyone concerned about individual privacy. This has pushed developers to build new tools that serve as workarounds for users worried about the exposure of their personal information.
What is AI facial recognition?
For those unfamiliar with it, facial recognition software and its mechanics can appear quite daunting. Fortunately, with a few key pieces of information, anyone can come to understand the basics of these tools.
In a nutshell, AI facial recognition is best described as a system of processes for identifying and verifying a person's identity using only their facial features. A camera connected to an AI system captures and analyzes those features, then compares them to entries in a database to confirm whether the person in the photo matches a person on record. Under good conditions the matching is precise enough that some have described facial recognition software as having near-perfect accuracy.
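To make that matching step concrete, here is a minimal, hypothetical sketch of how an identification system might compare a "probe" face to a database. It assumes each face has already been reduced to a fixed-length numeric vector (an "embedding"); the names, vectors, and distance threshold below are invented purely for illustration:

```python
import numpy as np

# Hypothetical database: each person is represented by a 128-number
# embedding computed from a reference photo (random here, for illustration).
rng = np.random.default_rng(0)
database = {
    "alice": rng.normal(size=128),
    "bob": rng.normal(size=128),
    "carol": rng.normal(size=128),
}

def identify(probe: np.ndarray, threshold: float = 1.0) -> str | None:
    """Return the name of the closest database entry, or None if no
    entry is close enough to count as a match."""
    best_name, best_dist = None, float("inf")
    for name, reference in database.items():
        dist = np.linalg.norm(probe - reference)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A new photo of "alice" would yield an embedding close to her reference.
probe = database["alice"] + rng.normal(scale=0.01, size=128)
print(identify(probe))  # -> "alice"
```

Real systems search millions of entries with specialized indexes rather than a simple loop, but the core idea is the same: a match is just a small distance between two feature vectors.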
How does it work?
AI facial recognition tools rely on biometrics: the biological characteristics that distinguish one human being from another. The process starts with an AI face detector, an algorithm trained to recognize patterns in photos and videos and to locate human faces. Once a face is located, an algorithm transforms its image into a set of digital information that can be compared against an existing database.
This digital information is where the biometrics come in. Face detectors take measurements of facial features, from the spacing of the eyes to the contour of the lips, each one a marker that differentiates one human face from another. Remarkably, all of this can happen in a matter of seconds, even in dynamic and unstable environments.
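One popular open-source implementation of this pipeline is the face_recognition Python library, which detects faces and converts each one into a 128-measurement encoding that can be compared by distance. A minimal sketch (the image filenames are placeholders, and each photo is assumed to contain exactly one face):

```python
import face_recognition

# Load two photos (placeholder filenames) of a known and an unknown person.
known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown_person.jpg")

# Each encoding is a 128-dimensional vector of facial measurements;
# face_encodings raises no error but returns an empty list if no face is found.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# A small distance between encodings means "likely the same person".
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
print(f"distance={distance:.3f}, match={match}")
```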
How are businesses/government using these tools?
These incredibly powerful tools can have ramifications for our lives that many of us have yet to grasp. For example, Clearview AI, a small tech start-up, has developed an AI face detector backed by a database of more than 3 billion images. Users upload a photo of a person and, in return, see public photos of that person, along with links to where those photos originally appeared.
Even though little is publicly known about how the software works, it has already been used by law enforcement officers to help solve crimes ranging from identity theft and shoplifting to murder and child exploitation. And although these tools have been used for good, top tech experts are clear about the potentially dangerous ramifications. Amnesty International, the international human rights organization, has even launched a campaign to “ban dangerous facial recognition technology” for law enforcement due to its potential to violate the right to privacy and to threaten the rights of peaceful assembly and expression.
Introducing the Anonymizer
In the wake of these concerns about privacy and personal information on the internet, a number of AI tools have sprung up in response. One of them, developed by a start-up called Generated Media, lets users upload photos of themselves and, in return, receive a selection of AI-generated look-alikes who don't actually exist. The company describes the Anonymizer as “a useful way to showcase the utility of synthetic media,” and users have been clamoring for these fake photos for all sorts of applications.
How does it work?
The Anonymizer’s proprietary AI software was trained on tens of thousands of photos taken at the Generated Media studio. These photos are fed into a generative adversarial network (GAN), which creates completely new and unique photos by pitting two neural networks against each other: a generator that creates new samples, and a discriminator that decides whether they look real.
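Generated Media hasn't published its actual architecture, but the adversarial idea itself is standard. Below is a minimal, generic GAN training loop in PyTorch that learns to mimic a simple one-dimensional Gaussian instead of faces; the network sizes, learning rates, and data are illustrative stand-ins, not the Anonymizer's real code:

```python
import torch
import torch.nn as nn

# Stand-in for "real" training data: samples from a Gaussian
# (a real face GAN would use batches of photos instead).
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: turns random noise into fake samples.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()  # freeze generator here
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should resemble the "real" distribution.
print(generator(torch.randn(5, 8)).detach().squeeze())
```

In a production system like the one described, the same adversarial dynamic would run over high-resolution face photos with far larger convolutional networks, but the generator-versus-discriminator training loop is the core of the technique.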
It’s important to note that Generated Media has a few guidelines for how these photos can be used. The service is completely free for personal use, but the photos cannot be used for any commercial purpose, to impersonate another individual, or for any illegal activity. Once you have the photos, though, you’re free to use them on your own social media and anywhere else online where you need a profile picture but want to protect your identity and individual privacy.
The Anonymizer is one of many exciting developments currently taking place in the field of AI facial recognition and the companies building on it. Be sure to check out more on AI at discover.bot, including this recent article, The Ethics of AI.