
However, the good news is that a growing number of tools now let you stop facial recognition systems from homing in on your personal photos. Take a look at the new digital pushback.
FIGHTING BACK
Uploading personal photos to the internet can feel like letting go. Who else will have access to them, what will they do with them — and which machine-learning algorithms will they help train?
The company Clearview has already supplied US law enforcement agencies with a facial recognition tool trained on photos of millions of people scraped from the public web.
But that was likely just the start.
Anyone with basic coding skills can now develop facial recognition software, meaning there is more potential than ever to abuse the technology, in everything from sexual harassment and racial discrimination to political oppression and religious persecution (as currently seen in China).
A number of AI researchers are pushing back and developing ways to make sure AIs can’t learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference.
“I don’t like people taking things from me that they’re not supposed to have,” says Emily Wenger at the University of Chicago, who developed one of the first tools to do this with her colleagues last summer. “I guess a lot of us had a similar idea at the same time.”
Data poisoning is one of the most widely recommended ways to push back against this technology online. Things like deleting the data that companies hold on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models.
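To see why polluted data hurts, here is a minimal sketch in Python (using scikit-learn; the toy data set, the simple classifier and the number of fake examples are illustrative assumptions, not any company’s real pipeline). Fake, wrongly labelled examples are injected into a training set, and the model trained on the polluted set comes out noticeably less accurate than one trained on clean data.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A toy "photo database": two identity classes, split into train and test sets.
X, y = make_blobs(n_samples=2000, centers=2, cluster_std=3.0, random_state=0)
X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

clean_model = LogisticRegression().fit(X_train, y_train)

# Pollute the training set with fake examples: points that look like class 0
# but carry the wrong label, dragging the learned decision boundary off course.
fakes = X_train[y_train == 0][:500] + rng.normal(0, 0.5, size=(500, 2))
X_poisoned = np.vstack([X_train, fakes])
y_poisoned = np.concatenate([y_train, np.ones(500, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

print("accuracy with clean data:   ", clean_model.score(X_test, y_test))
print("accuracy with polluted data:", poisoned_model.score(X_test, y_test))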
“This technology can be used as a key by an individual to lock their data,” says Daniel Ma at Deakin University.
“It’s a new frontline defence for protecting people’s digital rights in the age of AI.”
But these efforts typically require collective action, with hundreds or thousands of people participating, to make a real impact on the companies. The new tools, by contrast, work at the level of the individual.
Beyond your own personal efforts, talented individuals are now developing applications to help citizens fight back against the machine-learning capabilities of big tech.
HIDING IN PLAIN SIGHT
A variety of new online tools are available that make tiny changes to an image, mostly imperceptible to the human eye, to throw off an AI and cause it to misidentify who or what it sees in a photo.
The technique is essentially a kind of adversarial attack, in which small alterations to input data can force deep-learning models to make big mistakes.
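For the curious, here is a minimal sketch of that idea in Python with PyTorch. It is not the actual method used by any of these tools; the tiny untrained network, the random image and the identity labels are stand-ins, and the perturbation is a single signed-gradient step (the classic fast gradient sign method) that nudges every pixel by a barely visible amount in the direction that most confuses the model.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a face recognition network (untrained here, 10 hypothetical identities).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64)        # stand-in for a selfie, pixel values in [0, 1]
label = model(image).argmax(dim=1)      # the identity the model currently assigns

# Ask how the loss changes with respect to each pixel of the image itself.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Move every pixel by at most epsilon (about 1/128 of the brightness range) in
# the direction that increases the loss; against a trained recogniser this kind
# of tiny change can flip the predicted identity.
epsilon = 2 / 255
cloaked = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", label.item())
print("prediction after: ", model(cloaked).argmax(dim=1).item())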
Fawkes is one example of this.
Give this program a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos.
Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images looking unchanged to the human eye.
Wenger and her colleagues tested their tool against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++, developed by the Chinese company Megvii Technology.
In a small experiment with a data set of 50 images, Fawkes was 100% effective against all of them, preventing models trained on cloaked images of people from later recognising fresh images of those same people. The doctored training images had stopped the tools from forming an accurate representation of those people’s faces.
Fawkes has already been downloaded nearly half a million times from the project website.
One user has also built an online version, making it even easier for people to use.
There’s not yet a phone app, but there’s nothing stopping somebody from making one.
Researchers are also looking ahead to what will be needed as this technology advances further.
BEATING ADVANCING SYSTEMS
Fawkes may keep a new facial recognition system from recognising you — the next Clearview, say.
But it won’t sabotage existing systems that have been trained on your unprotected images already.
The tech is improving all the time, however. Wenger thinks that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams at ICLR this week, might address this issue.
Called LowKey, the tool expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, which also fools pre-trained commercial models.
Like Fawkes, LowKey is also available online.
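To give a feel for what a stronger kind of adversarial attack can mean, here is a hedged sketch that iterates the single gradient step from the earlier example, repeatedly nudging the pixels and then projecting the total change back into a small budget (a projected-gradient-style attack). It is not LowKey’s actual algorithm, and the small untrained network and sizes are again stand-ins.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))   # stand-in recogniser
model.eval()

image = torch.rand(1, 3, 64, 64)          # stand-in selfie, pixel values in [0, 1]
label = model(image).argmax(dim=1)        # identity the model currently assigns
epsilon, step_size, iterations = 4 / 255, 1 / 255, 10

perturbed = image.clone()
for _ in range(iterations):
    perturbed.requires_grad_(True)
    loss = F.cross_entropy(model(perturbed), label)
    loss.backward()
    with torch.no_grad():
        # Small step that increases the loss, then project the total change
        # back inside the epsilon budget around the original image.
        perturbed = perturbed + step_size * perturbed.grad.sign()
        perturbed = image + (perturbed - image).clamp(-epsilon, epsilon)
        perturbed = perturbed.clamp(0, 1)

print("prediction before:", label.item())
print("prediction after: ", model(perturbed).argmax(dim=1).item())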
Ma and his colleagues have added an even bigger twist. Their approach, which turns images into what they call unlearnable examples, effectively makes an AI ignore your selfies entirely.
Unlearnable examples are not based on adversarial attacks. Instead of introducing changes that force an AI to make a mistake, Ma’s team adds tiny changes that trick an AI into ignoring an image during training. When the trained model is later shown that image, its evaluation of what’s in it will be no better than a random guess.
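A rough sketch of that idea (again in PyTorch, and simplified from the published description, which alternates the noise step with ordinary training of the model): instead of searching for a perturbation that raises the model’s error, as an adversarial attack does, we search for one that lowers it, so the image looks already learned and gives the model almost nothing to pick up. The tiny network, the label and the noise budget are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # stand-in network
image = torch.rand(1, 3, 32, 32)        # the selfie to protect, pixel values in [0, 1]
label = torch.tensor([3])               # its hypothetical identity label
epsilon = 8 / 255                       # keep the added noise visually negligible

delta = torch.zeros_like(image, requires_grad=True)
optimizer = torch.optim.SGD([delta], lr=0.1)

# Error-minimising noise: step delta so the training loss on this example goes
# DOWN, then clip it back inside the small, invisible budget.
for _ in range(100):
    loss = F.cross_entropy(model((image + delta).clamp(0, 1)), label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)

unlearnable = (image + delta).clamp(0, 1).detach()
print("loss a trainer would see on the protected image:",
      F.cross_entropy(model(unlearnable), label).item())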
Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you. Both are great initiatives in the digital age.
Unlearnable examples may prove more effective than adversarial attacks, since they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognising them.
Both teams are also prepared for the possibility that Microsoft and others will change their algorithms in the future, or that an AI will simply see so many images from people using these programs that it learns to recognise them.
The teams are constantly releasing updates to their tools.
“This is another cat-and-mouse arms race,” Wenger says.
This is the story of the internet. Companies like Clearview are capitalising on what they perceive to be freely available data and using it to do whatever they want.
Regulation might help in the long run, but that won’t stop companies from exploiting loopholes.
There’s always going to be a disconnect between what is legally acceptable and what people actually want. Tools like Fawkes and LowKey fill that gap, along with other online protection measures.
Let’s give people some power to fight back against these lurking systems.
KEEP UP-TO-DATE
For more TOTT News, follow us for exclusive content:
Facebook — Facebook.com/TOTTNews
YouTube — YouTube.com/TOTTNews
Instagram — Instagram.com/TOTTNews
Twitter — Twitter.com/EthanTOTT
