The only positive reason to wear a mask?
The COVID-19 pandemic, through mass compliance and social engineering, has made wearing face masks a habitual practice. Calls are even being made right now in mid-2022 for a return of masks to many settings.
When masks were first introduced as a ‘preventative infection measure’ across the world, the wearing of these products actually hampered many facial recognition systems in use.
At the time, I mentioned that this could be the only positive to wearing masks on your face.
The question was asked: Could this be used to help the ‘undesirables’ blend into the emerging tech dystopia?
Instead of using the mask as a symbol of ritualistic submission, as most were doing out of fear, was there a way this could benefit free thinkers in certain settings as an empowering privacy measure?
Well, this optimism didn’t last long.
With time, the technology evolved and adapted to accurately identify individuals wearing medical and other forms of masks over their faces. It seems the Polyergus were not happy with this new development.
Deep learning-based facial recognition (FR) models have demonstrated state-of-the-art performance in recent years, even as wearing ‘protective’ face masks became commonplace.
‘That does it!’, I muttered at the time.
‘There are now officially no positive reasons to wear a mask.’
Continuing to breathe my oxygen clearly over the next two years, I thought nothing more of it, but still tried to keep up-to-date on the ways researchers are looking to challenge or beat facial recognition systems.
And, as we have learned time and time again, if there is a good enough incentive, people will always find new ways to achieve their intended goal.
Now, it seems, after all this time, the masks have got ahead of the cameras once again!
Earlier this month, researchers from Ben-Gurion University of the Negev and Tel Aviv University published a revised report on previous efforts to foil the technology.
In this particular case, the researchers decided to find out whether they could create a specific pattern/mask that would work against modern deep learning-based FR models.
Their attempt was successful: they used a “gradient-based optimisation process” to create a universal perturbation (and mask) that would falsely classify each wearer – no matter whether male or female – as an unknown identity, and would do so even when faced with different FR models.
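To give a sense of how such a gradient-based optimisation works, here is a minimal sketch in Python. It uses a toy linear classifier as a stand-in for a real FR model, and plain NumPy in place of a deep learning framework — the dimensions, model and numbers are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-recognition model: a linear classifier
# over 64-dimensional "face" vectors, with 5 enrolled identities.
n_ids, dim = 5, 64
W = rng.normal(size=(n_ids, dim))

# One "face" per identity, aligned with its own row of W so the
# clean model recognises everyone correctly.
faces = np.array([W[i] / np.linalg.norm(W[i]) for i in range(n_ids)])
labels = np.arange(n_ids)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return int(np.argmax(W @ x))

# Gradient-based optimisation of ONE universal perturbation:
# ascend the cross-entropy loss of the true identity, averaged over
# every face, so a single shared pattern misleads the model for all
# wearers at once.
delta = np.zeros(dim)
step, budget = 0.05, 1.5   # per-step size and per-pixel perturbation cap
for _ in range(200):
    grad = np.zeros(dim)
    for x, y in zip(faces, labels):
        p = softmax(W @ (x + delta))
        # d(cross-entropy)/d(input) for a linear model
        grad += W.T @ (p - np.eye(n_ids)[y])
    delta += step * np.sign(grad)            # sign-of-gradient ascent step
    delta = np.clip(delta, -budget, budget)  # keep the pattern bounded

clean_hits = sum(predict(x) == y for x, y in zip(faces, labels))
masked_hits = sum(predict(x + delta) == y for x, y in zip(faces, labels))
print(clean_hits, masked_hits)
```

In the researchers’ real attack, the optimisation runs against deep FR models and the resulting pattern is printed onto a physical face mask; the toy above only shows the shape of the idea — one shared perturbation, optimised over every face at once.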
Participants were asked to walk down a corridor while wearing various control masks, such as a blue surgical mask and masks printed with realistic human features (of the wearer’s sex and opposite sex).
A short video shows the process. In all scenarios, the participants’ faces were detected and recognised.
However, when wearing a mask with this new pattern, the cameras were stumped in their recognition efforts.
This mask works as intended whether printed on paper or fabric, researchers say.
But, even more importantly, the mask will not raise suspicion in our post-COVID ‘new normal’ world and can easily be applied or removed when the individual needs to blend into real-world scenarios.
“In this paper, we propose Adversarial Mask, a physical universal adversarial perturbation (UAP) against state-of-the-art FR models that is applied on face masks in the form of a carefully crafted pattern.
In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets. In addition, we validated our adversarial mask’s effectiveness in real-world experiments (CCTV use case) by printing the adversarial pattern on a fabric face mask.
In these experiments, the FR system was only able to identify 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with other evaluated masks).”
The pattern used looks a little like the lower half of Cinco de Mayo skull designs, but with bright colours against the skin tone. Pixels have been distorted on the design to cause the cameras more pain.
Cameras have developed to be able to recognise face paint, and even normal masks, while anything that completely covers your features will naturally set off their detectors as a ‘suspicious individual’.
Balaclavas and the like won’t work. You just look like an out-of-place criminal and will attract more attention.
What is needed to avoid detection from cameras is something that blends you naturally into the environment around you, and a design like this on a non-suspicious (now common) item like a face mask is one such avenue.
Either way, the ball is now back in the court of the facial recognition designers to try and combat this development.
And this is good news for privacy advocates who are looking for ways to protect their identity in the physical realm, just as they have been working to protect it in the digital space for years now.
While researchers are trying to beat cameras in the real world, this is a natural extension of work that has been ongoing in the online realm to beat emerging, unregulated FR technologies.
Adversarial attacks in the computer vision domain have gained a lot of interest in recent years, and various ways of fooling image classifiers and object detectors have been proposed.
Attacks against FR systems have also been shown to be effective. For example, research has demonstrated that face synthesis in the digital domain can be used to fool FR models.
In a world where your digital footprint is perhaps even more revealing than your physical footprint, and data is being sold across the world, it is important for all individuals to protect their identities online.
A variety of new online tools are available that allow you to make tiny changes to an image — mostly hard to spot with a human eye — to throw off an AI and cause it to misidentify who or what it sees in a photo.
As we have just discussed, this technique is a kind of adversarial attack, where small alterations to input data can force deep-learning models to make big mistakes.
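A minimal sketch of the single-image version of this idea is the fast gradient sign method (FGSM), a standard adversarial attack: nudge every pixel a tiny, fixed amount in whichever direction most increases the model’s error. The ‘photo’, model and epsilon below are illustrative stand-ins, not the internals of any real tool:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy image classifier: a linear model over a flattened 8x8 "photo",
# with 3 classes. Purely illustrative; real attacks target deep networks.
n_classes, dim = 3, 64
W = rng.normal(size=(n_classes, dim))
photo = W[0] / np.linalg.norm(W[0])  # a "photo" the model labels as class 0

def logits(x):
    return W @ x

def predict(x):
    return int(np.argmax(logits(x)))

def fgsm(x, y, eps):
    """One FGSM step: move each pixel by +/- eps in the direction
    that increases the cross-entropy loss of label y."""
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    grad = W.T @ (p - np.eye(n_classes)[y])  # d(cross-entropy)/d(input)
    return x + eps * np.sign(grad)

eps = 0.5
adv = fgsm(photo, predict(photo), eps)

print(predict(photo), predict(adv))
print(np.max(np.abs(adv - photo)))  # no pixel moves by more than eps
```

The point is that the per-pixel change is capped at a small epsilon — often too subtle for a human to notice on a real image — yet the model’s prediction flips.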
Fawkes is one example of this for the digital space.
Give this program a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos.
Fawkes has already been downloaded nearly half a million times from the project website.
One user has also built an online version, making it even easier for people to use.
Another system, called LowKey, expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, which also fools pre-trained commercial models.
Like Fawkes, LowKey is also available online.
A related approach turns images into what researchers call ‘unlearnable examples’, effectively making an AI ignore your selfies entirely.
Whether you are online or living in a world of Big Brother surveillance, it is essential we all learn the right tools to protect our identities from prying eyes.
For more TOTT News, follow us for exclusive content:
Facebook — Facebook.com/TOTTNews
YouTube — YouTube.com/TOTTNews
Instagram — Instagram.com/TOTTNews
Twitter — Twitter.com/EthanTOTT