
What if a stranger could snap your picture on the sidewalk, then use an app to quickly discover your name, address and other details?
Clearview AI, a startup founded by an Australian-born tech entrepreneur, is making that possible with a controversial app being used by hundreds of law enforcement agencies across the world.
CLEARVIEW AI: THE STORY
You may have never heard of Clearview AI, but there’s a good chance you are already part of the private company’s massive facial recognition database.
The New York Times has detailed how the firm’s facial recognition program scraped images from sources all over the internet to build a vast directory containing more than three billion photos.
Clearview AI — which is used by law enforcement agencies such as the FBI — has created a storm of controversy with the development of the draconian app.
The system’s backbone is a database that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites, with critics saying it goes far beyond anything ever constructed by governments or Silicon Valley giants.
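The scraping described above is, at its core, conventional web crawling: fetch pages, harvest the image URLs they contain, then download the photos. As a purely illustrative sketch — none of this reflects Clearview’s actual code, and the sample page is made up — extracting image links with Python’s standard library looks something like this:

```python
from html.parser import HTMLParser

class ImageLinkExtractor(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# Stand-in for a downloaded profile page; a real crawler would fetch
# this over HTTP and repeat the process across millions of sites.
sample_html = """
<html><body>
  <img src="https://example.com/photos/profile1.jpg" alt="profile">
  <p>Some unrelated text.</p>
  <img src="https://example.com/photos/profile2.jpg">
</body></html>
"""

parser = ImageLinkExtractor()
parser.feed(sample_html)
print(parser.image_urls)
```

The point of the sketch is how little machinery is required: the barrier to building such a database is legal and ethical, not technical.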
The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, yet holds just over 641 million images of US citizens.
More than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.
Privacy experts say the technology has the potential to alter privacy as we know it, warning that the app will change the relationship between those who surveil and those who are surveilled.
Twitter has sent Clearview a cease-and-desist letter, noting that the firm has violated the social network’s policies, while a class-action lawsuit has been filed alleging that the firm’s actions are a threat to civil liberties.
“The weaponization possibilities of this are endless,” said Eric Goldman, Co-Director of the High Tech Law Institute at Santa Clara University:
“Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”
If you want your image removed from Clearview’s database, you have to provide a headshot and a photo of your government issued ID.
Interestingly, the company that has been described as “The Secretive Company That Might End Freedom as We Know It” is the brainchild of an Australian-raised tech entrepreneur.
AUSTRALIAN INVOLVEMENT

The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses, which could allow users to identify every person they saw on the street.
These findings have drawn the interest of Australian Privacy Commissioner Angelene Falk, who said she wanted to know whether the data of Australians had been collected.
“It has caught my regulatory attention and I am making inquiries of Clearview AI to ascertain whether or not Australians’ data is implicated,” she told RN Breakfast.
The controversial app was created by Hoan Ton-That, who grew up in Australia and moved to the US at 19 years old. He worked in app development before founding Clearview AI four years ago.
Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology.
When The New York Times began looking into the company, its website was a bare page listing a non-existent Manhattan address as its place of business. The company’s only employee listed on LinkedIn, a sales manager named “John Good”, turned out to be Ton-That operating under a fake name.
In addition to Ton-That, Clearview was founded by Richard Schwartz and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.
Ton-That claims police in Australia are using his technology, but would not specify which police departments across the country currently used Clearview AI:
“We have a few customers in Australia who are piloting the tool, especially around child exploitation cases.”
Ton-That said he was confident the technology was not being misused, but concerns continue to mount given Australia’s push towards a surveillance state over the past two decades.
DIGITAL TYRANNY
Clearview has quickly risen to the forefront of the conversation around facial recognition — in particular, growing concern among activists and politicians over how it may be used to violate civil rights and whether it’s being adopted too quickly based on misleading claims about effectiveness.
Amazon, which makes a cloud-based facial recognition product called Rekognition, has faced similar criticism for selling its technology to law enforcement, despite repeated warnings from academics and activists who say it is flawed when used to try to identify certain individuals.
Facial recognition technology has always been controversial, but Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.
In one particularly dystopian twist, it was also reported that Clearview has identified and reached out to police officers who may have been talking with journalists by checking logs of which officers uploaded photos of those journalists into Clearview’s app.
“It’s extremely troubling that this company may have monitored usage specifically to tamp down on questions from journalists about the legality of their app,” a US Senator tweeted last Sunday.
Monique Mann, from the Australian Privacy Foundation, said people had a right to be concerned if their biometrics were harvested from social media channels.
“Their sensitive personal information — biometric information is sensitive information — has been taken without their knowledge or their consent, and it’s been put to use for applications that they were not aware of and they certainly haven’t agreed to,” Dr. Mann said.
If the sensitive biometric data of Australians has been collected, Clearview AI would have to abide by Australia’s Privacy Act, which requires informed consent.
In a blog post published on Thursday responding to criticism, Clearview rejected the idea of producing a public, consumer-facing facial recognition app that could be accessed by anyone.
Could this app underpin Australia’s emerging ‘Social Credit’ biometric landscape? Only time will tell.
Thank you to Full Member Londonreal for the feature pitch!
Members of our like-minded community have the ability to pitch or submit content on the website.
Get involved with us and help support our operation by signing up.
Learn more about additional benefits by clicking here.
RELATED CONTENT
The Secretive Company That Might End Freedom as We Know It | The New York Times
Class-action lawsuit filed against controversial Clearview AI startup | ZDNet
The Australian behind Clearview AI, a facial recognition software, says it is being used here | ABC News
Scraping the Web Is a Powerful Tool. Clearview AI Abused It | Wired.com
Australia: The Biometric Surveillance State | TOTT News
Australia: The Road to Digital Tyranny | TOTT News
KEEP UP-TO-DATE
For more TOTT News, follow us for exclusive content:
Facebook — Facebook.com/TOTTNews
YouTube — YouTube.com/TOTTNews
Instagram — Instagram.com/TOTTNews
Twitter — Twitter.com/EthanTOTT
