Clearview AI's software rests on a simple concept. Submit a photo of a face – whatever the angle or lighting, even if the person is wearing a mask or glasses – and its search engine returns the person's identity along with their contact details. Its database contains 32 billion images and grows by 75 million new faces every day. The company, which had stayed under the radar until now, saw a spike in notoriety in September with the publication of Your Face Belongs to Us (Random House, not yet translated into French) by Kashmir Hill, a journalist at The New York Times. Her investigation reveals both the power of the tool and the practices of a company ready to do anything to protect itself.
After several unsuccessful attempts to trace the founders – she eventually succeeded – Kashmir Hill found a duly licensed private investigator, and therefore one authorized to use Clearview AI. The man, based in Texas, agreed to help her with her research on condition that he remain anonymous. He tested the service by entering a photo of the journalist into the Clearview database. Surprise: nothing came up. Impossible, she told herself; she knew that, deliberately or not, she had left her share of photos on social networks. A few minutes later, the detective received a call: "Hello, this is Marko from Clearview AI technical support. I have a question: why did you submit the photo of a journalist from The New York Times? – Did I? – Yes, this woman from The New York Times, Kashmir Hill. Do you know her? – Not at all, I'm in Texas, how could I?"
The Clearview representative told him that running searches on journalists violated the service's terms of use and, to his amazement, cut off his access on the spot.
Not only had Clearview AI made all data on the journalist inaccessible, but its technicians appeared to have set up an alert to flag any search on her name – a way of tracking the progress of her investigation. The discovery made her blood run cold. What else is Clearview AI capable of? Many things, it turns out. Collecting faces from VKontakte, the "Russian Facebook", for instance, and giving Ukrainian forces the means to identify the dead in the ranks of the Russian army. "As soon as Clearview is used to identify the bodies of Russian soldiers to send photos to their loved ones, we are in absolute dystopia," she tells L'Express.
Identifying people in the background
The power of Clearview AI is dizzying. The application can identify, with great precision, a face in the background of a photograph, making it possible to pick individuals out of large crowds. A dictator's dream. Viktor Orban's Hungarian government has, moreover, purchased Clearview AI's services, which is enough to frighten its opponents. One can imagine the risks if totalitarian regimes adopted this technology… Unlike Iran or North Korea, many dictatorships are not subject to restrictions on the export of sensitive technologies.
For the moment, the use of Clearview AI is reserved for governments, police forces and entities able to demonstrate some kind of security mission. Even that last provision opens the door to spectacular abuses. One of the most telling examples Kashmir Hill recounts happened when she went to Madison Square Garden in New York to watch a hockey game. She was accompanied by a lawyer friend whose firm, among its countless cases, was handling a dispute with the famous Manhattan arena. "We had barely passed through the security gate," she says, still stunned, "when two agents asked for our identification and then asked us to leave the premises." A Clearview AI user, Madison Square Garden had decided to bar entry to anyone – from partners to administrative staff – working for a law firm involved in litigation with the arena.
If this technology were ever made available to the general public, the societal consequences would be frightening. In a country like the United States, where image rights are far less protected than in France, a woman photographed in a public place could be identified by a harasser; at a demonstration or rally, the participants – abortion opponents, trade unionists, civil rights defenders, activists of all kinds – could be catalogued en masse.
Even journalistic practice is affected by Clearview AI. "In the past, when you met a sensitive source, you were careful to leave your phone at home [so as not to be geolocated]," notes Kashmir Hill. "Today, you have to avoid every place where there might be a surveillance camera."
One of the most astonishing aspects of the book, and of the interview with Kashmir Hill, is the profile of Clearview AI's principal founder, Hoan Ton-That, a dangerously immature Australian in his thirties who has never set up so much as an ethics committee. The portrait the journalist paints is less that of a responsible entrepreneur than of a geek utterly out of his depth.
Fines and bans
Hoan Ton-That's financial backers, however, have taken care to surround him with "lawyers whose main mission is to keep the company from collapsing under the weight of fines", as Kashmir Hill sums it up. In total, Clearview AI faces at least $70 million in potential fines – no small problem for a company that has raised only $38 million.
The company cast a wide net to build its database, blithely ignoring national regulations such as the European General Data Protection Regulation (GDPR). It has been banned in Canada, hit with a heavy penalty in the United Kingdom, and served a formal notice in Australia. In France, the CNIL, the data protection authority, imposed a 20 million euro fine.
The only ones capable of cutting off Clearview AI's oxygen are the social networks, which have, unwittingly, contributed much of its power. For now, Meta has confined itself to a simple formal demand that the company stop collecting images. The others have not moved. Why such timidity? Kashmir Hill canvassed her contacts at Meta, Twitter (now X), LinkedIn, Venmo (the mobile payment service) and elsewhere. "Many are obviously upset [by this inaction]. But they believe a lawsuit would draw even more public attention to the volume of images accumulated. They know that they have, in a way, betrayed their users with these practices… So they don't want to add to it. And then I think these giants would simply lose in court, given that a federal court has ruled that everything on the Internet is de facto public…"
The dam protecting the general public from the dangers of Clearview AI is fragile. The company is allowed to sell its facial recognition algorithm as long as it separates it from the database of 32 billion faces. Any entity could therefore build its own image database by "scraping" websites (extracting their data automatically) and organize it with the Clearview AI system. Note that other major tech players have similar technological capabilities, starting with behemoths like Meta, Google or, more worrying because of its extraterritorial reach, TikTok. For now, these companies are careful to keep those capabilities to themselves. Will they always?
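To give a sense of how low the technical barrier has become, here is a minimal illustrative sketch of that two-step pipeline – scraped photos in, a searchable face index out – using the open-source face_recognition Python library. The file paths and the 0.6 match threshold are assumptions for the example; this is in no way Clearview AI's actual, proprietary system.

```python
# Illustrative sketch only: turning a folder of scraped photos into a
# searchable face index with the open-source `face_recognition` library.
# NOT Clearview AI's pipeline, whose internals are proprietary.
import face_recognition
import numpy as np

def build_index(image_paths):
    """Compute a 128-dimensional embedding for every face found in the images."""
    index = []  # list of (source_path, embedding) pairs
    for path in image_paths:
        image = face_recognition.load_image_file(path)
        for encoding in face_recognition.face_encodings(image):
            index.append((path, encoding))
    return index

def search(index, probe_path, tolerance=0.6):
    """Return the source images whose faces are close to the probe face.

    0.6 is the library's conventional match threshold; lower is stricter.
    """
    probe_image = face_recognition.load_image_file(probe_path)
    probe_encodings = face_recognition.face_encodings(probe_image)
    if not probe_encodings:
        return []  # no face detected in the probe photo
    distances = face_recognition.face_distance(
        np.array([encoding for _, encoding in index]), probe_encodings[0]
    )
    return [index[i][0] for i, dist in enumerate(distances) if dist <= tolerance]

# Hypothetical usage: index a handful of scraped photos, then run a search.
index = build_index(["scraped/photo1.jpg", "scraped/photo2.jpg"])
print(search(index, "probe.jpg"))  # paths of photos containing a matching face
```

At web scale, the linear scan would presumably be replaced by an approximate nearest-neighbour index, but the principle is the same: a face becomes a vector, and a vector can be searched.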