Since its public debut, Clearview AI, a facial recognition app, has drawn widespread attention. The app collects public photos from social media sites, including Twitter and Instagram, to build a facial recognition database. According to The New York Times, it allows a user to upload a picture of someone and retrieve public photos of that person, along with links to where the photos appeared. But the app carries risk because its ability to protect data remains untested.
On Feb. 26, the company announced that its entire client list had been stolen by hackers. It then said it was working to strengthen its security, NBC News reported. Clearview AI not only puts those who are searched at risk but has also been called into question for selling its technology to countries with documented human rights abuses and to companies in the private sector.
While the technology was allegedly meant to help law enforcement solve crimes such as sexual exploitation and identity theft, Clearview AI has reportedly violated Apple’s developer program policies by offering government and private entities a preview of its services. As a result, Apple suspended Clearview AI’s developer account for sharing a preview of a program meant only for developers. Among those the program was shared with were U.S. Immigration and Customs Enforcement (ICE), Macy’s, Walmart, and the NBA, BuzzFeed News reported. In addition, more than 600 law enforcement agencies have started using Clearview AI in the past year, according to The New York Times.
Two U.S. senators, including Sen. Ed Markey of Massachusetts, sent a letter to the startup on Tuesday questioning its sharing of the application with countries such as Saudi Arabia and the United Arab Emirates. “Recent reports about Clearview potentially selling its technology to authoritarian regimes raise a number of concerns because you would risk enabling foreign governments to conduct mass surveillance and suppress their citizens,” Markey wrote. To date, more than 2,200 private and public organizations worldwide, including law enforcement agencies in 27 different countries, have tried the app, BuzzFeed News reported.
“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.” Law enforcement has used facial recognition tools for almost 20 years, but those tools have historically been limited to searching government-provided images such as mug shots and driver’s license photos, according to The New York Times.
“Government agents should not be running our faces against a shadily assembled database of billions of our photos in secret and with no safeguards against abuse,” ACLU staff attorney Nathan Freed Wessler told BuzzFeed News. “More fundamentally, that so many law and immigration enforcement agencies were hoodwinked into using this error-prone and privacy-invading technology peddled by a company that can’t even keep its client list secure further demonstrates why lawmakers must halt use of face recognition technology, as communities nationwide are demanding.”
Studies have repeatedly shown that facial recognition systems misidentify people of color at far higher rates than white people. One study found that, depending on the type of search and the algorithm used, “Asian and African American people were up to 100 times more likely to be misidentified than white men,” The Washington Post reported. Facial recognition tools used by law enforcement have in the past misidentified many African Americans as crime suspects. In 2018, Microsoft, IBM, and Amazon were called out for facial recognition technology that was biased against people with darker skin tones: their systems failed to accurately identify people with darker skin, in one widely reported test matching members of Congress to criminal mugshots and in others failing to recognize the faces of people of color at all, pointing to bias built into the technology’s creation.
Apps like Clearview AI present a threat not only to people of color but to anyone they are used to search for. Women in particular are at greater risk: a user could take a picture of a stranger he finds attractive and use the app to reveal not just her name but other images of her and even where she lives. In the hands of corrupt governments, apps like Clearview AI could also be used to identify those who speak out against the state or join protests, creating potential human rights violations. While Clearview AI’s creators claim a “First Amendment right to public information,” social media sites including Twitter, Facebook, and YouTube have sent cease-and-desist letters over the app’s use of their platforms to collect data.