Sam Biddle, writing for The Intercept, gives a terrifying account of how governments — especially law enforcement and intelligence agencies — and corporations are employing junk science in their efforts to find the technological magic elixir to ‘spot the perps’:
FACIAL RECOGNITION has quickly shifted from techno-novelty to fact of life for many, with millions around the world at least willing to put up with having their faces scanned by software at the airport, on their iPhones, or in Facebook’s server farms. But researchers at New York University’s AI Now Institute have issued a strong warning against not only ubiquitous facial recognition, but its more sinister cousin: so-called affect recognition, technology that claims it can find hidden meaning in the shape of your nose, the contours of your mouth, and the way you smile. If that sounds like something dredged up from the 19th century, that’s because it sort of is…
AI Now, which was established last year to grapple with the social implications of artificial intelligence, expresses in the document [AI Now’s 2018 report] particular dread over affect recognition, “a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces.” The thought of your boss watching you through a camera that uses machine learning to constantly assess your mental state is bad enough, while the prospect of police using “affect recognition” to deduce your future criminality based on “micro-expressions” is exponentially worse.
That’s because “affect recognition,” the report explains, is little more than the computerization of physiognomy, a thoroughly disgraced and debunked strain of pseudoscience from another era that claimed a person’s character could be discerned from their bodies — and their faces, in particular. There was no reason to believe this was true in the 1880s, when figures like the discredited Italian criminologist Cesare Lombroso promoted the theory, and there’s even less reason to believe it today. Still, it’s an attractive idea, despite its lack of grounding in any science, and data-centric firms have leapt at the opportunity to not only put names to faces, but to ascribe entire behavior patterns and predictions to some invisible relationship between your eyebrow and nose that can only be deciphered through the eye of a computer. Two years ago, students at a Shanghai university published a report detailing what they claimed to be a machine learning method for determining criminality based on facial features alone. The paper was widely criticized, including by AI Now’s Kate Crawford, who told The Intercept it constituted “literal phrenology … just using modern tools of supervised machine learning instead of calipers.”
No amount of technological wizardry can overcome faulty assumptions about human psychology:
“Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads. “The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling or what type of person they intrinsically are is proving attractive to both corporations and governments, even though the scientific justifications for such claims are highly questionable, and the history of their discriminatory purposes well-documented.”…
As with any computerized system of automatic, invisible judgment and decision-making, the potential to be wrongly classified, flagged, or tagged is immense with affect recognition, particularly given its thin scientific basis: “How would a person profiled by these systems contest the result?” Crawford added. “What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings? Some of these products cite deeply controversial theories that are long disputed in the psychological literature, but are being treated by AI startups as fact.”
Among the problems with the basic premises behind ‘affect recognition’ technology are:
a) there are marked cultural differences in how individuals react to emotional stimuli and display emotion;
b) there are marked individual differences in how people react to emotional stimuli and display emotion¹;
c) there are substantial differences in how individuals interpret the emotional displays of people within their own culture versus those from outside it (see also here); and, not surprisingly, given a, b, and c:
d) artificial intelligence and machine learning programs have consistently taken on the prejudices of their developers and of the insular culture of computer science (a toy sketch of this effect follows this list).
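To make point (d) concrete, here is a minimal, purely hypothetical sketch of how prejudice baked into training labels is faithfully reproduced by a classifier. Every feature, number, and label below is invented for illustration; nothing comes from any real affect-recognition product:

```python
# Hypothetical illustration: a toy model showing how prejudiced training
# labels are reproduced by a classifier, however "objective" the pipeline looks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: a genuinely irrelevant "facial" measurement, and a
# demographic group flag (0 or 1) that should carry no signal at all.
facial_feature = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased annotators label group 1 as "suspicious" far more often,
# independent of any real behavior. The labels ARE the prejudice.
labels = (rng.random(n) < np.where(group == 1, 0.30, 0.05)).astype(int)

X = np.column_stack([facial_feature, group])
model = LogisticRegression().fit(X, labels)

# The model has faithfully learned the annotators' bias:
for g in (0, 1):
    probe = np.column_stack([np.zeros(100), np.full(100, g)])
    print(f"group {g}: mean predicted 'suspicion' = "
          f"{model.predict_proba(probe)[:, 1].mean():.2f}")
# group 0 scores ~0.05, group 1 scores ~0.30 -- same face, different verdict.
```

The pipeline itself does nothing “wrong”: it optimizes exactly what it was given. The prejudice enters through the labels, and no amount of additional compute or data removes it.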
In short, the whole premise that the motives and/or emotional state (for example, ‘guilty conscience’, or ‘hostile intent’) of any individual could conceivably be determined with any accuracy or reliability by such means is not merely false but patently absurd.
Even a supposedly simpler task than identifying an emotional state — matching a face to a name — is beyond the capability of current AI systems:
Amazon’s face surveillance technology is the target of growing opposition nationwide, and today, there are 28 more causes for concern. In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition,” the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.
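The ACLU said it ran Amazon’s stock tool, reportedly with its default settings. It has not published its test code, but a plausible version of such a test with the boto3 Rekognition client might look like the following sketch (the collection name and file paths here are hypothetical):

```python
# Hypothetical sketch of the kind of test the ACLU described: index a set of
# arrest photos into a Rekognition collection, then search legislators'
# portraits against it at the default 80% similarity threshold. The ACLU has
# not published its code; collection and file names are invented.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

COLLECTION = "mugshot-test-collection"  # hypothetical name
client.create_collection(CollectionId=COLLECTION)

# Index each arrest photo (paths are placeholders).
for path in ["mugshots/0001.jpg", "mugshots/0002.jpg"]:
    with open(path, "rb") as f:
        client.index_faces(
            CollectionId=COLLECTION,
            Image={"Bytes": f.read()},
            ExternalImageId=path.split("/")[-1],
        )

# Search a legislator's portrait against the collection. Rekognition's
# default match threshold is 80% -- the setting the ACLU says it left in place.
with open("portraits/member_of_congress.jpg", "rb") as f:
    resp = client.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
    )

for match in resp["FaceMatches"]:
    # Any hit here is a claimed identity match.
    print(match["Face"]["ExternalImageId"], f'{match["Similarity"]:.1f}%')
```

Any face returned above the threshold counts as a claimed match; in the ACLU’s test, 28 sitting members of Congress were “matched” to arrest photos this way.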
The pattern of false positives with this technology should surprise no one:
The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.).
How poorly does the best system function? Stunningly so:
Facial recognition is not just useless. In police hands, it is dangerous.
Now facial recognition is here for real. The police are scanning thousands of our faces – at protests, football matches, music festivals and even Remembrance Day commemorations – and comparing them against secret databases.
The only difference is that in the books and the films it always worked. Yesterday, Big Brother Watch published the results of its investigation into police use of facial recognition software. It revealed that the Met’s technology is 98% inaccurate.
This hasn’t come as a big surprise to us at Liberty. When we were invited to witness the Met’s trial of the technology at Notting Hill carnival last summer, we saw a young woman being matched with a balding man on the police database. Across the Atlantic, the FBI’s facial recognition algorithm regularly misidentifies women and people of colour. This technology heralds a grave risk of injustice by misidentification, and puts each and every one of us in a perpetual police lineup.
More frightening still, the computer scientists who develop these systems, and the agencies that deploy them, overestimate both the accuracy of the technology and their own judgement:
On May 10th, 2018, NPR asked if the U.S. would invest in real-time facial recognition.
The article asserts that “Facial recognition accuracy rates have jumped dramatically in the last couple of years, making it feasible to monitor live video.” The author links to coverage of a Chinese breakthrough in the technology.
But most other sources on facial recognition software signal a bleak outlook.
London cops have made zero arrests with only two correct matches for a false positive rate of 98%. That means that innocent people are being falsely identified as criminals. (emphasis added)
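The arithmetic behind a number like that is worth making explicit. The following sketch uses illustrative assumptions (the crowd size, watchlist prevalence, and error rates are invented for the example, not the Met’s actual figures) to show why even a matcher that is right 99% of the time per face drowns in false positives when almost nobody scanned is actually on the watchlist:

```python
# Illustrative base-rate arithmetic (all numbers are assumptions, not the
# Met's actual figures): why a "99% accurate" face matcher still produces
# overwhelmingly false alerts when scanning crowds.
crowd = 100_000             # faces scanned at an event
on_watchlist = 10           # of those, actually on the watchlist
sensitivity = 0.99          # P(alert | on watchlist)
false_positive_rate = 0.01  # P(alert | NOT on watchlist)

true_alerts = on_watchlist * sensitivity                     # ~9.9
false_alerts = (crowd - on_watchlist) * false_positive_rate  # ~999.9
total_alerts = true_alerts + false_alerts

share_false = false_alerts / total_alerts
print(f"{total_alerts:.0f} alerts, of which {share_false:.1%} are wrong")
# ~1010 alerts, ~99.0% of them wrong -- a 98%+ error rate among matches
# is exactly what the base rates predict.
```

This is the base-rate fallacy at work: when the condition being searched for is rare, the false alerts generated by the vast majority who lack it swamp the true alerts from the few who have it.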
Not only does this technology not work in the way its proponents (especially those who profit from it) suggest — it cannot work.
It can only perpetuate systems of injustice.
1. Of particular importance in this regard: “Although there is some consistency in emotional responses to certain situations, there is also considerable variability among cultures in the prevalence and salience of different kinds of emotional experience, in emotional reactions to particular situations (Mesquita & Ellsworth, 2001; Mesquita & Frijda, 1992), and in the way emotions are described (Wierzbicka, 1999). There is as much or more variability among individuals within the same culture: Some people are austere, others volatile; some fear dogs, others love them; some are enraged by adolescent excesses, others amused. Focusing on the situations that elicit the same emotion among all members of the species distracts us from the overwhelming preponderance of situations that do not.” (Nesse and Ellsworth, 2009, p. 134, emphasis added, link to original above)