Too often, a facial-recognition search represents virtually the entirety of a police investigation.

Illustration by Ibrahim Rayintakath

If you’ve watched just about any crime show, the scenario will be familiar: investigators spot someone on surveillance footage and, with a few clicks on a keyboard, they quickly have his name, address, and rap sheet, and are off racing to apprehend him. Actual police work is far more deliberate—or, at least, it is supposed to be. As Eyal Press reports in an alarming investigation, published as part of this week’s A.I. Issue, the use of facial-recognition software, marketed by a patchwork of private companies, has become an increasingly common policing technique across the United States. In some cases, it produces the only evidence that leads to a suspect’s arrest.

The problem is that the technology doesn’t work the way it does on TV. “The typical search generates not just a single face but, rather, a ‘candidate list’ of dozens, sometimes hundreds, of potential matches,” Press explains. “Most of the returns, in other words, are false positives. The task of reviewing the list and deciding which, if any, candidate might be a correct match falls not to an algorithm but to a person.” That might sound like a good thing, a human check against the machine. But critics have identified flaws in the photo sets being used, gaps in training and technical fluency among investigators, and examples of cognitive bias, in which the police are emboldened to act on what often only feels like hard data. The result, as Press explores, is a growing list of people who have had their lives upended and been arrested for crimes to which they had no connection at all.