Advocates Warn Against Racial Bias of ‘Digital Stop-and-Frisk’

As Mayor Eric Adams plans to expand facial recognition technology to fight rising crime, new research suggests neighborhoods of color are being disproportionately surveilled

The report suggests a correlation between surveillance cameras and stop-and-frisk locations. Credit: Niamh Rowe (March 3, 2022).

Crime in New York City is rising. The first month of 2022 saw a 38.5% year-on-year jump in overall crime, with citywide shooting incidents up by a third. While Mayor Eric Adams has proposed expanding facial recognition technology as a solution, new research has substantiated long-held concerns about racial bias associated with the technology.

In a report published by Amnesty International as part of its Ban the Scan project, researchers layered camera placement data with stop-and-frisk location statistics to demonstrate the high incidence of “digital stop-and-frisk” in Black and brown neighborhoods. The camera placement data was crowdsourced by more than 15,000 digital volunteers using Google Maps.
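Conceptually, the report’s overlay amounts to a spatial join between two sets of coordinates: camera locations and stop-and-frisk locations. Below is a minimal sketch of that idea in Python; the coordinates and the 100-meter radius are made-up illustrations, not Amnesty’s actual data or methodology, which is documented in the report itself.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between (lat, lon) points."""
    r = 6_371_000  # Earth radius, meters
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Hypothetical sample points; the real report layered thousands of
# crowdsourced camera locations over NYPD stop-and-frisk records.
cameras = np.array([[40.6668, -73.8827], [40.6671, -73.8791]])  # (lat, lon)
stops = np.array([[40.6669, -73.8825], [40.7580, -73.9855]])

# For each stop location, count cameras within 100 m -- one crude way
# to quantify how surveilled a given stop-and-frisk site is.
for lat, lon in stops:
    dists = haversine_m(cameras[:, 0], cameras[:, 1], lat, lon)
    print(f"stop at ({lat:.4f}, {lon:.4f}): {int(np.sum(dists < 100))} cameras within 100 m")
```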

“We have long known that stop-and-frisk in New York is a racist policing tactic,” said Matt Mahmoudi, artificial intelligence and human rights researcher at Amnesty International, who led the report. “We now know that the communities most targeted with stop-and-frisk are also at greater risk of discriminatory policing through invasive surveillance.”

East New York, Brooklyn, has the highest concentration of surveillance. Credit: Amnesty International.

The report offers a “conservative yet comprehensive view” of where New York Police Department (NYPD) cameras operate across the city, so New Yorkers can know the minimum amount of surveillance they are exposed to when walking down any given street. It also shows how non-white New Yorkers have greater exposure to facial recognition software. For instance, 577 cameras were found at intersections in East New York, Brooklyn, making it the most surveilled neighborhood. The area is 54.4% Black and 30% Hispanic, according to the latest census data.

In a statement put out on Jan. 24, as part of his Blueprint to End Gun Violence in New York City, Mayor Adams said the police department will “explore the responsible use” of technology to fight crime. “From facial recognition technology to new tools that can spot those carrying weapons, we will use every available method to keep our people safe,” he said.

However, Adams noted that the technology “will not be the sole means to make arrests” but will be part of a larger case-building effort. In response to questions about its responsible use, his spokesperson Fabien Levy said the city would not use any software that results in biases against certain races or genders.

Public cameras overlaid with stop-and-frisk locations. Credit: Amnesty International.

Between 2016 and 2019, the NYPD conducted over 22,000 facial recognition searches. The technology works either one-to-one, directly comparing two images for a match, or one-to-many, comparing a face captured in a digital image or CCTV video frame against a database of faces drawn from “lawfully possessed arrest photos,” the NYPD claims.
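In rough terms, both modes reduce to comparing numerical “embeddings” of faces. The sketch below illustrates the distinction, assuming a generic embedding model; the function names and the 0.8 threshold are illustrative assumptions, not details of the NYPD’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe: np.ndarray, candidate: np.ndarray,
                      threshold: float = 0.8) -> bool:
    """One-to-one: does the probe image match this single candidate?"""
    return cosine_similarity(probe, candidate) >= threshold

def search_one_to_many(probe: np.ndarray, gallery: dict[str, np.ndarray],
                       threshold: float = 0.8) -> list[tuple[str, float]]:
    """One-to-many: rank every face in a gallery (e.g., a mugshot
    database) against the probe, keeping matches above the threshold."""
    scores = [(pid, cosine_similarity(probe, emb))
              for pid, emb in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```

The one-to-many mode is the contested one: a single low-quality CCTV still is scored against every face in the database, and any threshold choice trades false negatives against the false positives that advocates warn about.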

But advocates argue that it isn’t effective at bringing down crime.

“We’re being wrongly told that we need to abrogate our rights and put our neighbors in harm’s way just to reduce crime,” said Albert Fox Cahn of the Surveillance Technology Oversight Project (STOP). “And once again, it’s simply made up; there’s no evidence to back up the claim.”

Mahmoudi agreed. Asked whether any independent research shows facial recognition to be an effective crime deterrent, he said he was aware of none. “This is always the NYPD playbook. They prey on our worst fears to get permission for these powerful systems and then end up using it to target poverty,” Cahn said.

Precisely how the NYPD identifies faces is unclear, as the full extent of its business with third-party vendors like Clearview AI and DataWorks is not publicly available. This is a major sticking point among advocates like Mahmoudi and Cahn, who are calling for greater transparency. STOP and Amnesty International have even separately filed lawsuits against the Metropolitan Transportation Authority and the NYPD for refusing to produce documents about the use and accuracy of surveillance cameras.

Credit: Amnesty International.

Clearview AI and DataWorks scrape images from the internet and social media to give their clients — typically law enforcement — unparalleled access to peer into the lives of the people whose faces they pick up. “In theory, if we allowed the police to search anyone’s home whenever they felt like it, maybe they would find more crimes and arrest more people,” said Jerome Greco, a public defender with the Legal Aid Society. “But the core principles of our country say this is not acceptable.”

His skepticism toward NYPD surveillance stems not only from its racially patterned use but also from Orwellian fears about individual sovereignty in the digital age. “People should be able to walk out their door without fear of being identified and cataloged by law enforcement,” he added.

In 2019, New Yorker John McPherson filed a class action suit against Clearview for scraping images of him online without his consent, which were then sold to the NYPD. “Clearview misappropriated the value of his photograph, likeness, and biometric information and identifiers,” the suit states. “Consequently, Clearview has unlawfully profited therefrom.”

Credit: Niamh Rowe for NY City Lens (March 3, 2022).

Last year, the Legal Aid Society published documents that offer a glimpse of the extent to which the NYPD has used Clearview AI on New Yorkers. Obtained through a Freedom of Information Law request, the emails show that 50 members of the NYPD had access to or an account with Clearview AI between October 2018 and January 2020, the period the records cover.

“Glad we can help you guys out, it’s all exciting,” company executive Hoan Ton-That said in a 2018 email to Officer Tom A. Markiewicz. Public records show that Markiewicz has had 15 allegations against him, including two substantiated allegations for “Question and/or stop” and “Frisk.” Markiewicz ended his service in December 2019, for reasons that are unknown. “Look forward to working with you guys on the private sector side,” Lieutenant Gregory Besson said to Clearview’s co-founder in another email.

Advocates also caution against the technology due to racial bias.

A 2019 report by the National Institute of Standards and Technology (NIST) found that, for one-to-one matching, false positives for Asian and African American faces were between 10 and 100 times more frequent than for Caucasian faces, depending on the individual algorithm.

But a more recent federal study, published by NIST in 2021, suggests that Clearview AI may be one of the stronger facial recognition algorithms, ranking in the top 10 of almost 100 vendors in a one-to-many accuracy test. “Face recognition has undergone an industrial revolution, with algorithms increasingly tolerant of poorly illuminated and other low-quality images, and poorly posed subjects,” the report states.

Methodology used by over 15,000 digital volunteers. Credit: Amnesty International.

But Cahn is skeptical of the findings. “Even if the underlying algorithm improves, that doesn’t mean that the systems will improve under real life conditions, when used to analyze the typically low-quality photos captured at a crime scene,” he said. Additionally, “even when an algorithm works well, which it often doesn’t, there are numerous steps that can introduce human error and bias into the system.”

Greco agreed that lab-test conditions don’t replicate reality, since the images used in such experiments differ from those commonly used in investigations. Cameras are rarely at face level, he pointed out, so the stills are often taken from difficult angles.

“Irrespective of whether facial recognition is accurate at identifying Black faces, you’re dealing with a de facto environment in which institutional discrimination that happens at the policing level is still going to play out in facial recognition,” Mahmoudi said. “It is still technology that’s de facto incompatible with international human rights law, but also with large parts of the American Constitution, including the First Amendment, right?”
