A Black man was wrongfully jailed for a week after a facial recognition error, Ars Technica writes.
With a growing number of cities and law enforcement agencies deploying ‘smart tech’ such as AI-based facial recognition software, the risk is that existing divisions and inequalities are exacerbated.

Police in Louisiana reportedly relied on an incorrect facial recognition match to secure warrants to arrest a Black man for thefts he did not commit.
Randal Reid, 28, was in jail for almost a week after the false match led to his arrest, according to a report published Monday on NOLA.com, the website of the Times-Picayune/New Orleans Advocate newspaper.
It’s not clear exactly what facial recognition system was used in this case. In previous cases, Jefferson Parish Sheriff Joe Lopinto’s office requested facial recognition analyses through the Louisiana State Analytic and Fusion Exchange in Baton Rouge, which uses Clearview AI and MorphoTrak systems, the report said.
Clearview software compares faces to pictures on social media and many other sources. “Our platform, powered by facial recognition technology, includes the largest known database of 30+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources,” the company’s website says.
Interestingly, at an AI conference I attended, one of the speakers described the use of AI by the Dutch police. According to this speaker, officers would only follow AI advice when it confirmed their own intuitions and experience, not when it was at odds with them. But perhaps that is a different context. In any case, it would be too easy to simply blame the technology for not working perfectly (yet), or even to dismiss it as ‘racist technology’. The underlying issue is how commercial, proprietary smart urban tech is adopted and embedded into organizations that are supposed to serve public needs, and how this in practice mirrors and enlarges inequalities in society at large.
Source to original article on Ars >>