Predictive Policing & AI

Why poor results to date? Because the failed efforts were built in 2012 and 2020, prior to AI. The programming was done by humans influenced by biased/racist police.

As summarized by the Leadership Conference Education Fund:

In 1999, Blacks and Latinos made up 50 percent of New York’s population, but accounted for 84 percent of the city’s stops. Those statistics have changed little in more than a decade. According to the court’s opinion, between 2004 and 2012, the New York Police Department made 4.4 million stops under the citywide policy. More than 80 percent of those stopped were Black and Latino people. The likelihood a stop of an African-American New Yorker yielded a weapon was half that of White New Yorkers stopped, and the likelihood of finding contraband on an African American who was stopped was one-third that of White New Yorkers stopped.

Wouldn’t AI take the above statistics into account in directing where more policing should be concentrated, without bias and/or racism? More fair and impartial policing?
I wouldn’t want to release AI predictive policing without human oversight though.
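
Here is a minimal sketch of the feedback loop buried in those numbers, assuming only what the quote gives: the stop counts below are the rough 80/20 split of the 4.4 million stops, and the per-stop hit rates are hypothetical placeholders that preserve the quoted 2:1 White-to-Black weapon hit-rate ratio.

```python
# Hypothetical illustration, not any deployed system. Stop counts reflect the
# ~80/20 split of 4.4 million stops; the absolute hit rates are invented,
# with only their 2:1 ratio taken from the court's summary.

stops = {"Black/Latino": 3_520_000, "White": 880_000}
weapon_hit_rate = {"Black/Latino": 0.01, "White": 0.02}  # hypothetical levels

for group, n in stops.items():
    found = n * weapon_hit_rate[group]
    print(f"{group}: {n:,} stops, hit rate {weapon_hit_rate[group]:.1%}, "
          f"~{found:,.0f} weapons found")

# ~35,200 weapons found among Black/Latino stops vs ~17,600 among White
# stops. A model trained on these raw counts would send even more stops to
# the group where each stop was half as likely to find anything, so an AI
# would have to learn from per-stop hit rates, not raw counts, to be fairer.
```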

2 Likes

I would expect AI-directed policing to use facial recognition, and to weight prior criminal history heavily in deciding whom to stop.
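
If a system really did combine those two signals, a toy priority score might look like the sketch below; the function name, weights, and inputs are all hypothetical, invented only to show how heavy prior-history weighting carries existing arrest-record bias straight into stop decisions.

```python
# Toy sketch of the scoring rule imagined above -- every weight and input is
# hypothetical, not taken from any real system.

def stop_priority(face_match_conf: float, prior_arrests: int) -> float:
    """Combine a facial-recognition match score with a heavily weighted prior record."""
    return 0.6 * face_match_conf + 0.4 * min(prior_arrests / 5, 1.0)

# A weak face match with a long record outranks a strong match with a clean
# record, so whatever bias produced the arrest records decides who is stopped.
print(stop_priority(face_match_conf=0.9, prior_arrests=0))  # 0.54
print(stop_priority(face_match_conf=0.5, prior_arrests=5))  # 0.70
```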
Wendy

2 Likes

I’m not entirely sure. But we know the solutions to the crime problem: better public schooling, better health care, etc. We just don’t do that here.

Here the solution to crime always involves punishment. And this is just using AI to try to make that punishment happen with less effort.

4 Likes

Seems like a better solution would be to simply use precogs.

1 Like