Why poor results to date? Because the failed efforts date from 2012 and 2020, prior to modern AI. The programming was done by humans influenced by biased and/or racist police.
As summarized by The Leadership Conference Education Fund:
In 1999, Blacks and Latinos made up 50 percent of New York’s population, but accounted for 84 percent of the city’s stops. Those statistics have changed little in more than a decade. According to the court’s opinion, between 2004 and 2012, the New York Police Department made 4.4 million stops under the citywide policy. More than 80 percent of those stopped were Black and Latino people. The likelihood a stop of an African-American New Yorker yielded a weapon was half that of White New Yorkers stopped, and the likelihood of finding contraband on an African American who was stopped was one-third that of White New Yorkers stopped.
Wouldn't AI take the above statistics into account when directing where more policing should be concentrated, without bias and/or racism? Wouldn't that make for more fair and impartial policing?
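To make that concrete, here is a minimal sketch of the kind of bias check a human overseer could run over a predictive-policing model's stop recommendations. The data, function names, and threshold are all my own illustration, not any deployed system. The idea comes straight from the quoted statistics: if the hit rate (contraband actually found per stop) for one group is far below another's, stops of the low-hit-rate group are being made on weaker evidence, which is a red flag for bias.

```python
from collections import defaultdict

# Hypothetical stop records: (group, contraband_found). In a real audit
# these would come from logged outcomes of the model's recommended stops.
stops = [
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", False),
]

def hit_rates(records):
    """Fraction of stops per group that actually found contraband."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hits, total stops]
    for group, found in records:
        counts[group][0] += int(found)
        counts[group][1] += 1
    return {g: hits / total for g, (hits, total) in counts.items()}

def audit(records, max_ratio=1.5):
    """Flag the model if any group's hit rate is far below another's.

    A large gap is the pattern in the NYC stop-and-frisk numbers, where
    hits on stopped Black New Yorkers were one-half to one-third as
    likely as hits on stopped White New Yorkers.
    """
    rates = hit_rates(records)
    lo, hi = min(rates.values()), max(rates.values())
    if lo == 0:
        flagged = hi > 0          # one group never yields hits at all
    else:
        flagged = hi / lo > max_ratio
    return rates, flagged

rates, flagged = audit(stops)
print(rates)                      # {'A': 0.25, 'B': 0.5}
print("needs human review:", flagged)  # True: B's hit rate is 2x A's
```

This is just the "outcome test" idea from the fairness literature in its simplest form; a real audit would need confidence intervals and careful handling of small samples. But a check like this could run automatically, with a human deciding what happens when it trips.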
I wouldn't want to release AI predictive policing without human oversight, though.