Slowly but surely, it seems, Big Tech’s promise that mass surveillance and artificial intelligence could provide the answers to crime has failed to deliver.
Until now, the only thing slowing the rollout of this technology has been the effort to raise awareness of how many of these programs were kept completely secret from the public.
In the many articles I’ve written about various pre-crime systems, I’ve always cited one key finding: “Predictive Algorithms Are No Better At Telling The Future Than A Crystal Ball.”
That title does not come from me or from other independent media “conspiracy theorists”; it comes from Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia. As an expert in the field who draws on the findings and opinions of many other experts in artificial intelligence, he states clearly:
One of the fundamental flaws of predictive algorithms is their reliance on “inductive reasoning”. This is when we draw conclusions based on our knowledge of a small sample, and assume that those conclusions apply across the board.
He focuses on how this type of reasoning can gravely misinform business hiring practices, which is bad enough; when we consider the even greater complexity of crime, reliable predictions become harder still.
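To make that sampling flaw concrete, here is a minimal Python sketch. Everything in it is invented for illustration — the district names, crime rates, and patrol numbers are hypothetical, and this is not PredPol’s actual algorithm. Two districts have identical true crime rates, but because one was historically patrolled more, its incidents dominate the recorded sample, and a naive predictor that generalizes from that small sample keeps sending patrols back to the same place:

```python
import random
from collections import Counter

random.seed(42)

# Purely hypothetical numbers: both districts have the SAME true crime rate.
TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}

# The historical sample is small and skewed: district_a was patrolled far
# more heavily, so far more of its incidents were ever recorded.
patrol_hours = Counter({"district_a": 900, "district_b": 100})

def recorded_incidents(hours):
    """Incidents enter the dataset only when a patrol is present to record them."""
    recorded = Counter()
    for district, h in hours.items():
        for _ in range(h):
            if random.random() < TRUE_CRIME_RATE[district]:
                recorded[district] += 1
    return recorded

# "Predict" the next hot spot by generalizing from the recorded sample,
# then send patrols where the prediction points -- the inductive-reasoning trap.
for period in range(3):
    data = recorded_incidents(patrol_hours)
    total = sum(data.values())
    shares = {d: data[d] / total for d in patrol_hours}
    print(f"period {period}: recorded={dict(data)}, predicted shares={shares}")
    # Allocate next period's 1,000 patrol hours in proportion to the prediction.
    patrol_hours = Counter({d: round(1000 * s) for d, s in shares.items()})
```

By construction both districts are equally risky, yet each period the “prediction” points back at the heavily sampled district. The small sample doesn’t generalize; it merely echoes past patrol patterns, which is essentially what the departments quoted below report: the software told them nothing they didn’t already know.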
After many years of using predictive crime algorithms, some of the nation’s most enthusiastic supporters of “Policing in the 21st Century” are beginning to declare the mission a failure.
An admirably thorough article by Los Angeles Times investigative reporter Mark Puente highlights the latest developments:
[T]he widely hailed tool the LAPD helped create has come under fire in the last 18 months, with numerous departments dumping the software because it did not help them reduce crime and essentially provided information already being gathered by officers patrolling the streets.
After three years, “we didn’t find it effective,” Palo Alto police spokeswoman Janine De la Vega said. “We didn’t get any value out of it. It didn’t help us solve crime.”
[…]
“We tested the software and eventually subscribed to the service for a few years, but ultimately the results were mixed and we discontinued the service in June 2018,” [Mountain View police] spokeswoman Katie Nelson said in a statement.
Beyond concerns from law enforcement, the data-driven programs are also under increasing scrutiny by privacy and civil liberties groups, which say the tactics result in heavier policing of black and Latino communities.
In March, the LAPD’s own internal audit concluded there were insufficient data to determine if the PredPol software — developed by a UCLA professor in conjunction with the LAPD — helped to reduce crime. LAPD Inspector General Mark Smith said there also were problems with a component of the program used to pinpoint the locations of some property crimes.
In response, Los Angeles Police Chief Michel Moore ended a controversial program intended to identify individuals most likely to commit violent crimes and announced he would modify others.
[…]
Some police leaders and academics expected predictive technology to revolutionize law enforcement by preempting criminal activity.
That didn’t happen.
In Rio Rancho, N.M. — a city of about 100,000 spread over 100 square miles — Police Capt. Andrew Rodriguez said PredPol was a disappointment.
For example, he said, it targeted a remote desert area as a hot zone for cars being stolen after thieves dumped one vehicle there. The department ultimately dropped the service.
“It never panned out,” said Rodriguez, who spent 11 years with the LAPD. “It didn’t really make much sense to us. It wasn’t telling us anything we didn’t know.”
Currently, 60 of the roughly 18,000 police departments across the United States use PredPol, [company chief executive Brian] MacDonald said, and most of those are smaller agencies with between 100 and 200 officers.
Read the entire article at the Los Angeles Times.
One aspect the article does not address, however, is the number of people who might have had their rights compromised and their lives turned upside down by constant surveillance and harassment. Let’s hope a proper audit is done to assess that damage and add it to the technical failures. Chicago would be a great place to start: its “Heat List” of potential criminals grew from 400 to 5,000 individuals without any explanation of how someone gets on the list, how to get off it, or whether it was at all effective in reducing crime.
With any luck, this news is a small but significant step in the right direction toward slowing the rollout of A.I. systems that are sold as superior to our own intelligence and common sense. Whether it is faulty pre-crime algorithms, dubious facial recognition systems, or the growing use of social credit scoring, our over-reliance on technology to solve societal problems is becoming clearer by the day.
Nicholas West writes for Activist Post.