Why doesn't predictive policing work? It seems like it should. Essentially, the police feed data into a computer model. The model identifies where crimes have happened and how often, and then predicts where future crime is likely to occur. Officers can then be dispatched to those areas, either to make arrests or so that their mere presence deters the crime from happening at all.
The problem is that predictive policing has been accused of being biased or even racist. That can seem strange, because it was originally pitched as a way around human bias. A police officer can be prejudiced against people of a certain religion or race, for example, but a computer supposedly cannot. So why does predictive policing still appear to have these problems?
Feedback loops
The answer is that the data fed into the algorithm can create a feedback loop. The algorithm has to be trained on data gathered by human officers: every time they make an arrest, that information goes into the system.

But what if the officers are already biased? If they tend to arrest people of a certain race at an abnormally high rate, the computer will conclude that people of that race commit more crimes. It will then send more officers to that area, where they will make more arrests, which feeds back into the model and reinforces the pattern. A small amount of bias at the start can throw off the whole algorithm, so the system never works as intended.
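To make that feedback loop concrete, here is a minimal, purely hypothetical sketch (not any real predictive policing system). It assumes two made-up neighborhoods with identical underlying crime rates, a small initial skew in arrest records, and a rule that patrols go wherever past arrests are highest. Because crimes are only recorded where officers are present, the skew compounds over time.

```python
import random

# Hypothetical illustration of a predictive-policing feedback loop.
# Both neighborhoods have the SAME true crime rate.
TRUE_CRIME_RATE = 0.10
neighborhoods = ["A", "B"]

# A slight initial skew in the historical arrest data favoring "A".
arrest_counts = {"A": 12, "B": 10}

random.seed(0)

for day in range(1000):
    # The model sends patrols wherever recorded arrests are highest.
    patrolled = max(arrest_counts, key=arrest_counts.get)

    for hood in neighborhoods:
        crime_occurred = random.random() < TRUE_CRIME_RATE
        # Crime only enters the data where officers are present to see it,
        # so the patrolled neighborhood generates ever more arrest records.
        if crime_occurred and hood == patrolled:
            arrest_counts[hood] += 1

print(arrest_counts)
# Neighborhood "A" ends up with far more recorded arrests than "B",
# even though both had identical crime rates all along.
```

The point of the sketch is that the model never measures crime directly; it measures recorded arrests, and recorded arrests depend on where the model already sent officers.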
Modern policing increasingly relies on technology, and that reliance can create significant problems. Anyone who has been arrested needs to know what legal defense options they have.