Proponents of predictive policing favor data-driven analysis. They believe that studying crime data can tell police officers where crime is likely to happen, so departments can either take steps to prevent it or at least respond quickly enough to make an arrest.
One reason predictive policing was hailed as a step forward when the technology was developed is that, supposedly, it couldn't be biased. A computer wasn't going to hold racial prejudice, for example, even though it's well documented that individual police officers sometimes do. That should have made the system fairer, but it hasn't worked out that way. Why not?
Trouble with the data itself
The core problem is that while the algorithm predicts where it thinks crime will happen, it still has to be trained on data, and that data is supplied by human police officers. Any bias in their stops, reports, and arrests can be reflected, and even amplified, by the algorithm.
Consider an officer who is biased against a certain ethnic group and arrests its members more often than anyone else. Those arrest records feed into the algorithm, which concludes that people in those specific areas are more likely to commit crimes. Police resources are then redirected to those areas, producing still more arrests that appear to confirm the original data. In reality, it is a feedback loop: biased data generates biased predictions, which in turn generate more biased data.
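To make that loop concrete, here is a minimal, purely hypothetical sketch in Python. The area names, starting arrest counts, patrol budget, and arrests-per-patrol figure are all invented for illustration; they are not drawn from any real predictive policing product or police data.

```python
# Purely hypothetical sketch of the feedback loop described above.
# Area names and all numbers are invented for illustration only.

def simulate(recorded, rounds=5, patrol_budget=100, arrests_per_patrol=0.2):
    """Each round, allocate patrols in proportion to past recorded arrests,
    then record new arrests in proportion to the patrols sent to each area.
    The underlying crime rate is assumed identical everywhere, so any gap
    in the output comes entirely from the starting data."""
    history = [dict(recorded)]
    for _ in range(rounds):
        total = sum(recorded.values())
        for area in recorded:
            patrols = patrol_budget * recorded[area] / total
            recorded[area] += round(patrols * arrests_per_patrol)
        history.append(dict(recorded))
    return history

# Area B starts with more recorded arrests only because of biased enforcement
# in the historical data, not because it actually has more crime.
for year, snapshot in enumerate(simulate({"Area A": 40, "Area B": 60})):
    print(f"year {year}: {snapshot}")
```

In this toy setup the two areas have identical underlying crime; only the starting records differ. Yet the area that begins with more recorded arrests receives more patrols every round, so the gap in recorded arrests keeps widening and the original bias never corrects itself.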
Unfortunately, this shows how difficult it is to remove bias from police work, even with tools designed to do exactly that. Anyone who believes they have been unfairly targeted, accused, or arrested should make sure they understand all of their legal defense options.