AI predicts crime a week ahead with 90 percent accuracy, but can also perpetuate racist bias

RoboCop may be getting a 21st century reboot, as an algorithm has been found to predict crime a week ahead with 90 percent accuracy.

The artificial intelligence (AI) tool predicts crime by studying patterns in time and geographic location of violent and property crimes.

Data scientists from the University of Chicago trained a computer model using publicly available data from eight major US cities.

However, this has proven controversial as the model does not take into account the systemic biases in law enforcement and its complex relationship with crime and society.

Such systems have been shown in the past to perpetuate racist prejudice in the police, which could potentially be replicated by this model.

But the researchers argue that their model can also be used to detect bias and should only be used to inform current police strategies.

For example, it has been found that socioeconomically disadvantaged areas may receive disproportionately less police attention than wealthier areas.

Violent crimes (left) and property crimes (right) reported in Chicago for the two-week period April 1-15, 2017. These incidents were used to train the computer model.

Prediction accuracy of violent (left) and property crime (right) models in Chicago. The prediction is made one week ahead, and the event is registered as a successful prediction if the crime is registered within ± one day of the predicted date.
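The success criterion described in the caption can be sketched in a few lines of code. This is a simplified illustration only, not the authors' published implementation; the event format and tile identifiers are hypothetical:

```python
from datetime import date

def hit_rate(predictions, events, tolerance_days=1):
    """Fraction of predicted (tile, date) pairs matched by a recorded
    crime in the same tile within +/- tolerance_days of the prediction."""
    hits = 0
    for tile, predicted_date in predictions:
        # A prediction succeeds if any event in the same tile falls
        # within the tolerance window around the predicted date.
        if any(e_tile == tile and
               abs((e_date - predicted_date).days) <= tolerance_days
               for e_tile, e_date in events):
            hits += 1
    return hits / len(predictions) if predictions else 0.0

predictions = [("tile_42", date(2017, 4, 8)), ("tile_7", date(2017, 4, 9))]
events = [("tile_42", date(2017, 4, 9)), ("tile_99", date(2017, 4, 9))]
print(hit_rate(predictions, events))  # one of two predictions matched -> 0.5
```

Under a scheme like this, the reported 90 percent figure would correspond to a hit rate of 0.9.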

HOW DOES THE AI WORK?

The model was trained on historical data on crime incidents in Chicago from 2014 to the end of 2016.

It then predicted the crime rate for the weeks following the training period.

It was trained on incidents that were either violent crimes or property crimes.

It takes into account the temporal and spatial coordinates of individual crimes and identifies patterns in them to predict future events.

It divides the city into spatial tiles approximately 1,000 feet across and predicts crime within those areas.

The computer model was trained on historical data on criminal incidents in the city of Chicago from 2014 to the end of 2016.

It then predicted the crime rate for the weeks following this training period.

The incidents it was trained on fell into two broad categories of events that are less susceptible to enforcement bias.

These were violent crimes such as murder, assault, and battery, as well as crimes against property, including burglary, theft, and auto theft.

These incidents are also more likely to be reported to the police, even in urban areas where there is historical mistrust of, and a lack of cooperation with, law enforcement.

The model also takes into account the temporal and spatial coordinates of individual crimes and identifies patterns in them to predict future events.

It divides the city into spatial tiles approximately 1,000 feet across and predicts crime within those areas.
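The tiling step can be illustrated with a short sketch. The roughly 1,000-foot tile width comes from the article; the planar coordinate system and the event format are assumptions made for this example:

```python
from collections import defaultdict

TILE_FEET = 1000  # approximate tile width reported in the study

def tile_id(x_feet, y_feet, tile_size=TILE_FEET):
    """Map a point in a local planar coordinate system (feet) to the
    index of the square tile that contains it."""
    return (int(x_feet // tile_size), int(y_feet // tile_size))

def bucket_events(events):
    """Group (x, y, timestamp) crime events into per-tile event streams,
    the structure the model reportedly learns temporal patterns from."""
    streams = defaultdict(list)
    for x, y, timestamp in events:
        streams[tile_id(x, y)].append(timestamp)
    return streams

events = [(150, 2300, "2017-04-01"), (800, 2900, "2017-04-02"), (1700, 300, "2017-04-03")]
streams = bucket_events(events)
print(streams[(0, 2)])  # -> ['2017-04-01', '2017-04-02']
```

The first two events fall in the same tile, so they form one event stream; the third lands in a different tile and is kept separate.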

This contrasts with previous studies, which treated areas as crime “hotspots” that spread into the surrounding neighborhoods.

Hotspots often rely on traditional neighborhood or political boundaries, which are themselves subject to bias.

Co-author Dr. James Evans said: “Spatial models ignore the natural topology of a city.

“Transport networks respect streets, footpaths, rail and bus lines, while communication networks respect areas with similar socio-economic conditions.

“Our model allows us to detect these connections.

“We are demonstrating the importance of discovering city-specific patterns in predicting reported crime, which provides a fresh perspective on neighborhoods in the city, allows us to ask new questions, and allows us to evaluate police performance in new ways.”

According to the results, published yesterday in Nature Human Behaviour, the model worked just as well on data from seven other US cities as it did in Chicago.

These were Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland and San Francisco.

A graphic showing the modeling approach of the AI tool. The city is divided into small spatial fragments, approximately 1.5 times the size of an average city block, and the model calculates patterns in successive streams of events recorded on separate fragments.

The model also showed that when crimes were committed in a more prosperous area, they attracted more police resources and resulted in more arrests than crimes in disadvantaged areas.

The researchers then used the model to study the police response to incidents in areas with different socioeconomic backgrounds.

They found that when crimes were committed in wealthier areas, they drew on more police resources and resulted in more arrests than crimes in disadvantaged areas.

This implies bias in police responses and law enforcement.

Senior author Dr. Ishanu Chattopadhyay said: “We see that when you stress the system, it needs more resources to arrest more people in response to crime in a wealthy area, and diverts police resources away from areas of lower socioeconomic status.”

The use of computer models in law enforcement has generated controversy as there is concern that it may further exacerbate existing police prejudices.

Back in 2016, the Chicago Police Department piloted an algorithm that created a list of people who were thought to be at the highest risk of being involved in a shooting as a victim or perpetrator.

The details of the results were initially kept under wraps, but it was eventually revealed that 56% of Chicago black men aged 20 to 29 were on the list.

Lawrence Sherman of the Cambridge Centre for Evidence-Based Policing told New Scientist that he is concerned the new model relies on data that is itself subject to bias.

He said: “This may reflect deliberate discrimination by the police in certain areas.”

Accuracy of model predictions for property and violent crime in major US cities. a: Atlanta, b: Philadelphia, c: San Francisco, d: Detroit, e: Los Angeles, f: Austin. All these cities demonstrate relatively high predictive performance.

However, the tool is not intended to direct police officers to areas where it predicts crime may occur; rather, it should be used to inform current police strategies and policies, according to its developers.

The data and algorithm used in the study were made public so that other researchers could investigate the results.

Dr. Chattopadhyay said: “We have created a digital twin of the urban environment. If you give it data about what happened in the past, it will tell you what will happen in the future.

“It’s not magic, there are limitations, but we’ve tested it and it works very well.

“Now you can use this as a simulation tool to see what happens if crime increases in one area of the city, or law enforcement increases in another area.

“If you apply all these different variables, you can see how the systems react to it.”

Can an AI lie detector that reads faces tell the police when suspects are lying?

Forget the old “good cop, bad cop” routine – police may soon be turning to AI systems that can reveal a suspect’s true emotions during interrogations.

Face scanning technology will be based on microexpressions, tiny involuntary facial movements that betray true feelings and even show when people are lying.

London startup Facesoft has trained an AI on the microexpressions of real people’s faces, as well as on a database of 300 million expressions.

The firm is in talks with UK and Mumbai police about possible practical applications of AI technology.