The criminal justice system continues to undergo significant transformations as technology plays an increasingly critical role in detecting, investigating, and prosecuting crimes. With further advancements, there is a growing need to leverage AI to address the challenges faced by the Indian justice system. Former Chief Justice of India, S. A. Bobde, recognised this potential when he stated:
“we have a possibility of developing Artificial Intelligence for the court system to prevent undue delays in the administration of justice.”
However, integrating AI into the criminal justice system is not without challenges. Common concerns include data privacy, algorithmic bias, and ethical implications. Therefore, the application of AI must be accompanied by appropriate regulations and ethical safeguards. As law enforcement agencies play an undeniably important role in the criminal justice system, this article explains what predictive policing is.
What is predictive policing?
Predictive policing has emerged as a powerful tool for identifying patterns in crime data. By analysing information such as time, location, and type of crimes, predictive policing can help forecast where crimes are likely to occur. This capability allows law enforcement agencies to allocate their resources strategically, including personnel deployment and camera installation in high-risk areas. The primary objective behind predictive policing is to prevent crime before it happens through targeted interventions in vulnerable areas.
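At its simplest, the forecasting described above amounts to finding which locations account for the most recorded incidents. The following is a minimal, illustrative sketch of that idea, using entirely hypothetical incident records (the grid cells, hours, and crime types are made up for illustration); real systems use far more sophisticated statistical models.

```python
from collections import Counter

# Hypothetical historical incident records: (grid_cell, hour_of_day, crime_type)
incidents = [
    ("cell_A", 22, "theft"), ("cell_A", 23, "theft"), ("cell_A", 21, "assault"),
    ("cell_B", 10, "theft"), ("cell_C", 22, "burglary"), ("cell_A", 22, "burglary"),
    ("cell_C", 23, "theft"), ("cell_B", 14, "vandalism"),
]

def rank_hotspots(records, top_n=2):
    """Rank grid cells by historical incident count (a naive 'hotspot' forecast)."""
    counts = Counter(cell for cell, _, _ in records)
    return [cell for cell, _ in counts.most_common(top_n)]

# The cells with the most recorded incidents become the "high-risk" areas
# where patrols and cameras would be concentrated.
print(rank_hotspots(incidents))
```

Note that such a ranking is only as good as the records feeding it; areas where crimes go unreported simply never surface as hotspots.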
Is this happening in India?
In 2020, the Himachal Pradesh Police deployed over 19,000 CCTV cameras to create a CCTV Surveillance Matrix. The state police planned to install over 68,000 cameras – one for every 100 people. This surveillance matrix serves as the foundation of a predictive policing strategy, facilitating a proactive approach to crime prevention rather than the usual reactive one. Before Himachal, state police agencies in Delhi, Telangana, and Jharkhand had employed predictive policing. Delhi Police, in association with ISRO, has developed the Crime Mapping, Analytics, and Predictive System (CMAPS) for predictive policing in the national capital.
While predictive policing has benefits, the use of such systems in certain jurisdictions has come under fire for perpetuating racial prejudices and increasing police presence in neighbourhoods of colour. For example, in the US, mathematicians boycotted work on predictive policing systems, citing their belief that such systems reinforce structural racism. This creates a problematic feedback loop: the more heavily a group is policed, the more likely the algorithm becomes to label a member of that group as a potential criminal, leading to further discrimination. In the UK, a study by the Centre for Data Ethics and Innovation (CDEI) found that the absence of standard guidelines leads to discrimination in police work. These instances underline that AI systems are not inherently neutral or objective. They reflect the biases present in their training data and in their designers and creators.
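The feedback loop can be made concrete with a toy simulation. In the sketch below, two hypothetical neighbourhoods have the same true crime rate, but one starts with slightly more recorded incidents; patrols follow the records, and patrol presence increases detection. The numbers and detection fractions are invented purely for illustration, not drawn from any real system.

```python
# Toy feedback-loop simulation: areas X and Y have IDENTICAL true crime rates,
# but X begins slightly over-represented in the historical records.
true_rate = 100                # actual incidents per round in EACH area
records = {"X": 55, "Y": 45}   # recorded incidents; X starts slightly higher

for round_number in range(5):
    target = max(records, key=records.get)  # patrols go to the top "hotspot"
    for area in records:
        base_detection = 0.3                # fraction detected without extra patrols
        boost = 0.4 if area == target else 0.0  # extra detection where patrols go
        records[area] += true_rate * (base_detection + boost)
    print(round_number, records)

# X is patrolled every round, so its records grow much faster than Y's,
# even though both areas experience exactly the same amount of crime.
```

Because the data only reflects detected crime, the algorithm keeps confirming its own earlier choices, which is precisely the dynamic the boycotting mathematicians objected to.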
Technological advancements will play an even more significant role for law enforcement agencies in the future. Apart from bias, another major challenge will be balancing the benefits of predictive policing systems with individuals’ privacy rights and civil liberties. India does not yet have dedicated legislation governing the use of predictive policing algorithms. In some cases, predictive algorithms can have unintended consequences, increasing police surveillance in already over-policed communities due to trends in historical data.
For a layperson, understanding the workings of predictive policing systems can be a difficult task. After a certain point, algorithmic complexity can make it incomprehensible for humans to know how the algorithms function. One school of thought holds that predictive policing is essentially government surveillance disguised as an internal security system.
Be it predictive policing or any other application of AI, a potential solution is establishing an independent regulator. This regulator can define general AI guidelines for transparent, accountable, and ethical AI systems, and collaborate with industry leaders to determine best practices for specific domains. While predictive policing has succeeded in reducing crime, the concerns around it are valid and justifiable. As technology advances, stakeholders must address these challenges holistically, ensuring that these technologies are used in a manner that aligns with ethical principles, fairness, and justice.
Manav Gupta, an undergraduate student at Jindal Global Law School, worked on an initial version of this article’s draft. With inputs from The Cyber Blog India team.
Featured Image Credits: Image by pikisuperstar on Freepik