The circuit board of a computer. Photo by Alexandre Debieve on Unsplash

May 19, 2021

Researcher explores ways to improve AI-based decision-making in justice system

Gideon Christian hopes to develop legal framework for fairness, transparency

How can a machine be racist? A computer cannot feel or show emotion, so the idea that it could judge someone based on race can be difficult to wrap your head around. Yet artificial intelligence (AI) tools used in North American criminal justice systems are making decisions based on biased data, and the result is unfair outcomes for people of particular racial groups.

Research by Dr. Gideon Christian, PhD, examines the extent to which AI tools used to determine the likelihood of reoffending reinforce implicit and explicit bias against minority groups, especially Black people, who are now disproportionately represented in the criminal justice system. Christian’s goal is to develop a legal framework that will enhance efficiency, fairness, transparency and accountability in the use of AI tools in Canada’s criminal justice systems.

Professor Gideon Christian


What are some of the risks of algorithmic bias in the criminal justice system? 

AI technologies are trained on, and rely on, big data to make predictions. Some of this data is historical, drawn from eras of mass incarceration, biased policing, and biased bail and sentencing regimes characterized by systemic discrimination against sections of society. Police practices such as stop-and-frisk, carding and street checks have been routinely criticized for disproportionately targeting young Black and Indigenous people. These practices have produced racially biased data, which in turn shapes the AI tools trained on it.
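To make that mechanism concrete, here is a minimal, purely illustrative Python sketch (not drawn from Christian's research): a classifier trained on synthetic "historical" records in which one group is over-recorded as reoffending ends up assigning that group a higher risk score even when the underlying behaviour is identical. The groups, features and rates are all invented for the example.

```python
# Illustrative sketch only: synthetic data showing how a model trained on records
# shaped by biased enforcement can reproduce that bias. All values are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = majority group, 1 = over-policed minority
true_risk = rng.random(n)            # underlying behaviour, identical across groups

# Historical "re-arrest" labels: at the same underlying risk, the over-policed
# group is recorded as re-offending more often, mimicking biased enforcement.
recorded = (rng.random(n) < true_risk * np.where(group == 1, 1.5, 1.0) * 0.5).astype(int)

X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, recorded)

# The trained model assigns a higher predicted risk to the minority group
# even when the underlying risk (0.5 here) is exactly the same.
same_risk = np.column_stack([np.full(2, 0.5), [0, 1]])
print(model.predict_proba(same_risk)[:, 1])   # predicted "re-arrest" probability by group
```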

Second, the historical data used to train risk assessment tools in the criminal justice system may be blind to recent risk-reduction and anti-discrimination legislation and policy reforms aimed at addressing the over-representation of particular segments of society in that system.

Are certain people or groups affected by these biases more than others? 

A study of COMPAS, one of the most widely used AI risk assessment tools in the U.S., revealed cases of racial bias: Black offenders who did not go on to re-offend were falsely assessed by the tool as future criminals almost twice as often as white offenders. This result illustrates the tendency of these tools to perpetuate existing discrimination, inequalities and stereotypes against minority populations who are already disproportionately represented in the criminal justice system.
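For readers who want to see what "falsely assessed almost twice as often" means in practice, the sketch below computes a false positive rate per group: the share of people who did not re-offend but were still labelled high risk. The groups, rates and data are invented for illustration; they are not the COMPAS figures.

```python
# Illustrative only: how this kind of disparity is typically measured.
import numpy as np

def false_positive_rate(labelled_high_risk, reoffended):
    """Share of non-reoffenders who were nonetheless flagged as high risk."""
    non_reoffenders = ~reoffended
    return (labelled_high_risk & non_reoffenders).sum() / non_reoffenders.sum()

rng = np.random.default_rng(1)
n = 1_000
group = rng.integers(0, 2, n)                # two hypothetical groups, 0 and 1
reoffended = rng.random(n) < 0.35            # same underlying base rate in both groups
# A biased tool flags non-reoffenders in group 1 far more often than in group 0.
flag_prob = np.where(group == 1, 0.45, 0.23)
high_risk = rng.random(n) < np.where(reoffended, 0.6, flag_prob)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(high_risk[mask], reoffended[mask]):.2f}")
```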

How can the criminal justice system avoid these biases? 

Unfortunately, the judicial officers who rely on assessments made by these tools are not involved in designing them. Worse yet, they have very limited or no knowledge of the methodology these tools use in their risk assessments. Because of the tools' tendency to perpetuate bias, judicial officers should attach appropriate weight to these algorithmic assessments and should always take other considerations into account, such as mitigating factors unique to the individual involved.

Accepting the assessments made by these tools without question would amount to abdicating judicial responsibilities to data scientists.

How can algorithmic bias be corrected? 

The starting point would be to ensure that the big data used to train AI tools adequately represents the populations or groups on which the tools will be deployed.
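As a rough sketch of what such a representativeness check might look like (an assumed approach, not a method from the research), one could compare the share of each group in the training data against its share in the population where the tool would be deployed. All group labels and proportions below are hypothetical.

```python
# Hypothetical representativeness check: training-data group shares vs deployment population.
from collections import Counter

def group_shares(groups):
    """Return each group's share of the given records."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: counts[g] / total for g in counts}

training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50     # made-up training records
deployment_shares = {"A": 0.60, "B": 0.25, "C": 0.15}        # made-up target population

for g, target in deployment_shares.items():
    actual = group_shares(training_groups).get(g, 0.0)
    print(f"group {g}: training {actual:.0%} vs population {target:.0%} (gap {actual - target:+.0%})")
```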

In the 2018 case Ewert v Canada, the Supreme Court of Canada was critical of the Correctional Service of Canada’s use of (non-AI) risk assessment tools that were developed and tested on predominantly non-Indigenous populations in assessing the risk level presented by Indigenous prisoners. 

In essence, these tools are not “one size fits all races.” They should only be deployed on populations or groups adequately represented in the data sets used to develop and train them.

These tools are not used in Canada yet. How can our justice systems prepare for them? 

Our legal system can prepare for the implementation of algorithmic risk assessment tools by developing a legal framework to mitigate the race-based bias and discrimination that can arise from their use, and to enhance fairness, transparency and accountability in how the tools are used.

Are there any benefits to using AI technology and tools in the justice system? 

Notwithstanding the biases that may arise from the use of AI tools in criminal justice risk assessment, these tools have also been hailed as representing a new era of scientifically guided sentencing, replacing arbitrary human decisions with more accurate scientific predictions. 
