Algorithmic Justice – Navigating Efficiency and Bias in the Age of AI

Published by: Anujah Muthukrishna Selvam

Source: University of Michigan - AI and the scales of justice

Imagine a legal system that navigates the onerous complexities of paperwork and administration at the speed of algorithms – a vision that might well adjourn the court as we know it today.

As we channel human innovation through the conduit of our artificial counterparts, the marriage of artificial intelligence and the legal realm emerges as a compelling and urgent discussion.

The Promises

Our urgency for efficiency
In an era where justice delayed is often tantamount to justice denied, the introduction of AI offers a promising solution. An article published by The New York Times contends that our “clever software” now has the capability to tackle the “toil of legal work — searching, reviewing and mining mountains of legal documents for nuggets of useful information” (The New York Times, 2023). With natural language processing (NLP) able to parse the relationships between the constituent parts of language – letters, words and sentences – the ability to understand, interpret and respond meaningfully to human language is no longer a skill exclusive to humanity. The taxing duty of scanning and predicting which documents will be relevant to a case can be accomplished without burdening the human eye, clearing the ground for a legal system in which human labour is freed to prioritise the tasks that demand critical analysis over the arduous donkey-work.
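To make the idea concrete, here is a minimal sketch of the kind of relevance ranking such document-review tools perform – scoring documents against a query with TF-IDF and cosine similarity via scikit-learn. The documents and query are invented for illustration, and production e-discovery systems are far more sophisticated.

```python
# A minimal sketch of NLP-based document triage: rank case documents by
# similarity to a query describing the matter at hand. The documents and
# query below are invented; real e-discovery tools go far beyond TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Lease agreement between the parties dated March 2019",
    "Email thread discussing late rental payments and notice to vacate",
    "Invoice for office supplies, unrelated to the tenancy dispute",
]
query = "notice to vacate for unpaid rent under the lease"

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(documents + [query])

# Cosine similarity of each document against the query (the last row).
scores = cosine_similarity(matrix[:-1], matrix[-1]).ravel()
for doc, score in sorted(zip(documents, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

Even this toy ranker surfaces the tenancy documents above the irrelevant invoice – the "scanning and predicting" the article describes, compressed into a similarity score.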

Source: Natural Language Processing Technology Used in Artificial Intelligence Scene of Law for Human Behavior – flowchart of an NLP-based legal retrieval system

The Pitfalls

The entrenched bias

Yet it is the very same efficiency of these algorithms that creates room for implicit bias in our justice system. Their limitations are bound by the data the machines are trained on – the computer science adage ‘garbage in, garbage out’ captures the crux of this dilemma. Machines obey the commands given by the programmer, meaning flawed data in their development only ensures flawed output (West Virginia University, 2023). Most algorithms are built with good intentions, but questions have emerged regarding algorithmic bias in several domains, including employment search websites, credit reporting bureaux, social media platforms and even the criminal justice system, where sentencing and parole decisions appear to be prejudiced against African Americans (Stevenson & Doleac, 2023).
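‘Garbage in, garbage out’ can be demonstrated in a few lines. In the sketch below (all data synthetic and purely illustrative), a model is trained on historical decisions that flagged one group at a lower threshold – and it dutifully reproduces that bias.

```python
# "Garbage in, garbage out" in miniature: a model trained on historically
# biased decisions faithfully reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
risk = rng.normal(0, 1, n)      # true underlying risk, same distribution for both

# Biased historical labels: group B was flagged at a lower risk threshold.
label = (risk > np.where(group == 1, -0.5, 0.5)).astype(int)

X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, label)

# The model learns to use group membership itself as a predictor.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
```

Although both groups have identical underlying risk, the trained model flags group B far more often – not because it is malicious, but because the labels it learned from were.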


Such bias is evidenced even by facial recognition AI. As always, an algorithm’s adherence to its “training” means that inputs it deems atypical or diverse are less likely to be catered for. In 2018, researchers Joy Buolamwini and Timnit Gebru found that commercial face-analysis systems classified white men with more than 99% accuracy, yet misclassified darker-skinned women at error rates approaching 35%. This output is unsurprising once it is learnt that most face-analysis programs are coded and trialled on large databases of pictures that are predominantly of white men (Buolamwini & Gebru, 2018). In turn, darker faces are not recognised accurately when facial recognition AI is deployed in real-world systems, even when used by law enforcement to scan driver’s licence photos. The bias is undoubtedly entrenched: Buolamwini herself revealed that one algorithm was more inclined to detect her face under a painted white mask than through her natural darker skin (Lee, 2020).
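The audit technique behind these findings is simple to sketch: rather than reporting a single overall accuracy figure, compute accuracy separately for each demographic subgroup. The predictions, labels and subgroup tags below are hypothetical stand-ins for a real benchmark.

```python
# Sketch of a disaggregated accuracy audit in the spirit of Gender Shades:
# a respectable overall figure can hide large gaps between subgroups.
# predictions, labels and subgroup are hypothetical stand-ins.
from collections import defaultdict

predictions = ["male", "male", "female", "male", "female", "male"]
labels      = ["male", "male", "female", "female", "female", "male"]
subgroup    = ["lighter", "lighter", "lighter", "darker", "darker", "darker"]

correct, total = defaultdict(int), defaultdict(int)
for pred, true, grp in zip(predictions, labels, subgroup):
    total[grp] += 1
    correct[grp] += int(pred == true)

print(f"overall accuracy: {sum(correct.values()) / sum(total.values()):.0%}")
for grp in total:
    print(f"{grp}: {correct[grp] / total[grp]:.0%}")
```

Even in this tiny example, an overall accuracy of 83% masks a perfect score for one subgroup and a far weaker one for the other – exactly the gap a single headline number conceals.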

Source: Los Angeles Times - Joy Buolamwini and her book “Unmasking AI”

These concerns stand to be exacerbated as machine learning and predictive analytics advance in quality. A machine’s ability to learn autonomously carves out the potential for a near future in which humans can no longer explain or comprehend its complexity. More pressingly, the true peril lies in humanity’s staunch trust in such machinery. Once computed, the output of the machine strikes the human mind as comparable to that of a calculator: flawless. Beneath the parade of glorious progress lurks a precarious complacency, in which the bias that went in may be simply neglected, and perhaps never found.

The surveillance scam

And yet, the facial recognition mishaps do not stop there. Recent technology has the potential to become intelligent enough to predict one’s behaviour. New AI software is already being used in Japan to monitor the body language of shoppers for cues that they are plotting theft. Developed by the Japanese company Vaak, the VaakEye system deploys algorithms to analyse footage from security cameras, spot suspicious body signals such as fidgeting and restlessness, and warn shop employees about potential shoplifters via an app (Judicial Commission of New South Wales, 2023).
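In outline, such a pipeline reduces to scoring behavioural signals and alerting staff above a threshold. The sketch below is purely hypothetical – Vaak’s actual models and interfaces are proprietary, so the score_frame() heuristic and notify_staff() hook are invented – but it makes the questions that follow concrete: who chooses the weights and the threshold, and on what data?

```python
# Purely hypothetical sketch of a threshold-based alert loop; the weights,
# threshold and hooks here are invented, not Vaak's actual system.
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    fidget_level: float    # 0.0-1.0, from a (hypothetical) pose model
    loiter_seconds: float

SUSPICION_THRESHOLD = 0.8  # who sets this number, and on what data?

def score_frame(frame: Frame) -> float:
    """Toy suspicion score: the weights are arbitrary, which is the point --
    a nervous shopper and a thief can produce identical signals."""
    loiter = min(frame.loiter_seconds / 300, 1.0)
    return min(1.0, 0.6 * frame.fidget_level + 0.4 * loiter)

def notify_staff(camera_id: str, score: float) -> None:
    print(f"ALERT camera {camera_id}: suspicion {score:.2f}")

for frame in [Frame("cam-3", 0.9, 240), Frame("cam-7", 0.2, 30)]:
    score = score_frame(frame)
    if score >= SUSPICION_THRESHOLD:
        notify_staff(frame.camera_id, score)
```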

Source: New Atlas – The VaakEye AI theft-detection system

Indeed, employing AI in so invasive a manner opens an entirely new can of ethical worms. How can we ensure that the AI is trained in a fair and equitable manner? What distinguishes malice from the cues of ordinary anxiety? What divides the fidgeting of a thief from the fidgeting of a person with a medical condition? As in all such cases, the developers ultimately steer what the algorithm deems “suspicious”, and its accusations may well prove discriminatory.

In many common law jurisdictions, police typically obtain arrest warrants based on the "reasonable grounds" standard. However, if an AI-equipped camera identifies an individual as a potential criminal, does this meet the threshold of "reasonable grounds"?

Contemplate the intricate role of technology as admissible evidence in our courtrooms. Is it too prejudicial to present findings from AI software that designates an accused as a criminal to the fact-finder? Alternatively, consider a scenario where prosecutors leverage this technology within trial proceedings to corroborate their assertions. A plausible closing statement under this system might read: “Considering the weight of eyewitness testimonies alongside the AI’s determination – a confident 80% likelihood of the accused’s involvement – can we, in good conscience, reach any conclusion other than guilt?”
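That “80% likelihood” deserves a base-rate caution. A short Bayes’ theorem calculation (all figures invented for illustration) shows how a confident-sounding flag can still leave guilt improbable once we account for how rarely any given person scanned is actually the offender.

```python
# Worked Bayes' theorem example (all numbers invented): even a system that
# sounds "80% confident" can be wrong most of the time once base rates are
# taken into account.
def posterior(prior, true_positive_rate, false_positive_rate):
    """P(guilty | flagged) via Bayes' theorem."""
    p_flag = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_flag

# Suppose 1 in 1000 people scanned is actually the offender, and the AI
# flags 80% of true offenders but also 5% of innocent people.
print(f"{posterior(0.001, 0.80, 0.05):.1%}")   # ~1.6% -- far from proof
```

Applied across a large population of innocent people, such a flag is dominated by false positives – precisely the prejudice the fact-finder would be asked to weigh.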

Our unquenchable thirst for progress…

In the last several years, AI innovation and development have proliferated at an extraordinary pace. Indeed, it is unsurprising that the COVID-19 crisis was the ultimate agent of this acceleration. Since the pandemic shocked the world, 86% of companies surveyed in 2021 considered AI a “mainstream technology” at their firm, while in 2020, 55% had reported accelerating their AI strategy in response to the pandemic (Harvard Business Review, 2021). The global turbulence forced firms to adopt cheaper methods of operating if they wished to survive the repercussive economic downturn, and many of those “cheaper methods” entailed AI. AI became an innate part of company strategy, prized for its diligence and financial feasibility. Ironically, for all its cost-effectiveness, these systems arguably exacted an even weightier expense, costing many workers their jobs. Perhaps this was forgivable while firms were floundering to salvage their losses, but what is our excuse now?

The truth is, there isn’t one. What may sprout as a simple means of “easing the process” will, as history testifies, succumb to its ultimate superior: progress. In this ongoing quest for quick justice, fairness and impartiality fall by the wayside, with lines of code compiled to pronounce one guilty before they ever stand trial. Under the eyes of future law, we may not all be equal.

Source: Getty Images – Algorithmic Bias vs Equality

This article is published by CCA, a student association affiliated with Monash University. Opinions published are not necessarily those of the publishers. CCA and Monash University do not accept any responsibility for the accuracy of information contained in the publication.

CCA