The AI-Justice Paradox: AI Has Been Partially Banned In The NSW Legal System
Will law be the last industry standing when it comes to resisting AI transformation?
Hello dear reader,
There are many different conversations happening about generative AI at the moment — some more productive than others.
Undoubtedly, one of the most prominent is: as a tsunami of transformation sweeps through every industry, will any sector be left untouched?
But we may have found the final frontier where AI just won't fly. Well, at least if NSW Chief Justice Andrew Bell has anything to do with it.
Bell has imposed a partial ban on the use of generative AI by lawyers, unrepresented litigants, and judges in NSW. His practice note prohibits using the technology to generate the content of documents such as witness statements, affidavits, and character references.
Explaining his reasoning, Bell told the ABC: "The task of judging in our society is a human one. Frankly, the power of those who control big tech and the ability to influence what underlying data is and is not included in the system causes me to think that we should be cautious."
His concerns aren't unfounded: they come off the back of several cases in which time-pressed lawyers have been caught using AI to automate parts of their jobs.
Last year, one Australian lawyer found himself with a back injury and short on time while working on an immigration case. He used ChatGPT to help him create a case summary, which he then presented in court… only to be called back two weeks later because the immigration minister had found 17 cases cited in the document that didn't exist. Talk about a bad day at work.
Meanwhile, in the US, perhaps the best-known case is Mata v. Avianca, in which the lawyers submitted a brief containing fabricated extracts and citations, and were fired, fined, and publicly scrutinised.
The approach in NSW isn't entirely hardline, though. Beyond the ban on AI-generated evidence, Bell is simply requesting that legal professionals refrain from using ChatGPT in their research, operating on a trust system rather than policing compliance.
It raises the question: if the concept of justice is inherently a human construct, can we put any part of the process in the hands of non-human intelligence? But what if bringing in AI makes the system better, fairer, and more just for everyone? Would it be OK then?
As Bell said himself:
"I think if that were to be abdicated to machines, a very important part of our democratic fabric would be lost."
As a cognitive neuroscientist who studies how our brains process information and make decisions, I find this reaction both predictable and problematic.
The Hungry Judge Effect (And Other Human Glitches)
Here's the cognitive elephant in the courtroom: human judges are magnificent biological machines running on wetware that's influenced by everything from blood sugar levels to the performance of their favourite sports team.
The data is striking. As lunchtime approaches, judges grant fewer paroles. On hotter days, sentences grow harsher. When local teams lose, penalties mysteriously increase. Our legal system operates at the mercy of neurotransmitters, hormones, and unconscious biases that no amount of judicial training can completely override.
Male judges tend toward harsher punishments. Racial biases creep in. How you appear in court, your perceived education level, your socioeconomic markers – all trigger a cascade of effects in judicial neural pathways that influence outcomes in ways we'd rather not acknowledge.
The Self-Driving Car Paradox
This ban reflects the same cognitive bias we see with self-driving vehicles. When an AI makes a mistake, it feels preventable – as though a human would have done better – even when the data shows otherwise. We fixate on AI hallucinations while ignoring our own perceptual and cognitive hallucinations that happen constantly.
Augmented Justice: The Path Forward
The future isn't AI replacing judges. It's augmentation: human judgment supported by AI assistance that compensates for our neurological quirks. The combination could cancel out the weaknesses of both systems, creating something fairer than either alone.
I predict that within a decade, practising law, medicine, or other high-stakes professions without AI assistance will be considered negligent, perhaps even illegal.
By closing the door to technological assistance now, we're not protecting judicial wisdom – we're preserving judicial fallibility.
For our legal system to truly serve justice, it must acknowledge that humans in black robes are still humans, with all the neural baggage that entails.
Mental meanderings
Do you think generative AI can have any role in court? What might this look like?
What industries leave you most concerned about the proliferation of AI?
Let us know in the comments below!
I used ChatGPT-4.5 to fact-check the claims made about judicial bias correlating with things such as time of day, weather, and sports team performance. What I learned is that, at best, the assertions you make come from analyses of small studies and/or studies with questionable methods, and there is a great deal of research that does not support the general assertion.
I expect bias in human judges does exist. The stated examples do not demonstrate this clearly.
https://chatgpt.com/share/67ecb99d-f3fc-8011-9548-5d0d7f626b0d
AI worries me because it removes vital elements of being human. If generative AI can write everything I ever need, why would I make any effort to do it myself? If generative AI answers every question (badly, might I add - or through plagiarism, such a good baseline for a research engine) why bother looking for information myself? The problem is that the human brain needs interaction to learn. Getting AI to write you cliff notes (or do your work for you) reduces your ability to use any of that information later.
You say that humans are fallible - won't deny it, but you can track those insecurities and biases and mistakes without too much trouble. We've been doing it for millennia. I'm just not willing to lose brain capacity so a program can make my life "easier". YMMV.