When Artificial 'Intelligence' invents Artificial Cases - how to navigate AI use in civil law proceedings?
For those who use ChatGPT as regularly as I do (it has recently advised me on everything from the best replacement washing machine, to how to make the most of the season's asparagus, to why an 80s film I recently watched with my daughter felt so unexpectedly culturally dated, and everything in between) its frequent fallibility may have become clear. Indeed, on technical queries I have taken to asking “Are you sure?” as a routine follow-up. As it heaps praise on my astute observation and lawyerly attention to detail, it will correct itself, and will do so in tones equally obsequious and apologetic. For AI, it seems, has now become a chronically sycophantic people-pleaser.
So what happens when you ask a fallible, but extremely imaginative, people-pleasing AI to research cases in support of your niche legal argument? The answer is cases, complete with case references from the High Court and Court of Appeal, that simply don't exist. And, in the now infamous case of Ayinde, R (on the application of) v The London Borough of Haringey [2025] EWHC 1040 (Admin), five of those imaginary cases were actually cited in proceedings.
It seems that the lawyer in question was a young pupil barrister without access to proper research tools, but unfortunately the problem was compounded when copies of the judgments in the cited cases were requested. Whilst an apology was given, no proper explanation was offered and the severity of the matter was brushed off, with the citation of non-existent cases described as “cosmetic errors”.
But I digress. The point is this: whilst such professional misconduct is, one hopes, neither likely to be widespread nor easily repeated, the ubiquity and exponential growth of AI use make the question unavoidable. How should AI be used in proceedings?
As with everything in life, it seems plain that, as much as AI offers on the upside, it also carries risks on the downside. The rate and scale of change also present unique issues. I have recently been listening to Geoffrey Hinton, the Nobel Laureate dubbed “the Godfather of AI”, discuss the upsides and downsides of AI on a global scale, and in that context the potential downsides he mentions pose a panoply of existential threats to humanity. Happily, this article concerns itself only with the rather more quotidian matter of the rise of AI in the context of legal proceedings, with correspondingly less potential for the annihilation of our species, one hopes.
The recent announcement that the Civil Justice Council has established a new working group to examine the use of AI by legal representatives for preparing court documents is therefore to be warmly welcomed. A consultation paper and subsequent final report will look at rules to govern the use of AI by lawyers when preparing court documents. This will take into account pleadings, witness statements and expert reports.
It is interesting to note that the Chair of the working group, Lord Justice Birss, was, in 2023, the first British judge openly to report having used ChatGPT to help write a judgment, describing it as “jolly useful”. Indeed it is. But it is not without risk. We clearly can't “beat it [AI]”, so no doubt we will all “join it”, and guidelines on how to do so safely, fairly and reliably in this important context are essential in so many respects.
One might also, in a meta-sense, wonder how much of the final report might be a product of AI itself. Or, indeed, how much of this article….?
(As for why the 80s film felt so culturally dated, the (paraphrased) answer is apparently because I am old. My asparagus were delicious. And my new washing machine arrives on Thursday.)