Phantom Justice: The Problem of Ghost Precedents

Shubh Ashish Singh | Law

Artificial Intelligence (AI) has transformed the legal field, from how academics conduct research to how courts analyse, interpret, and render judgments. The efficiency that AI brings to the legal domain, however, comes with parallel risks: fabricated, biased, and factually incorrect content. Ghost precedents, for instance, are judgments generated entirely by an AI system that have no real existence. The use of fake, inaccurate, or non-existent authorities in court documents is becoming worryingly common, though it should never be. These AI-generated fake precedents undermine the substance, foundations, and workings of the legal system, and with them the very integrity of judicial institutions.

How do they exist?

Large Language Models (LLMs) often hallucinate: in trying to predict the most plausible response to a user's input, they produce content that reads convincingly but has no basis in fact. They do this to fill gaps in their knowledge and to satisfy the apparent demands of a prompt, much as humans sometimes do. While such hallucination is often framed as a by-product of machine creativity and imagination, it is far less welcome in the legal domain. It may stem from misinformation in an AI system's training data or from flaws in the model's design and algorithms.
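
To make the mechanism concrete, the sketch below is a deliberately tiny, purely illustrative next-word generator; the transition table, probabilities, and function names are invented for this example and bear no resemblance to how a real LLM is built or trained. The point is only that such a system always produces a fluent continuation and has no built-in way of checking whether the case it names actually exists.

```python
import random

# Toy "language model": it only knows which words tend to follow which words,
# not whether the sentence it builds is true. All entries and probabilities
# below are invented purely for illustration.
TOY_MODEL = {
    "the":      [("court", 0.6), ("tribunal", 0.4)],
    "court":    [("held", 1.0)],
    "tribunal": [("held", 1.0)],
    "held":     [("in", 1.0)],
    "in":       [("Mata", 0.5), ("Varghese", 0.5)],    # both look like case names
    "Mata":     [("v.", 1.0)],
    "Varghese": [("v.", 1.0)],
    "v.":       [("Avianca", 0.5), ("China Southern", 0.5)],
}

def generate(start: str, max_words: int = 6) -> str:
    """Keep picking a likely next word; nothing here verifies a citation."""
    words = [start]
    for _ in range(max_words):
        options = TOY_MODEL.get(words[-1])
        if not options:
            break
        tokens, weights = zip(*options)
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the court held in Varghese v. China Southern"
# -- perfectly fluent, yet the citation may be entirely fictitious.
```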

In 2023, New York-based lawyers filed a brief in Mata v. Avianca that quoted fabricated extracts and cited non-existent cases produced by an AI chatbot. Similarly, in another matter, Donald Trump's former personal lawyer forwarded AI-generated ghost precedents to his own attorney, who then relied on them in a court filing.

Determining liability for ghost precedents in India

Determining liability for ghost precedents could easily descend into a blame game, with each stakeholder pointing to the next on the list. Actual liability, however, can only be fixed by identifying who owns or generates the offending content. Determining ownership of AI-generated work is therefore essential to making sense of the blame game; yet ownership of such work, regardless of the extent of human involvement, remains a much-debated issue.

In India, Section 2(d)(vi) of the Copyright Act, 1957 defines the author of a computer-generated work as the person who causes the work to be created. On a plain reading, this would cover the person who writes the prompt, the person who supplies the AI's training data, or even the developers of the AI system, while excluding the AI itself from liability. Section 21(f) of Ireland's Copyright and Related Rights Act 2000 and Section 9(3) of the UK's Copyright, Designs and Patents Act 1988 take a similar position. Until accountability for and ownership of AI platforms and their output are uniformly settled, the humans involved in the process retain ownership of, and responsibility for, that output.

Liability for using ghost precedents

Liability for using ghost precedents turns on the intent and purpose behind their use. For example, a student who cites a ghost precedent in a college assignment faces very different consequences from a lawyer who relies on one in a court filing.

A lawyer who knowingly uses a ghost precedent with the evident aim of misleading the court may face disciplinary action and even contempt of court proceedings. Such use not only increases the risk of the offending filing being struck off or the matter dismissed; it also constitutes professional misconduct. Whether such conduct could further amount to forgery remains to be seen.

From an academic research perspective, using ghost precedents can severely undermine a researcher's credibility and invite allegations of academic dishonesty. Academic penalties such as reduced marks or grades, rejection of research publications and, in the worst case, suspension may follow.

Addressing the issue

In one case involving ghost precedents, the US District Court for the Eastern District of Texas imposed a $2,000 fine on a lawyer and required him to complete a course on generative AI. More often than not, legal professionals, like any other users, assume that every output generated by an AI platform is accurate and factually correct. The following measures could help address the problem of ghost precedents in the legal domain:

  • Mandatory cross-verification of citations against official legal databases before any document is filed with a court of law (a minimal sketch of such a check follows this list)
  • Disclosures on the use of AI in preparing legal documents
  • Promoting AI literacy for all the stakeholders involved
  • Encouraging anonymous reporting of AI-based misinformation
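
As a minimal sketch of the first measure above, and assuming a hypothetical in-memory index in place of an official database (a real workflow would query a source such as the eCourts portal, Indian Kanoon, or a commercial citator), the snippet below pulls rough "X v. Y"-style case names out of a draft and flags any that cannot be matched:

```python
import re

# Hypothetical verified index; a real check would query an official database,
# not a hard-coded set. Entries are stored in lowercase for simple matching.
VERIFIED_CITATIONS = {
    "mata v. avianca",
    "donoghue v. stevenson",
}

# Very rough pattern for "X v. Y"-style case names. Real citations (neutral
# citations, reporter references, abbreviations) would need stricter parsing.
CASE_PATTERN = re.compile(
    r"\b[A-Z][\w'.]*(?: [A-Z][\w'.]*)* v\. [A-Z][\w'.]*(?: [A-Z][\w'.]*)*"
)

def flag_unverified(draft_text: str) -> list[str]:
    """Return every case name in the draft that is absent from the verified index."""
    found = [m.group(0) for m in CASE_PATTERN.finditer(draft_text)]
    return [name for name in found if name.lower() not in VERIFIED_CITATIONS]

draft = "As held in Varghese v. China Southern Airlines, the limitation period is tolled."
for citation in flag_unverified(draft):
    print(f"UNVERIFIED: {citation} -- confirm against an official source before filing")
```

A check of this kind, pointed at a real database rather than a toy set, is what the first bullet contemplates: verification happens against an authoritative source, never against the AI's own output.
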
Conclusion

The integration of AI into the legal domain is now inevitable. LLMs have revolutionised research, drafting, and analysis, but the emergence of ghost precedents raises serious ethical concerns. They are not merely a glitch in the matrix but a direct challenge to a legal professional's credibility. However useful LLMs are for research and data analysis, human oversight is required whenever AI-generated content makes its way into official documents. What is needed is a responsible, balanced approach to integrating AI, rather than wholesale adoption or outright rejection, so that the legal community embraces it with due caution.