
Delhi HC Permits Plea Withdrawal After Counsel Caught Citing Fake ChatGPT-Generated Case Laws
In a striking incident that reflects the teething problems of artificial intelligence in professional practice, the Delhi High Court saw a lawyer request to withdraw a petition after it emerged that the legal precedents cited in it were entirely fictitious. The attorney admitted that the non-existent case laws had been generated by the popular AI chatbot ChatGPT. The episode drew a sharp response from the court and has sparked a serious debate on the ethics of using AI in legal practice.
The hearing took a dramatic turn when the court began examining the legal authorities presented by the petitioner's counsel. The judge's scrutiny revealed that the citations, which are supposed to form the foundation of any legal argument, were completely false. This embarrassing and serious professional failure became a live example of the dangers of relying too heavily on generative AI without proper verification.
A Misstep in the High Court
The incident occurred during a routine hearing in the courtroom of Justice Prathiba M. Singh. The petitioner's lawyer had submitted arguments supported by multiple references to previous court decisions. Citing precedent is a standard and necessary practice in law: it allows judges to rule consistently by relying on how similar cases were decided before, and it is the basis on which legal arguments are constructed.
But when the judge and the opposing counsel reviewed these citations, they could find no record of the cases. The party names, the dates, and the journals in which the rulings were allegedly published simply did not exist. When the court questioned the lawyer about this grave error, he admitted that he had used ChatGPT for his research and had embedded the AI-generated case laws in his plea without verifying their authenticity.
Confronted with this serious professional lapse, the lawyer had no option but to seek the court's permission to withdraw the entire petition. The confession brought the proceedings to a halt, and the focus shifted from the merits of the case to the lawyer's conduct. It left the judiciary to address a problem that is rapidly gaining traction in the digital era: the submission of AI-fabricated material in formal legal proceedings.
The Dangers of AI ‘Hallucinations’
The event is a textbook example of a phenomenon known as AI 'hallucination'. ChatGPT and other large language models are built to predict and generate human-like text based on patterns in the enormous body of data on which they were trained. Powerful as they are, they have no genuine knowledge or verified store of facts.
Asked for legal citations, ChatGPT will produce text that looks and reads like a genuine court ruling, because it has absorbed the style of millions of real legal documents. It will invent believable party names, dates, and legal reasoning. But it fabricates these details on the fly, combining and recombining fragments of its training data into a persuasive yet entirely false document. The model is designed to give a coherent response, not necessarily a truthful one.
This propensity to present fabricated information with complete confidence is the greatest trap in using such AI tools for serious research. In a field like law, where accuracy and verifiable truth are paramount, relying on unverified AI output is perilous. It is tantamount to citing a book that was never written, and it can mislead the court, waste its time, and compromise the integrity of the entire judicial process.
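To make that verification duty concrete, the sketch below (in Python, with entirely hypothetical case names and a hard-coded stand-in for a legal database) illustrates the kind of basic pre-filing check that would have caught the problem here: any citation that cannot be matched to a verified record is flagged for manual review.

```python
# Minimal illustrative sketch (all case names and records are hypothetical):
# every citation in a draft plea is checked against a set of verified
# records before filing. In real practice the lookup would query a trusted
# legal database rather than a hard-coded set.

VERIFIED_CITATIONS = {
    "A v. B, (2019) 4 SCC 123",  # stand-in for a record confirmed to exist
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return every cited case with no matching verified record."""
    return [c for c in citations if c not in VERIFIED_CITATIONS]

draft_citations = [
    "A v. B, (2019) 4 SCC 123",   # matches a verified record: passes
    "X v. Y, (2022) 9 SCC 999",   # no record anywhere: gets flagged
]

for suspect in flag_unverified(draft_citations):
    print(f"WARNING: no verified record found for: {suspect}")
```

A real workflow would query an authoritative source such as an official court repository rather than a local set, but the principle is the same: no citation goes into a filing until a verified record of it has been found.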
The Court’s Stern Rebuke
Justice Prathiba M. Singh treated the matter with utmost seriousness, expressing deep concern over this emerging and disturbing trend. The court observed that submitting bogus citations is a grave offense, as it amounts to deceiving the court and attempting to bend the course of justice. Although the judge allowed the withdrawal of the plea, she issued a stern warning over the lawyer's failure of diligence.
The court insisted that AI may be a helpful tool, but it cannot substitute for the hard work and due diligence expected of a legal professional. Every lawyer has a basic obligation to verify the information presented in court. Relying blindly on a machine, without cross-checking against reliable legal databases, is a dereliction of that duty. The judge was categorical that such negligence would not be tolerated in future.
For such conduct, a court could initiate contempt proceedings or refer the matter to the bar council for disciplinary action. Courts have, in many such instances, also imposed heavy costs on lawyers or litigants who waste judicial time. The episode sent a strong message to the entire legal fraternity about the professional and ethical accountability that must be upheld, irrespective of the technology involved.
A Wake-Up Call for the Legal Profession
The incident in the Delhi High Court is an important wake-up call for lawyers, law students, and legal institutions in India and around the world. AI is developing at a remarkable pace, which is both exciting and challenging. While AI can help draft documents and speed up research, this episode demonstrates that it must be used with great care and firm oversight.
This new reality must now find its way into legal education and professional training. Bar associations and law schools should develop clear guidelines and ethics rules on the use of AI in legal practice. Lawyers need to be trained in the limitations of these technologies, especially the risk of hallucinations, and taught how to properly verify AI-generated information through traditional, dependable methods of legal research.