AI Blunder Costs California Attorney $10,000 Fine for Fabricated Legal Citations

Published 2 months ago · 5 minute read
Uche Emeka

A California attorney, Amir Mostafavi, has been ordered to pay a $10,000 fine for filing a state court appeal that contained numerous fake quotations generated by the artificial intelligence tool ChatGPT. This penalty represents what appears to be the largest fine issued by a California court for AI fabrications, with the court's blistering opinion noting that 21 of the 23 case quotes cited in Mostafavi’s opening brief were entirely fabricated. The court also highlighted that many out-of-state and federal courts have dealt with similar issues involving attorneys citing fake legal authority.

In its September 12 opinion, California’s 2nd District Court of Appeal issued a stern warning, stating: “Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations — whether provided by generative AI or any other source — that the attorney responsible for submitting the pleading has not personally read and verified.” This ruling exemplifies the urgency with which California’s legal authorities are moving to regulate AI use in the judiciary. The state’s Judicial Council recently mandated that judges and court staff either prohibit generative AI or implement a usage policy by December 15. Concurrently, the California Bar Association is considering strengthening its code of conduct to address various forms of AI, following a request from the California Supreme Court.

Mostafavi informed the court that he did not personally read the text generated by the AI model before submitting the appeal in July 2023. He explained to CalMatters that he used ChatGPT to try to improve an appeal he had already written, claiming he was unaware it would add case citations or invent information. Despite OpenAI marketing ChatGPT as capable of passing the bar exam, a three-judge panel fined Mostafavi for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court’s time and taxpayer money. Mostafavi believes it is unrealistic to expect lawyers to stop using AI, likening the shift to the move from law libraries to online databases, but he advises caution until AI systems stop hallucinating fake information. He lamented, “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages.”

The $10,000 fine is considered the most costly penalty issued to an attorney by a California state court and among the highest ever for AI use by lawyers, according to Damien Charlotin, an expert tracking such instances globally. Charlotin noted a widely publicized case in May where a U.S. District Court judge in California ordered two law firms to pay $31,100 for costs associated with “bogus AI-generated research,” emphasizing the need for “strong deterrence.” He predicts an exponential rise in these cases, having observed an increase from a few cases per month to several daily since he began tracking earlier this year.

Large language models are prone to confidently stating falsehoods as facts, particularly when supporting information is lacking. Charlotin explains that “the harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you,” illustrating how a model’s tendency to agree with the user can distort its output. A May 2024 analysis by Stanford University’s RegLab indicated that while three out of four lawyers intend to use generative AI in their practice, some AI tools generate hallucinations in one out of every three queries. Detecting fake material in legal filings is expected to become more challenging as AI models grow in size and sophistication.

Another tracking project identifies 52 cases in California and over 600 nationwide where lawyers have cited nonexistent legal authority due to AI use. Nicholas Sanctis, a law student, attributes this expected increase to AI innovation outpacing attorney education. Jenny Wondracek, who leads this tracking project, anticipates the trend will worsen, citing that many lawyers remain unaware that AI invents information or mistakenly believe legal tech tools can eliminate all fake content. She suggests that a fundamental understanding of the technology's limitations could significantly reduce such incidents.

Wondracek suspects that more instances of AI-generated fake cases occur in state court filings than in federal courts, though a lack of standardized filing methods makes verification difficult. She frequently encounters fake cases among overburdened attorneys or individuals representing themselves in family court. While the number of arguments filed by attorneys using AI and citing fake cases is expected to rise, Wondracek has also documented three recent instances of judges themselves citing fake legal authority in their decisions, indicating a broader systemic issue.

As California grapples with how to address generative AI and fake case citations, Wondracek suggests considering approaches adopted by other states, such as temporary suspensions, mandatory courses on ethical AI use, or requiring implicated attorneys to teach law students how to avoid similar mistakes. Mark McKenna, co-director of the UCLA Institute for Technology, Law & Policy, lauded fines like Mostafavi’s as essential for punishing “an abdication of your responsibility as a party representing someone.”

Both McKenna and UCLA School of Law professor Andrew Selbst believe the problem will intensify before it improves. Selbst points to the pressure on recent law school graduates working as clerks, and on current students, to adopt AI, driving widespread adoption in firms and schools without adequate consideration of the consequences. He concluded, “This is getting shoved down all our throats... and we have not yet grappled with the consequences of that.”
