It has happened in numerous judicial districts across the U.S., and even in foreign courts: non-existent legal citations and precedents, along with fabricated quotes, all generated by AI, filed by attorneys on behalf of clients, and slipped into the legal record before bewildered judges plucked out the hallucinated content.

It’s AI, to be sure — but in this context, it might just as well stand for Adjudication Inaccuracy.

A growing database called “AI Hallucinations Cases” is tracking such incidents, documenting occasions in which generative AI produced hallucinated material, “typically fake citations, but also other types of AI-generated arguments.” The goal of calling out lawyers who appear to rely too heavily on AI for their legal drafting is to draw attention to the problem so that it stops.

“While seeking to be exhaustive (368 cases identified so far), it is a work in progress and will expand as new examples emerge,” states Damien Charlotin, a lawyer and academic who holds a doctorate in law from the University of Cambridge. He started the database in April 2025 as part of a course he taught on AI and the legal profession.

“I wanted to inform my students how prevalent the problem was, if at all, and could not find data, so I opted to do it myself,” Mr. Charlotin said in an interview with Techstrong. “This coincided with an explosion in the number of cases.”

Los Angeles-based attorney Robert Freund has spotted some cases and added them to Mr. Charlotin’s database. In one case Mr. Freund found, a lawyer cited a supposed 1985 decision — Brasher v. Stewart — that never existed. A judge admonished the lawyer and ordered him to complete six hours of AI training.

When ChatGPT was introduced in 2022, it didn’t take long for the legal profession to notice its potential — and its pitfalls. By 2023, the technology was already at the center of court sanctions. That year, U.S. District Judge P. Kevin Castel, of the Southern District of New York, imposed monetary penalties and sanctions against a law firm for including fake quotes and citations in its court filings.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Judge Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C., abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.

“Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

The database includes cases from Texas, Colorado, Washington, Ohio, Florida, Missouri, and beyond, including one from the U.S. Court of Federal Claims. Each case is classified by the type of hallucination: fabricated material, false quotes, misrepresentations, or outdated advice.

In most instances, the attorney or firm receives a warning. In others, judges impose sanctions and fines, usually ranging from a few hundred to several thousand dollars. The database also identifies which AI tools were used: ChatGPT, Copilot, Google AI, and Gemini, among others.

One recent addition to the list is a judgment dated October 22, 2025, from the U.S. District Court for the Eastern District of Oklahoma. In Mattox v. Product Innovation Research, the court found 28 false or misleading citations across 11 pleadings filed by the plaintiff’s counsel.

U.S. Magistrate Judge Jason A. Robertson made clear that his ruling wasn’t about the technology, but about trust.

“Justice is built on language, and language draws its power from the hearts and minds that create it,” Judge Robertson said. “Words alone are empty until filled with human conviction. The same is true of every pleading filed before this Court. Generative technology can produce words, but it cannot give them belief. It cannot attach courage, sincerity, truth, or responsibility to what it writes. That remains the sacred duty of the lawyer who signs the page. Across eleven pleadings, that duty was forgotten.”