An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz’s own AI tool, Claude, to format references.
The incident reinforces what’s becoming a pattern in legal tech: while AI models can be fine-tuned, the people using them keep failing to verify the chatbot’s output, despite the consequences.
The flawed citations, or “hallucinations,” appeared in an April 30, 2025 declaration [PDF] from Anthropic data scientist Olivia Chen in a copyright lawsuit music publishers filed in October 2023.
But Chen was not responsible for introducing the errors, which appeared in footnotes 2 and 3.
Ivana Dukanovic, an attorney with Latham & Watkins, the firm defending Anthropic, stated that after a colleague located a supporting source for Chen’s testimony via Google search, she used Anthropic’s Claude model to generate a formatted legal citation. Chen and defense lawyers failed to catch the errors in subsequent proofreading.
“After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article,” explained Dukanovic in her May 15, 2025 declaration [PDF].
“Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.
“Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.”
But Dukanovic pushed back against the suggestion from the plaintiffs’ legal team that Chen’s declaration was false.
“This was an embarrassing and unintentional mistake,” she said in her filing with the court. “The article in question genuinely exists, was reviewed by Ms. Chen and supports her opinion on the proper margin of error to use for sampling. The insinuation that Ms. Chen’s opinion was influenced by false or fabricated information is thus incorrect. As is the insinuation that Ms. Chen lacks support for her opinion.”
Dukanovic said Latham & Watkins has implemented procedures “to ensure that this does not occur again.”
AI model hallucinations keep showing up in court filings.
Last week, in a claim against insurance firm State Farm (Jacquelyn “Jackie” Lacey v. State Farm General Insurance Company et al), former Judge Michael R. Wilner, the Special Master appointed to handle the dispute, sanctioned [PDF] the plaintiff’s attorneys for misleading him with AI-generated text. He directed the plaintiff’s legal team to pay more than $30,000 in court costs that they otherwise wouldn’t have had to bear.
After reviewing a supplemental brief filed by the plaintiffs, Wilner found that “approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way.”
Two of the citations, he said, do not exist, and several cited phony judicial opinions.
“The lawyers’ declarations ultimately made clear that the source of this problem was the inappropriate use of, and reliance on, AI tools,” Wilner wrote in his order.
Wilner’s analysis of the misstep is scathing. “I conclude that the lawyers involved in filing the Original and Revised Briefs collectively acted in a manner that was tantamount to bad faith,” he wrote. “The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong.
“Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology – particularly without any attempt to verify the accuracy of that material. And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way.”
According to Wilner, courts are increasingly called upon to evaluate “the conduct of lawyers and pro se litigants [representing themselves] who improperly use AI in submissions to judges.”
That’s evident in cases like Mata v. Avianca, Inc., United States v. Hayes, and United States v. Cohen.
In another case involving Minnesota Attorney General Keith Ellison, Kohls et al v. Ellison et al, the judge tossed expert testimony [PDF] after learning that the expert’s submission to the court contained AI falsehoods.
And when AI gets it wrong, things generally don’t go well for the lawyers involved. Attorneys from law firm Morgan & Morgan were sanctioned [PDF] in February after a Wyoming federal judge found they submitted a filing containing multiple fictitious case citations generated by the firm’s in-house AI tool.
In his sanctions order, US District Judge Kelly Rankin made clear that attorneys are accountable if they submit documents with AI-generated errors.
“An attorney who signs a document certifies they made a reasonable inquiry into the existing law,” he wrote. “While technology continues to change, this requirement remains the same.”
One law prof believes that fines won’t be enough – lawyers who abuse AI should be disciplined personally.
“The quickest way to deter lawyers from failing to cite check their filings is for state bars to make the submission of hallucinated citations in court pleadings, submitted without cite checking by the lawyers, grounds for disciplinary action, including potential suspension of bar licenses,” said Edward Lee, a professor of law at Santa Clara University. “The courts’ monetary sanctions alone will not likely stem this practice.” ®