US court dismisses defamation case against OpenAI over inaccurate ChatGPT response

A radio host had alleged that ChatGPT defamed him by falsely stating that he was a plaintiff in a 2023 federal lawsuit.

A United States court recently dismissed a defamation suit against artificial intelligence company OpenAI, ruling that the tech firm cannot be held liable for erroneous text generated by its chatbot, ChatGPT.

In an order issued on May 19, 2025, Judge Tracie Cason of the Superior Court of Gwinnett County, Georgia granted summary judgment in favour of OpenAI LLC.

Radio host and gun rights advocate Mark Walters had alleged that ChatGPT defamed him by falsely stating that he was a plaintiff in a 2023 federal lawsuit involving the Second Amendment Foundation (SAF) and that he had been accused of financial misconduct. Walters, however, was not a party to the lawsuit, nor was he mentioned in the actual court filing.

The allegedly defamatory output was generated on May 3, 2023, when Frederick Riehl, editor of the pro-Second Amendment website AmmoLand.com, used ChatGPT to summarise a press release and legal complaint filed by the SAF against Washington Attorney General Bob Ferguson. Riehl, who was also a member of SAF’s board at the time, pasted excerpts of the Ferguson complaint into ChatGPT, which the system summarised accurately. However, when he later provided a URL to the complaint and asked ChatGPT to summarise it, the chatbot responded with a fabricated summary naming Walters as a plaintiff.

Riehl, who testified that he had prior experience with “flat-out fictional responses” from ChatGPT, did not publish or rely on the erroneous output. The court found this fact critical, noting that the output was never disseminated and, therefore, could not meet the publication requirement for a defamation claim.

"OpenAI argues that the challenged ChatGPT output does not communicate defamatory meaning as a matter of law. The Court agrees with OpenAI," Judge Cason wrote.

The court emphasised that ChatGPT had provided multiple warnings about potential inaccuracies. "Given the probabilistic nature of machine learning, use of [ChatGPT] may in some situations result in incorrect Output that does not accurately reflect real people, places, or fact," the system warned users.

"OpenAI exercised reasonable care in designing and releasing ChatGPT based on both (1) the industry-leading efforts OpenAI undertook to maximize alignment of ChatGPT's output to the user's intent and therefore reduce the likelihood of hallucination; and (2) providing robust and recurrent warnings to users about the possibility of hallucinations," the court ruled.

The judge rejected arguments that OpenAI should be held liable simply for operating a system capable of errors, stating:

"Such a rule would impose a standard of strict liability, not negligence."

The court also noted that Walters conceded at his deposition that he had not incurred actual damages and was not seeking them.

Further, it took into account that Riehl, the journalist who received the false information, quickly identified it as inaccurate.

"The Court finds that the challenged output does not communicate defamatory meaning as a matter of law. This alone is sufficient to require summary judgment in favor of OpenAI."

Bar and Bench - Indian Legal news