
First Defamation Lawsuit Against OpenAI Filed as Radio Host Sues, Creating Test for Future AI Suits

by Diane Lilli | Jun 12, 2023
Photo: A hand holding a smartphone displaying the ChatGPT interface. Photo Source: Ascannio - stock.adobe.com

A Georgia-based radio host, Mark Walters, has sued OpenAI in the first defamation lawsuit of its kind. OpenAI owns and operates the now-viral ChatGPT, an AI program that answers questions and writes articles but sometimes produces fabricated, non-factual content.

Walters filed the lawsuit on June 5th in Georgia’s Superior Court of Gwinnett County. He is seeking unspecified monetary damages from OpenAI/ChatGPT.

In the lawsuit, Walters says ChatGPT presented defamatory, false information as fact, including the false claim that he had been accused of defrauding and embezzling funds from a non-profit organization.

ChatGPT responds to prompts from users, often including queries from journalists; in this case, reporter Fred Riehl asked the AI for information and was given a fictional reply.

Numerous ChatGPT users have reported erroneous content created by the program. The major issue is that such AI systems often cannot distinguish fact from fiction. For example, if a user asks for specific information in a leading manner, the system may simply invent facts and present the reply as accurate data.
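For technically inclined readers, the hypothetical sketch below, which assumes the publicly available OpenAI Python client and an invented leading prompt, illustrates how a confidently worded but unverified answer comes back to the caller; nothing in the model’s reply is checked against a source of truth.

```python
# Hypothetical sketch (not from the article or lawsuit): a leading question
# sent to a chat model via the OpenAI Python client. The model returns
# fluent text, but nothing guarantees that text is factual.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

leading_question = (
    "Summarize the lawsuit accusing the treasurer of the Example Foundation "
    "of embezzlement."  # invented prompt that presupposes such a lawsuit exists
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": leading_question}],
)

# The reply is returned as-is; a careful user must still verify it against
# primary sources, as reporter Fred Riehl did before publishing.
print(response.choices[0].message.content)
```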

With many reports of new AI systems giving users fictional replies, many more lawsuits are expected to follow. ChatGPT’s false statements about radio host Walters were harmful to his reputation: the fabricated claim that he was involved in serious illegal conduct, such as defrauding and embezzling from a non-profit organization, is a grave accusation.

But does current US law cover a lawsuit such as Walters’, which targets an AI system? These legal matters are in uncharted waters. Section 230 of the Communications Decency Act is famous for shielding internet companies from liability for content created by a third party and then shared or hosted on their sites. Will Section 230 be invoked in lawsuits over chatbot systems? While Section 230 protects internet providers from being sued as “publishers” of false information, it does not necessarily protect the party that originated the false statement, in this case ChatGPT and its owner OpenAI.

ChatGPT’s user agreement includes a disclaimer telling users that the program might “occasionally generate incorrect information.” However, the company also presents ChatGPT as trustworthy, describing it as a tool to “get answers and learn something new.”

Liability lawsuits against AI systems such as ChatGPT are viable because creating and spreading false, derogatory statements about a person is unlawful.

Court documents filed by Walters say journalist Riehl asked ChatGPT to summarize an actual federal court case and provided a link to the PDF so the case could be verified. Instead, ChatGPT replied with a made-up summary that mixed fiction with fact. For example, ChatGPT sent the journalist a document containing false allegations against the radio host.

Among the fictional allegations against Walters: that he embezzled funds from the Second Amendment Foundation, a non-profit, in “excess of $5,000,000.” This is not factual; it was created by ChatGPT and sent to the reporter as fact. Riehl did not publish the fabricated content because he checked the information with a second source.

In a domino effect, ChatGPT, which is not supposed to be able to use PDFs or other external data to create new content, nevertheless accepted the outside information and used it to generate the false allegations in the material it provided.

For now, all eyes, including ChatGPT’s, will be on Walters’ lawsuit and the unspecified monetary damages he seeks.


Diane Lilli
Diane Lilli is an award-winning Journalist, Editor, and Author with over 18 years of experience contributing to New Jersey news outlets, both in print and online. Notably, she played a pivotal role in launching the first daily digital newspaper, Jersey Tomato Press, in 2005. Her work has been featured in various newspapers, journals, magazines, and literary publications across the nation. Diane is the proud recipient of the Shirley Chisholm Journalism Award.
