OpenAI faces defamation lawsuit as radio host alleges false information by ChatGPT

Radio host Mark Walters has filed a defamation lawsuit against OpenAI, the Microsoft-backed company behind ChatGPT. Walters alleges that false information generated by ChatGPT has harmed his reputation. According to reports by The Verge, ChatGPT falsely claimed that Walters had defrauded and embezzled funds from a non-profit organization.

The incident unfolded when journalist Fred Riehl asked ChatGPT for information about Mark Walters. In response, the AI chatbot provided fabricated details, stating that Walters was responsible for financial misappropriation within an organization he was associated with, among other allegations. Walters disputes the accuracy of the information and is seeking unspecified monetary damages from OpenAI.

This lawsuit marks the first legal action taken against OpenAI over defamatory content generated by ChatGPT. It raises questions about the accountability and reliability of AI-generated information and its potential impact on individuals' reputations.
In a separate incident, attorneys Steven A. Schwartz and Peter LoDuca reportedly faced potential sanctions after ChatGPT misled them into including fictitious legal research in a court filing. The attorneys unknowingly cited non-existent court cases that had been generated by ChatGPT, which Schwartz believed to be genuine.

These recent events have sparked concerns among legal professionals and prompted a US federal judge, Brantley Starr, to issue a strict directive against the use of AI-generated content in his court. Judge Starr now requires attorneys appearing in his court to affirm that no portion of their filing was drafted by generative artificial intelligence or, if it was, that it was thoroughly reviewed by a human.

In April, in the course of a research query, ChatGPT falsely included a respected US law professor on a list of legal scholars who had sexually harassed students.

Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was shocked to learn that ChatGPT had named him in a research project on legal scholars accused of sexual harassment.
Inputs from IANS

Latest Technology News
