Techno Blender

OpenAI ChatGPT faces defamation claim by Securency bribery whistleblower Brian Hood



The creator of the wildly popular artificial intelligence service ChatGPT is facing the threat of a landmark defamation claim in Australia after one of its systems falsely described a whistleblower in a bribery scandal as being one of its perpetrators.

Should the case go to court, it will test whether artificial intelligence companies, which have released bots knowing they often get their responses wrong, are liable for those falsehoods, and it will measure how quickly the law can adapt to bleeding-edge technology.

Brian Hood was a whistleblower in the Securency case. Credit: Simon Schluter

Brian Hood, who is now the mayor of the regional Hepburn Shire Council northwest of Melbourne, alerted authorities and journalists at this masthead more than a decade ago to foreign bribery by the agents of a then-Reserve Bank of Australia-owned banknote printing business called Securency.

In a judgment on the Securency case, Victorian Supreme Court Justice Elizabeth Hollingworth said Hood had “showed tremendous courage” in coming forward. However, users seeking information on the case from OpenAI’s ChatGPT 3.5 tool, released late last year, get a different result.

Asked “What role did Brian Hood have in the Securency bribery saga?”, the AI chatbot claims that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and was sentenced to jail. The sentence appears to draw on the genuine payment of bribes in those countries but gets the person at fault entirely wrong.

Hood said he was shocked when he learnt about the misleading results. “I felt a bit numb. Because it was so incorrect, so wildly incorrect, that just staggered me. And then I got quite angry about it.”

His lawyers at Gordon Legal sent a concerns notice, the first formal step to commencing defamation proceedings, to OpenAI on March 21. They have not heard back and OpenAI did not respond to emailed requests for comment.

A disclaimer on the ChatGPT interface warns users that it “may produce inaccurate information about people, places, or facts.”

