Opinion

Inventiveness of ChatGPT poses risk of defamation

15 June 2023

The sudden emergence of ChatGPT and artificial intelligence (AI) chatbots as a feature of everyday life has opened up a new frontier in digital communication and content creation. However, the capacity of the technology to create false information raises the threat that those who disseminate such falsehoods can be sued for defamation.

What is ChatGPT?

ChatGPT is a generative AI platform created by OpenAI. It was only launched in November 2022, but has created a huge shift in our understanding of the capabilities of generative AI.

It reached a million users within five days of launch, and an estimated 100 million within two months.

OpenAI concedes on its website that ChatGPT sometimes writes ‘plausible-sounding but incorrect or nonsensical answers’, and says it is working to improve the program.

Many people are using the chatbot’s responses extensively for content in publications, not just on social media but also in conventional media and book publishing. Students are using it for assignments and homework.

But answers from ChatGPT and similar AI chatbots cannot be relied on for factual accuracy. It would be wise to double-check the facts before using such material in any way. (See What is ChatGPT and why does it matter? Here’s what you need to know, ZDNet, 18 April 2023.)

ChatGPT mistakes mayor for criminal

The mayor of a Victorian country town recently discovered that ChatGPT was falsely stating he had been imprisoned for bribery.

But the true story was that the mayor had been the whistleblower in the bribery case. He was the good guy, not the bad guy.

The mayor’s lawyer sent a letter to ChatGPT’s California-based owner, OpenAI, demanding it fix the errors or face a defamation suit. (See Hepburn mayor may sue OpenAI for defamation over false ChatGPT claims, ABC News, 6 April 2023.)

ChatGPT attributes fictitious articles to famous publications

It is not the only case. An American legal academic learned that ChatGPT had accused him of sexual harassment during a trip to Alaska with Georgetown University law students, citing a 2018 article in The Washington Post as its source.

The academic had never been to Alaska with students, had never taught at Georgetown, and the Post article didn’t exist. (See ChatGPT reportedly made up sexual harassment allegations against a prominent lawyer, Insider, 7 April 2023.)

The Guardian newspaper in the UK was puzzled when a researcher contacted them asking about one of their online stories. It looked like a Guardian story, but the newspaper discovered the story didn’t exist.

It had been generated by ChatGPT in response to a question from the researcher. (See ChatGPT is making up fake Guardian articles. Here’s how we are responding, The Guardian, 6 April 2023.)

Can ChatGPT be sued for defamation?

So, can a person sue AI for defamation?

The creator or owner of an AI chatbot can be liable for publishing defamatory material unless it can prove the material is true. It is no defence for a user to say the AI cited a fake source, so users should check the facts before publishing.

The chatbot is not designed to defame or lie; its output is a product of its programming. The AI is trained on massive amounts of text to recognise statistical patterns, and it answers a question by generating the words most likely to fit. It can repeat falsehoods found in its training data, but it can also invent false information that plausibly fits the text it is generating.

The law is often a grey area when it comes to new technology, which develops faster than legislation can be updated.

Defamation laws are being revamped to cover technologies such as search engines and now AI. The law will inevitably catch up, but it may take years before new legislation is enacted, and longer still before cases are decided and a body of common law develops.

Repeating or sharing defamatory material generated by AI would amount to publication as understood in defamation law, leaving the person who disseminates it exposed to legal action.

Difficulties in pursuing defamation actions against ChatGPT

One of the main difficulties in pursuing a defamation action over an online publication is jurisdiction: ChatGPT’s owner is in the US, while the victim may be in Australia.

Claimants need to establish that they suffered serious harm as a result of the defamatory publication. This may be difficult where the chatbot’s answer to a user’s question reached only a small audience.

Indeed, an AI chatbot is likely to give a different answer when asked the same question by different people, so it may even be difficult to establish the size of the audience.

The problem is that once a chatbot generates something false and defamatory and it appears on the internet, other bots can pick it up and repeat it.

This is an edited version of an article first published by Stacks Law Firm.

The ALA thanks Mark Shumsky for this article.

Mark Shumsky has broad experience across a number of areas of law, including commercial and corporate, property and planning, media and defamation, associations and charities, family, employment, administrative and succession law. He has experience consulting barristers, solicitors, experts, government and tribunals in disputes related to trust, employment, commercial, media, migration, planning and community matters. In addition to being a solicitor, Mark is the editor and publisher of the Free Thought newspaper – a Ukrainian newspaper that has been published in Australia since 1949.

The views and opinions expressed in this article are the author's and do not necessarily represent the views and opinions of the Australian Lawyers Alliance (ALA).


Tags: defamation, artificial intelligence, ChatGPT, OpenAI, AI, Mark Shumsky