Soumya Prakash Pradhan

Tech experts have recently raised concerns about AI chatbots "hallucinating," that is, producing fabricated answers that sound convincing.

This has raised questions about the reliability of these systems.

Industry leaders like Prabhakar Raghavan from Google and Sam Altman from OpenAI have shared their worries about chatbot reliability.

AI Chatbots and Hallucinations

In an interview with a German publication, Prabhakar Raghavan, Google's Senior Vice President and Head of Google Search, expressed concern about AI chatbots "hallucinating."

He warned that although these chatbots can provide convincing answers, they are also capable of generating entirely false information.

Raghavan emphasised the need for the AI community to address this challenge and minimise inaccuracies.

The Defamation Lawsuit

According to a report by Business Insider, a lawsuit filed in a Georgia court has highlighted the potential legal ramifications of content generated by OpenAI's ChatGPT.

Radio host Mark Walters sued OpenAI after ChatGPT allegedly generated a fake legal document accusing him of fraud and embezzlement.

The lawsuit claims that ChatGPT falsely implicated Walters in a case that journalist Fred Riehl was researching.

Walters alleges that Riehl gave ChatGPT a link to a Washington case and asked it to summarise the case.

However, ChatGPT incorrectly dragged Walters into the case, stating that he was the treasurer and CFO of the Second Amendment Foundation (SAF) and had been accused of misappropriating funds and manipulating financial records for personal expenses.

The Business Insider report indicates that ChatGPT went further, fabricating an entire complaint that bore no resemblance to the actual case, complete with an incorrect case number, as confirmed by the Georgia court.

The Implications

The legal consequences of relying on AI chatbot-generated content can be substantial.

A BBC report revealed that a New York lawyer faced a court hearing after it emerged that his colleague had used ChatGPT for legal research.

During the proceedings, it was discovered that the lawyer and his firm had cited numerous entirely fictional legal cases in an ongoing matter.

The judge described the incident as an "unprecedented circumstance," highlighting the dangers of relying solely on AI-generated information in a legal context.

The case underscores the importance of verifying and corroborating AI-generated content before using it in legal proceedings.
