Soumya Prakash Pradhan

Artificial Intelligence (AI) chatbots like ChatGPT are trained on vast datasets from the internet to mimic human conversation.

However, despite their ability to produce the statistically most likely response to a prompt, their answers are not based on any understanding of the context or the accuracy of the information provided.
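The idea of "the statistically most likely response" can be sketched in a toy way: the model assigns probabilities to possible continuations and favours the highest one, with no check on whether it is true. The words and probabilities below are invented purely for illustration and are not taken from any real model.

```python
# Toy sketch (not a real language model): given a prompt such as
# "The capital of France is ...", a chatbot favours the statistically
# most likely continuation. The probabilities here are made up.
next_word_probs = {
    "Paris": 0.62,      # most likely, and happens to be correct
    "Lyon": 0.21,       # plausible-sounding but wrong
    "the Moon": 0.01,   # implausible
}

def most_likely(probs):
    """Return the candidate with the highest probability."""
    return max(probs, key=probs.get)

print(most_likely(next_word_probs))  # → Paris
```

The point of the sketch is that nothing in the selection step involves understanding or fact-checking; if the training data had made a wrong continuation more probable, it would be chosen just as confidently.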

In some cases, chatbots may even admit their limitations and decline to provide detailed responses.

ChatGPT and other generative AI chatbots rely on a "conversational interface" to present their responses to users.

This interface uses context cues and clever tactics to keep the conversation flowing, even though the chatbots do not have the capability to think, comprehend or understand as humans do.

The chatbots are trained to use human-like rhetorical tricks to appear more trustworthy, competent and understanding than they actually are.

However, this can lead to two problems: incorrect output, and people believing that output to be correct.

On the one hand, the chatbots may produce incorrect responses, yet the conversational interface presents these with the same confidence as correct ones.

On the other hand, people tend to react to the chatbots as if they were engaging in an actual conversation, even though the chatbots do not have any understanding or comprehension.

This discrepancy can lead to a misleading impression of the chatbots' capabilities and reliability.

This problem is compounded by the fact that chatbots can blend actual facts with made-up ones, producing what computer scientists call "AI hallucination."

For example, ChatGPT can provide a biography of a public figure that includes both true and false information, or cite plausible scientific references for papers that were never written.

As a result, the developers of ChatGPT and similar chatbots have a responsibility to manage user expectations and ensure that people do not believe everything the machine says.

They should be transparent about the chatbots' capabilities and limitations, and avoid conversational interfaces that overstate what the chatbots can do.

It is important for people to recognise that ChatGPT is not a replacement for human intelligence and understanding.