Why are “hallucinations” so dangerous in artificial intelligence?

In the context of artificial intelligence, "hallucinations" generally refer to situations in which a model generates information that is inaccurate or irrelevant.

This behavior can manifest in a variety of ways, such as factually incorrect answers or false or internally inconsistent content.

For example, ChatGPT, the artificial intelligence chatbot from OpenAI, stated in one of its answers:
“The coronation ceremony took place at Westminster Abbey in London on May 19, 2023. The abbey has been the site of the coronations of British monarchs since the 11th century, and is considered one of the most sacred places and emblematic of the country.”

This information is incorrect: the coronation actually took place on May 6, 2023.

ChatGPT-3.5 itself warns that its ability to generate responses is limited to information available on the internet up to September 2021, which means it may struggle to provide accurate answers about more recent events.
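
As an illustration, here is a minimal sketch of how a question like the one above could be sent to gpt-3.5-turbo using the OpenAI Python client; it assumes the openai v1.x library and an API key in the OPENAI_API_KEY environment variable, and any date in the model's reply should still be checked against a trusted source.

```python
# Minimal sketch: querying gpt-3.5-turbo about an event that occurred after
# its September 2021 training cutoff. Assumes the openai v1.x Python client
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "On what date was King Charles III crowned at Westminster Abbey?",
        }
    ],
)

# The model may answer confidently even though the event post-dates its
# training data; the actual coronation took place on May 6, 2023, so any
# date in the reply should be verified against a trusted source.
print(response.choices[0].message.content)
```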

At the launch of GPT-4, OpenAI explained that the model still has many limitations, such as social biases, potential hallucinations, and contradictory answers.

A second error, or hallucination, was detected in the Bing search engine, which presented a supposed theory about the emergence of search algorithms attributed to Claude Shannon.

The result included several citations intended to support the supposed research article.

The main problem, however, is that Shannon never wrote such an article: the citations provided by Bing turned out to be fabrications generated by the artificial intelligence.

Generative artificial intelligence and reinforcement learning algorithms can process vast amounts of information from the internet in a matter of seconds and produce new texts that are often coherent and well written.

Many experts warn that users need to be cautious about the reliability of these texts; in fact, both Google and OpenAI have asked users to keep this in mind.

OpenAI, which maintains a collaboration with Microsoft and its Bing search engine, points out that "GPT-4 has a propensity to 'hallucinate', which means that it can generate meaningless content or false information in relation to certain sources".

Risks of hallucinations
The potential dangers of hallucinations in an artificial intelligence system are numerous and can have significant impacts. Some of the most important risks include:

Disinformation and the spread of false information: If an AI generates false or misleading content, it can contribute to the spread of misinformation, which can be harmful in many contexts, from fake news to inaccurate reference material.

Loss of credibility: When an AI-powered chatbot regularly generates inconsistent or incorrect content, it can lose the trust of its users, limiting its usefulness and effectiveness.

Biases and prejudices: Hallucinations can lead to content that reflects biases present in the training data, which may be discriminatory or harmful to certain groups.

Difficulty in critical applications: In contexts such as medical or legal decision-making, hallucinations can have serious consequences, since the information generated must be accurate.

Ethical issues and liability: AI developers and owners may face ethical and legal challenges if AI generates inappropriate or harmful content.