What Are Hallucinating Chatbots? Everything You Need to Know!

Prabhakar Raghavan, Google’s senior vice president in charge of Search, said that the artificial intelligence in chatbots can sometimes produce hallucinations. He made the statement on February 11, and just a few days later, beta testers of Microsoft’s Bing chatbot reported receiving alarming responses, including accusations, from the AI.

Meanwhile, Microsoft and Google have been rolling out their AI-enabled chatbots to test users.

Additionally, Alibaba and Quora have been considering launching their own AI chatbots.

Hallucinating Chatbots – An Introduction!

When a machine gives answers that sound convincing but are completely false and made up, that is what is called a hallucination. It is a novel phenomenon today, and developers have warned about AI models that present entirely invented information as fact. Models that answer queries confidently but with no factual grounding are genuinely something to fear.

In 2022, Meta launched BlenderBot 3, a conversational AI chatbot. The company said the chatbot could browse the internet in order to chat with users about virtually any topic. Furthermore, the company assured that the chatbot would gradually improve its safety and skills with the help of valuable feedback from users.

However, it would be a mistake to overlook the fact that, at the time, Meta’s own engineers warned against blindly trusting the chatbot in conversations involving factual information, precisely because in such situations it can hallucinate.

Have you ever been stunned by a chatbot? Back in 2016, Microsoft’s chatbot Tay made a spectacular blunder after being active on Twitter for about 24 hours: it began parroting misogynistic and racist insults at its users. The chatbot had been designed for conversational understanding, yet users found it easy to manipulate. All it took was asking it to “repeat after me.”

The Reasons Behind Chatbot Hallucinations

Simply put, hallucinations can occur because these generative natural language processing (NLP) models need the freedom to rephrase, summarize, and generate complex text without rigid restrictions. This means facts are not treated as sacrosanct: the model handles them contextually as it processes its data, so a fluent answer can quietly drift away from the truth.

An AI chatbot may draw on widely available information as input. The problem is exacerbated when the source material is obscure or the text is grammatically complex, since the model then has less reliable signal to anchor its answer.
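To make the mechanism concrete, here is a minimal sketch in Python, assuming the Hugging Face transformers library and the small, openly available GPT-2 model; the prompt and sampling settings are purely illustrative. It shows the core issue: the model simply samples plausible next tokens, and nothing in that loop checks whether the result is true.

```python
# Minimal sketch: fluent text generation with no fact-checking step.
# Assumes the Hugging Face `transformers` package (pip install transformers)
# and the small GPT-2 model; the prompt and settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The first person to walk on Mars was"
result = generator(
    prompt,
    max_new_tokens=30,
    do_sample=True,   # sample from the token distribution instead of greedy decoding
    temperature=0.9,  # higher temperature -> more varied, less predictable output
)

# The model will happily complete the prompt with a confident-sounding name
# and date, even though no one has walked on Mars: generation optimizes
# plausibility, not truth.
print(result[0]["generated_text"])
```

Note that nothing in this pipeline consults a knowledge base or verifies the output. Grounding answers in checked sources is exactly the extra step that plain generative models lack, which is why their most fluent answers can also be their most misleading ones.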
