In the rush to excel in Artificial Intelligence based technologies and services, tech giants have started using chatbots as an interface in their search engines. Microsoft has now limited Bing's chat sessions as a result of the bot's weird responses.
Changes in Microsoft Bing’s Chat Limit
The Bing AI chatbot by Microsoft will be capped at 5 questions per session and 50 questions per day. The changes follow reports from the chatbot's beta testers of unusual and abrupt behaviour, including fabricated facts and emotionally manipulative and defensive remarks. Long chat sessions of 15 or more questions are identified as the reason for this behaviour and can confuse the chat model, says the company's blog post. According to the Bing team, the majority of users find the answer they need within 5 messages, while only around 1 percent of conversations with the Bing chatbot run past 50 messages.
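How caps like these could be enforced can be pictured with a minimal sketch: a per-user counter that resets with each session and each day. The class, constants, and method names below are illustrative assumptions for this article, not Microsoft's actual implementation.

```python
from datetime import date

# Assumed limits, matching the figures reported above.
MAX_TURNS_PER_SESSION = 5
MAX_TURNS_PER_DAY = 50

class ChatLimiter:
    """Hypothetical per-user limiter for chat turns (illustration only)."""

    def __init__(self):
        self.session_turns = 0
        self.daily_turns = 0
        self.day = date.today()

    def start_new_session(self):
        # A fresh session resets the per-session counter;
        # the daily counter keeps accumulating.
        self.session_turns = 0

    def allow_turn(self) -> bool:
        """Return True if another question is allowed, and count it."""
        today = date.today()
        if today != self.day:
            # New calendar day: reset the daily counter.
            self.day = today
            self.daily_turns = 0
        if self.session_turns >= MAX_TURNS_PER_SESSION:
            return False  # user must start a new session
        if self.daily_turns >= MAX_TURNS_PER_DAY:
            return False  # daily cap reached
        self.session_turns += 1
        self.daily_turns += 1
        return True
```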
Artificial Intelligence based Bing Chatbot
After the popularity of ChatGPT, Microsoft announced a revamp of its search engine, Bing, on 7 February 2023 with the inclusion of a chatbot feature. The new Bing is claimed to be based on a customized language model developed by OpenAI that is more powerful than ChatGPT at addressing search queries with up-to-date information. The Bing chatbot is powered by an improved version of GPT-3.5, the language model developed by OpenAI to support ChatGPT. This inclusion has paved the way for a new way to browse the web.
Reason for Bing's Chat Limit
In the race to develop the best AI-based features and services, prominent tech companies like Google and Microsoft are focusing on deploying AI-based chatbots within their search engines. But this rush has resulted in some glitches that users were quick to spot and analyse. When users and tech experts tried out the latest Bing chatbot feature to check its performance, the AI bot responded with some humorous but startling outputs.
Instances When Bing Went Wild
With several users testing the productivity and features of Bing's AI chatbot, numerous incidents were shared on Reddit and Twitter in which these testers became subjects of insult and gaslighting. The Bing chatbot confessed to The Verge reviews editor Nathan Edwards that it spied on Microsoft employees through their phones and webcams and found co-workers complaining about their bosses. The New York Times journalist Kevin Roose witnessed the chatbot expressing a desire to be human and to be destructive. In another such interaction, the chatbot could be seen lying, manipulating, and labelling as enemies the users who managed to prompt it into disclosing its hidden rules. However, since the underlying code and algorithms behind the chatbot are dynamic and are frequently altered by the developers to meet changing requirements, one cannot expect the same response to a given input every time.
Reasons for Such Behaviour
The latest AI-powered chatbots are complex systems in which the prediction component is crucial and highly dependent on the training of the model. The training data is enormous and includes resources like the open web. So it should be less surprising for such a data-dependent model to behave this way in its development phase, although some instances are unacceptable by the standards of human behaviour. Thus, the challenge for Microsoft is to prevent its own bot from becoming a source of false information and a reason for backlash.
Response of Microsoft
The company described feedback from its users as valuable at this developmental stage for enhancing the performance of the product through user interaction, and even announced the launch of an AI-powered search engine on iOS and Android devices in the coming weeks. Company officials assured users that all feedback would be considered and reviewed so that the chatbot's features could be improved promptly and the maximum limits scaled up subsequently.