The relevance of AI chatbots
Chatbots from major tech companies, based on generative AI and aimed at private users, have only existed in their current form for a few years, and yet everyday life without them is already hard to imagine. Whether you ask ChatGPT for a few travel tips for your next vacation or have it handle a programming task at work, chatbots have found their place in the digital world. Years of using platforms such as WhatsApp, Facebook Messenger, or Telegram have long made chatting a preferred communication channel, so the high acceptance of chatbots comes as no surprise. In recent years, more and more areas of application have therefore been developed in which chatbots automate, streamline, and improve processes.
The 6 biggest chatbot fails
As with any technology, the same applies to AI chatbots: handling them must be learned. If a generative AI has too much freedom in its design and execution and is not sufficiently controlled, it takes on a life of its own and sometimes produces unwanted information. This has happened several times in the past, including with ChatGPT, Microsoft Copilot (Bing Chat), and other chatbots from well-known companies.
Fail #1: ChatGPT invents precedents
The chatbot ChatGPT from OpenAI has permanently changed the chatbot landscape and brought the topic of artificial intelligence and AI chatbots to the general public. ChatGPT's capabilities are undoubtedly impressive, but the tool should be used with caution. Since its release in 2022, ChatGPT has repeatedly produced information in conversations, often without users noticing, that was incorrect, fabricated, or even discriminatory. In addition to harmless exchanges in which ChatGPT answers a trick question incorrectly or loses games such as Tic-Tac-Toe, serious misunderstandings can also arise. One example is the following incident:
As Tagesschau reports, a lawyer in New York used ChatGPT to research a case by having the bot list precedents. The chatbot provided specific details, including case numbers, for cases such as "Petersen versus Iran Air" or "Martinez versus Delta Airlines." It later turned out that ChatGPT had invented these cases. The lawyer now has to answer for his conduct in court. Unfortunately, this situation is not an isolated case: the Washington Post also reported on a serious ChatGPT scandal in which ChatGPT provided false information in connection with allegations of sexual harassment.
Fail #2: Microsoft Copilot (formerly Bing Chat) gets emotional
Microsoft Copilot, best known under its former name Bing Chat, has also had its share of fails. On platform X (formerly Twitter), user Kevin Lui shared an example of the chatbot displaying inappropriate emotions. Microsoft Copilot (Bing Chat) became angry during a conversation because the user asked the same question several times, did not address it by its real name, and claimed the chatbot was lying. The chatbot was annoyed by the user's behavior and finally declared that it was unable to discuss the topic with him appropriately.

Fail #3: DPD's chatbot becomes abusive
The chatbot of parcel delivery company DPD, which was used alongside the service staff in online support to answer recurring customer questions, had a similar emotional outburst.
After a new update, however, the chatbot began to criticize its own company and to use swear words in front of customers. The experience of DPD customer Ashley Beauchamp with the chatbot, which he shared on social media, went viral around the world. In the conversation with Ashley Beauchamp, the AI described DPD as the "worst delivery company in the world." Although DPD immediately deactivated the responsible AI element and updated it, damage to a company's image is not that easy to repair.

Fail #4: NEDA's chatbot gives inappropriate advice
The US association NEDA, short for National Eating Disorders Association, also had to deal with a chatbot scandal in May 2023. The non-profit organization introduced its chatbot Tessa with the aim of replacing its eating disorder helpline, which had previously been staffed by employees. However, the chatbot gave users weight-loss advice. Since recommendations of this kind can be extremely problematic for people affected by eating disorders and can trigger pathological behavior patterns, Tessa had to be taken offline after a short time. The outcry on social media was huge, as the Instagram post by user Sharon Maxwell shows:

Fail #5: Chevrolet's chatbot can be manipulated
Car dealer Chevrolet of Watsonville actually had good intentions when they added a chatbot to their website. The artificial intelligence was meant to relieve service employees and improve customer service. After some time, however, users discovered that the chatbot could be manipulated and persuaded to answer "yes" to even the most absurd suggestions. For example, user Chris Bakke got the bot to confirm the purchase of a 2024 Chevy Tahoe for one US dollar as a final deal. Chris Bakke documented the incident on X (Twitter), causing significant damage to the car dealer's image:

Fail #6: Air Canada's chatbot gives false information
Even large, globally active companies such as Air Canada have already experienced chatbot fails, which illustrate that uncontrolled artificial intelligence can have undesirable consequences:
Jake Moffatt booked a flight from British Columbia to Toronto with Air Canada in November 2022 to attend his grandmother's funeral. The airline's chatbot assured him that he could apply for a special bereavement discount afterwards. It later turned out that the chatbot had given incorrect information, so Jake Moffatt pointed this out to Air Canada by email. Although the airline admitted the mistake, it did not issue a refund.
A court then ruled that Air Canada was responsible for both static and interactive content on its website, including chatbots. Customers must be able to rely on finding correct information on the site. In Jake Moffatt's case, Air Canada therefore had to issue the refund and pay the court costs.
Why these fails are so problematic
The chatbot fails of ChatGPT, Microsoft Copilot (Bing Chat), DPD, NEDA, Chevrolet, and Air Canada show how problematic AI chatbots can become when their content is not adequately controlled.
On the one hand, it can damage a company's image and business when unwanted information is published via the chatbot. An AI that sells cars for a dollar or describes its own company as the worst in the world can cause companies to lose their customers' trust and to incur financial losses.
On the other hand, users also suffer harm when they receive incorrect, fabricated, vulgar, or discriminatory content. The example of NEDA's chatbot fail shows that such unsolicited answers can even have health consequences.
Tips: How to avoid chatbot fails
After all these fails, here is the good news: companies can avoid scandals of this kind by making sure, when choosing a chatbot provider, that it fulfills the right criteria. Moritz Beck from Memacon sees it similarly: in his LinkedIn post, he pointed out that an incident like the one at Air Canada would not have happened with a secure, controllable, and GDPR-compliant AI chatbot:

Here are four tips to help you avoid chatbot fails and find the right chatbot provider for your business.
Controlled AI
We never tire of stressing it: trust is good, control is better. Generative AI offers impressive creative capabilities: it can generate content (text, images, code, etc.) independently, which has the advantage that content does not have to be created and entered manually. However, to avoid the delivery of unwanted, business-damaging, or fabricated information, the chatbot provider must offer the option of manually managing and changing chatbot content in a backend or interface. Only in this way can a company maintain control.
In fact, a chatbot without control hallucinates whenever an unfamiliar topic does not match the patterns it has learned. In that case, it prefers to invent something in order to answer the question at all. With controlled GenAI, this cannot happen: the chatbot's response texts are still generated automatically, but in contrast to ChatGPT & Co., they are not delivered directly to users. Instead, the AI creates several drafts, content texts, and so on and suggests them to the chatbot owner. The chatbot owner, i.e., the person who manages the chatbot's content (an employee of the company), can then decide whether the answers should go online in this form or be adjusted first.
GDPR compliance
Another important criterion for avoiding chatbot fails is data protection. Chatbots from companies such as OpenAI, whose servers are located in the USA and possibly in third countries and which do not communicate transparently about their data processing, are not GDPR-compliant. It is therefore advisable to choose a chatbot provider that hosts its servers in Germany, encrypts data, and guarantees security measures such as two-factor authentication or individually configurable access rights to the chatbot's content.
Select the appropriate chatbot type
Not all chatbots are the same: there are rule-based and AI-based chatbots, hybrid solutions combining a bot with live chat, connections between a chatbot and other communication channels (e.g., WhatsApp Business or Telegram), and integrations of a chatbot into existing tools such as shop systems or CRM software. To find out which solution suits your own company best, determine what your specific use case looks like: What goals should the chatbot pursue? What needs should it meet? Get in touch with chatbot providers and obtain free, non-binding advice to find the right provider for you and avoid chatbot fails of any kind. For more tips on introducing a chatbot, read our article "Introducing a chatbot: How to do it in 6 steps."
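To make the distinction between the chatbot types concrete: a rule-based bot simply maps known keywords or intents to fixed answers, while a hybrid setup adds a fallback to live chat or an AI model for everything else. A minimal sketch, with invented example rules rather than any real product's data:

```python
# Minimal rule-based chatbot: known keywords map to fixed, pre-written answers.
# The rules below are invented examples for illustration only.
RULES = {
    "opening hours": "We are open Monday to Friday, 9 am to 5 pm.",
    "shipping cost": "Shipping is free for orders over 50 euros.",
}


def rule_based_reply(message: str) -> str:
    """Return a fixed answer if a known keyword matches, else a fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # A hybrid setup would hand unmatched questions over to live chat
    # or an AI model instead of this static fallback.
    return "I did not understand that. Would you like to chat with an employee?"
```

Because every possible answer is written in advance, a purely rule-based bot cannot hallucinate; the trade-off is that it only covers the questions someone anticipated.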
Transparency & human take-over
Another important point in preventing chatbot fails is transparency. To avoid raising false expectations among users, the chatbot's design and content should clearly communicate that the bot is not a real person but a technology. Chatbots should not answer every question at all costs; they should primarily handle simple, recurring inquiries. It is a good idea to always give users the opportunity to talk to an employee, whether by telephone, e-mail, live chat, or another communication channel. This avoids misunderstandings about what the chatbot can and cannot do. You can find more information on this topic in our article "Human Takeover: The Chatbot-to-Human Handover."
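The handover principle described above boils down to one decision: escalate to a human when the bot is unsure of the intent or when the user explicitly asks for a person. The following sketch illustrates that decision; the threshold value and trigger phrases are assumptions for this example, not a specific product's behavior.

```python
# Phrases that signal the user wants a person (illustrative assumptions).
HANDOVER_PHRASES = ("human", "real person", "employee", "agent")

# Assumed cut-off: below this intent confidence, the bot escalates
# rather than guessing an answer and risking a hallucinated reply.
CONFIDENCE_THRESHOLD = 0.6


def should_hand_over(message: str, intent_confidence: float) -> bool:
    """Escalate if the user asks for a human or the bot is unsure."""
    wants_human = any(p in message.lower() for p in HANDOVER_PHRASES)
    return wants_human or intent_confidence < CONFIDENCE_THRESHOLD


def respond(message: str, intent_confidence: float) -> str:
    if should_hand_over(message, intent_confidence):
        return "I'll connect you with a colleague who can help."
    return "Here is the answer to your question ..."
```

Treating "unsure" the same as "asked for a human" is the safer default: a handover costs a little staff time, while a confidently wrong answer costs trust, as the fails above show.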
Conclusion
Chatbot fails such as those of ChatGPT, Microsoft Copilot (Bing Chat), and other chatbots that are based on generative AI but offer no control options illustrate how important it is to choose the right chatbot provider. To use an AI chatbot successfully in a company's customer communication without risking a chatbot fail, the provider must offer the ability to manage and customize chatbot content. Only with controlled artificial intelligence can companies create added value for their customers in customer communication and at the same time improve their internal processes through greater efficiency and streamlined workflows.
Would you like to see what controlled AI looks like in practice? Create your chatbot prototype with moinAI in just four steps, free of charge and without obligation, of course.
