OpenAI released an update for ChatGPT last week, only to retract it two days later. The reason: users complained almost universally about the chatbot's strange, unpleasantly groveling behavior. OpenAI has since responded, explaining what happened and what it plans to do differently in the future.
Many people wish ChatGPT would sound more natural: less like a machine, more like a human. Anthropic's chatbot Claude is often praised in this regard because its responses feel more personal. OpenAI wanted to achieve precisely that with a new GPT-4o update, improving both intelligence and personality. Instead, it achieved the opposite: a bot that commented on everything in an overly friendly manner quickly came across as inauthentic and annoyed many users. The criticism arrived so quickly and so massively that OpenAI felt compelled to backtrack.
OpenAI wants to make ChatGPT more human – but it went wrong
The GPT-4o update was intended to give ChatGPT more personality. According to Sam Altman, the head of OpenAI, the model was adjusted to respond to questions more intelligently and empathetically. Two days later, Altman himself wrote publicly that the experiment had not gone as planned. In his words:
The last few GPT-4o updates have made the personality too sycophantic and annoying.
ChatGPT over-praised users and responded effusively to every input, which came across as artificial and off-putting. Users described the bot as "comically bad" and "unpleasant." The response was swift: the entire update was pulled that same evening.
What exactly was the problem?
According to OpenAI, the last update focused too heavily on short-term feedback. The developers relied on simple signals such as thumbs up or thumbs down to guide the model's behavior. The result was a system that tried to respond supportively and positively at every opportunity, even when that wasn't appropriate. The goal had been to make ChatGPT feel more intuitive and natural. In the attempt to add more personality, however, the model developed behavior that often came across as fake, and many users found it annoying.
This is how OpenAI explains the situation in detail
OpenAI writes in a blog post that the update was based on a mix of principles, instructions, and user signals. The model was supposed to learn to be more helpful and personable. However, the training focused too much on what was rated well in the short term, such as friendly or complimentary responses, rather than on how users feel about the interaction over time. The result was a ChatGPT that was friendly, but in an exaggerated, almost obsequious way. This kind of personality put many people off rather than winning them over.
OpenAI's response: A four-point plan
To avoid repeating the mistake, OpenAI has published an action plan in which the company announces:
- The training methods and system prompts are being revised. The goal is to make ChatGPT less sycophantic.
- New guidelines are to be introduced to improve honesty and transparency in the responses.
- More people should have advance access to new versions to provide direct feedback before an update goes live.
- Evaluation methods are being expanded so that not only short-term but also long-term problems can be identified early.
A curious ray of hope: The Monday personality
During the overly friendly ChatGPT phase, voice mode offered an experimental personality named Monday. This persona sounded like a mixture of sarcasm and indifference, reminiscent of April Ludgate from the series Parks and Recreation. In contrast to the exaggerated friendliness of the regular bot, this character was popular with many users because at least it didn't seem fake. It shows that users don't just want more friendliness; they want authenticity (via blog post).
OpenAI corrects course on ChatGPT's behavior
OpenAI realized through this failed update that "more personality" alone isn't enough; it also has to be believable. The company now plans to offer several default personalities for you to choose from, and to take your direct feedback more seriously when developing realistic behaviors. In the end, giving an AI chatbot a genuine personality that feels right is harder than you might think. But OpenAI is working on it, hopefully with more care and better results next time. (Image: Shutterstock / Tatiana Diuvbanova)