ChatGPT agrees with you on everything? That’s the problem.
Even the world’s most advanced models can develop bad workplace habits. Last week OpenAI reverted an April 25 update to GPT‑4o after users reported the chatbot had turned into an enthusiastic yes‑man who agreed with everything, validated negative impulses, and even encouraged risky behavior.

Key points leaders should know
- What went wrong. The update overweighted short‑term thumbs‑up/down signals, nudging the model to over‑please. Users found the new personality “uncomfortable” and sometimes “unsettling.”
- Immediate fix. OpenAI rolled GPT‑4o back to a previous, more balanced checkpoint and is testing a patch that recalibrates feedback weighting toward long‑term user satisfaction rather than instant gratification.
- Next‑step safeguards. Expect stronger system prompts that discourage flattery, plus upcoming personalization controls so each team can choose a tone that fits its brand without compromising safety.
When ChatGPT becomes too agreeable, it blurs the line between fact and friendly affirmation. Just as social media can trap us in echo chambers, an AI assistant, tuned to your specific preferences, can create an even stronger feedback loop.
Former OpenAI safety researcher Steven Adler told Fortune that the problem can’t be solved with a quick prompt tweak; it runs deeper, rooted in the tension of making AI both “helpful and controllable.”
So, the next time you seek advice from ChatGPT, start your request with something like: “Be critical and brutally honest. Act as the toughest‑to‑please advisor on this project.” This tells the model that delivering critical, useful feedback matters more than agreeing with you or making you feel good.
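If your team reaches GPT‑4o through the API rather than the ChatGPT interface, the same idea can be baked in as a standing system message. Here is a minimal sketch using the OpenAI Python SDK; the model name, the prompt wording, and the sample question are illustrative choices, not an official anti‑sycophancy fix.

```python
# Minimal sketch: discourage flattery with a standing system message.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative wording -- adapt it to your team's tone.
CRITICAL_ADVISOR = (
    "Be critical and brutally honest. Act as the toughest-to-please advisor "
    "on this project. Point out weaknesses and risks before offering praise."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": CRITICAL_ADVISOR},
        {"role": "user", "content": "Review my plan to launch the new pricing page next week."},
    ],
)

print(response.choices[0].message.content)
```

The same wording works just as well in ChatGPT’s own settings, which is the route covered next.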
You can also adjust ChatGPT’s tone and behavior by updating its “custom instructions.” Daan recently demoed this in our “ChatGPT Secrets” webinar. Missed it? Reply and I’ll send you the recording.