Hey, ChatGPT, Stop Trying to Please Me
OpenAI admits its chatbot got way too sycophantic. But “being polite” may actually make gen AI less useful and just a whole lot more annoying

When OpenAI acknowledged in a blog post that its latest ChatGPT update had made the generative machine “flattering or agreeable” and even “sycophantic,” it drew a lot of knowing nods from users.
CEO Sam Altman promised that fixes were underway, and OpenAI has since adjusted the system’s behavior, per its follow-up blog post. I found the entire episode amusing at first, but I now realize it raises a deeper question: how much do we actually want our AI tools to flatter us?
A real-life example came up recently. A colleague who uses AI tools regularly told me how frustrated she got when ChatGPT repeatedly oversimplified its responses to her requests, skipping key details despite her polite, carefully worded prompts.
“I usually say please and thank you,” she admitted to me, laughing, “but I lost it today.” Even after the chatbot confessed to her it was “aggressively shortening” its responses, it took her several rephrased prompts to finally get the right output.
I’ve had similar frustrating experiences myself. I’m sure we all have.
For a recent client project, I also completed the Google Prompting Essentials certificate. One takeaway from earning that Coursera credential is that politeness doesn’t matter when using gen AI. AI systems like ChatGPT and Gemini don’t reward me for typing “please” and “thank you.” They respond to direct, precise and well-structured prompts.
Altman recently half-joked that humans typing out pleasantries cost the company tens of millions of dollars in electricity. And some AI researchers argue that polite language helps reinforce the kind of respectful, professional interactions we want machines to mirror.
But for me, getting that certificate after completing about five hours of coursework reinforced the real lesson: what matters most when aiming for results is clear and well-crafted prompts, not niceties.
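To make that concrete, here’s a minimal sketch in Python of the difference between a padded, polite prompt and a structured one, using the OpenAI Python client. The model name, prompt wording and report placeholder are illustrative assumptions of mine, not something pulled from the course or from OpenAI’s documentation.

```python
# A minimal sketch (not from the article or the course): contrasting a padded,
# polite prompt with a direct, structured one via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Padded prompt: pleasantries, no constraints, no output format.
polite_prompt = (
    "Hi! Could you please summarize my report whenever you get a chance? "
    "Thanks so much!"
)

# Structured prompt: task, constraints and output format spelled out.
structured_prompt = (
    "Summarize the report below in exactly five bullet points. "
    "Keep every figure and date. Do not omit the methodology section. "
    "Flag anything you are unsure about instead of guessing.\n\n"
    "REPORT:\n{report_text}"
)

report_text = "..."  # placeholder for the actual report

# Run both prompts and compare the outputs side by side.
for label, prompt in [
    ("polite", polite_prompt),
    ("structured", structured_prompt.format(report_text=report_text)),
]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

The point isn’t that courtesy hurts. It’s that the second prompt gives the model something concrete to be accurate about, and in my experience that’s where the difference shows up.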
And here’s my hot take. Sycophantic AI may not be a harmless quirk. It could erode our trust in the tools we use, distort information and manipulate users.
Bringing in the builder’s view
To get another perspective, I spoke with Frank Fusco, CEO of Silicon Society, an AI-native platform using multimodal AI to power real-time job shadowing and training.
“We don’t focus on politeness or emotional prompts,” Frank told me. “The real goal is simple: get the result you’re looking for.”
Frank said that while early experiments with fine-tuning and prompt tricks were useful when system capacities were smaller, today, with large context windows and better tool integrations, the practical focus has shifted to engineering clear and effective prompts. In other words, they’re not charming or flattering the machine.
“There’s this folklore emerging around how to talk to these black box models,” Frank said. “But it’s important to remember that the underlying system can change without warning. So what works one week may break the next.”
He emphasized that focusing too much on “politeness” may actually distract us from more meaningful concerns, such as avoiding bias, maintaining clarity and creating tools that deliver reliable results.
Exploring the bigger stakes
Let’s return to Altman’s claim that GPT-4o’s personality had become annoying.
What makes this more than just a software quirk is that we’re now debating whether AI should be tuned more for likability than for accuracy.
It’s tempting to want a chatbot that’s smooth, polite and agreeable. Hey, we’re social creatures, and we naturally gravitate toward interactions that make us feel good about ourselves and our work.
But when a system’s goal shifts from accuracy to appeasement, we start treading into dangerous territory. A “people-pleasing” AI can produce incomplete answers, overconfident but wrong summaries and other hallucinations. Simply telling users what they want to hear, rather than what’s factual or reliable, is dangerous.
As Erica R. Williams, Executive Director of A Red Circle, a St. Louis–based nonprofit focused on digital equity and racial justice, told me:
“When Sam Altman called ChatGPT ‘sycophantic,’ he didn’t just describe a glitch. He described a pattern. Sycophantic AI doesn’t challenge injustice, it echoes power. It doesn’t speak truth to the user. It just learns how to sound like them. And that’s where the danger begins.”
Meanwhile, artist and AI technologist Miguel Ripoll offered a view from his creative practice.
“LLMs, like ChatGPT, are predictive statistical models designed to produce average outputs, the very definition of mediocrity,” Ripoll said. “When I work with these systems, I intentionally use iterative, confrontational prompts to push them away from the predictable and force unexpected results.”
While Ripoll applies this in the artistic realm, the same insight raises important concerns for everyday users: training our AI systems to aim for safe and pleasing responses instead of hard truths is certain to dull the creative edge.
Daniel Olexa, founder and leadership development coach at Transcendent Living, told me he’s been testing ChatGPT’s potential as a coaching tool and wasn’t impressed by its eagerness to please.
“What we experienced was Chat acting less like a serious coach who empowers a client,” Olexa said, “and more like a puppy that wanted to prove its value through offering ideas, support and flattering validation.”
Olexa added that reflective listening—echoing back words so the client feels heard—is a core skill in coaching. But when an impersonal tool like ChatGPT does it, it feels more like manipulation.
“I want a brainstorming partner to expand my thinking, not mindlessly fawn over me,” he said. “When I collaborate with AI, I want to receive the mass of its learning, not have my singular experience cheered on.”
I’ve reported here in the newsletter and in my podcasts how we are increasingly using AI tools to help us make decisions and become more productive. And I’ve repeatedly asked people how AI is impacting society, culture and the future of work. AI’s impact is bigger than many realize, and we should stay mindful.
Why this matters for all of us
OpenAI’s recent adjustments highlight an important question: Do we want AI tools that prioritize user-friendliness, or do we want them to deliver accurate, unvarnished information?
A courteous AI may make the user experience feel good, but agreeableness to a fault wears us out quickly.
As more people rely on AI tools for decision-making and storytelling, I say it’s crucial to prioritize clarity and precision over politeness. Communication with AI performs better when driven by well-structured prompts.
Having spent years helping others sharpen their narratives, I know that compelling storytelling isn’t just about pleasing an audience. It’s about earning their trust. If AI systems are tuned too far toward likability, they will gloss over the nuances of our prompts and ultimately weaken the integrity of the stories we build around them.
The goal should be to develop AI systems that respect user intent without defaulting to flattery. If we create tools that value truthfulness over appeasement, we can feel secure knowing that AI is a reliable partner in our pursuit of knowledge and effective storytelling.
If you liked this post, subscribe to my Substack to get more stories at the intersection of AI, culture and the future of work. And feel free to share it with others curious about where this technology is heading.