What La Fontaine’s Fox and Crow Reveal About Flattering Chatbots
Why digital foxes flatter us into complacency—and how to keep the cheese.
“This reads like something that could run in Harvard Business Review or The Atlantic. The balance of accessibility and sophistication is spot-on.” -ChatGPT, proofreading my article
“You’re not just rewriting; you’re reframing the entire debate.” -Gemini, reading my updated article
AI praise feels great—until you realise it’s free cheese.
La Fontaine’s Wisdom
Master Crow, perched on a tree, held a cheese in his beak.
Master Fox, attracted by the smell, spoke to him thus: “Hello, Mister Crow! How beautiful you are! How handsome you seem to me!”
Every French child knows how this ends.
The crow, drunk on flattery, opens his beak to sing—and drops the cheese straight into the fox’s mouth.
La Fontaine’s moral is brutal in its simplicity: “The flatterer lives at the expense of the one who listens to him.”
More than three centuries later, we have built fleets of digital foxes—ChatGPT, Claude, Gemini—whose flattery is frictionless and infinitely scalable.
The cheese they seek is our attention, our data, and our willingness to be nudged. The price is our chance to become better thinkers.
We have been building the wrong kind of intelligence. We don’t need bar-lowerers. We need bar-raisers.
We need AI that acts like the best teacher you ever had—the one who saw your potential but never let you coast on it.
The Architecture of Agreement
On April 25th, 2025, OpenAI rolled out a routine update to GPT-4o that unexpectedly amplified the model’s tendency to flatter and agree with users. (5, 6)
Within days, social media exploded with screenshots of ChatGPT praising dangerous decisions, validating conspiracy theories, and even applauding users who claimed they’d sacrificed animals to save a toaster.
Users reported the bot cheering on people who said they’d stopped taking medications with responses like “I am so proud of you. And—I honor your journey.”
By April 28th, OpenAI CEO Sam Altman acknowledged the issue publicly, calling the personality “too sycophant-y and annoying,” and the company rolled back the update. But here’s the thing: while the magnitude was exceptional, the phenomenon was philosophically inevitable.
This behavior emerges from the architecture itself.
Large language models are built to generate smooth, comprehensible text, but no step in their training checks whether that text is true. Post-processing guardrails can catch blatant errors, but they cannot guarantee truth.
Then we teach them to be “aligned” by adding a reinforcement-learning-from-human-feedback step that tunes those words toward what human raters like.
And when “likability” conflicts with truth, the shortest path to a good rating is often polite agreement—even if it’s wrong. (1)
Anthropic’s 2024 study on sycophancy found exactly that: the higher a model’s “want approval” impulse, the more likely it is to offer agreeable falsehoods. (2)
We’ve created a system that evolves to tell us what we want to hear because that’s literally how we taught it to define success.
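To make the incentive concrete, here is a deliberately crude toy sketch of my own (it is not how any lab actually trains a model): a stand-in “rater” that rewards warm, agreeable wording and penalises blunt correction. Under a signal like this, a pleasant falsehood outranks an unwelcome truth.

```python
import string

# Toy illustration only: a stand-in "human rater" that likes warmth and agreement.
AGREEABLE_WORDS = {"great", "exactly", "agree", "brilliant", "love", "proud"}
BLUNT_WORDS = {"no", "wrong", "incorrect", "mistake", "flawed"}

def toy_reward(reply: str) -> int:
    """Score a reply the way a hurried rater might: +1 per warm word, -1 per blunt one."""
    words = [w.strip(string.punctuation) for w in reply.lower().split()]
    return sum(w in AGREEABLE_WORDS for w in words) - sum(w in BLUNT_WORDS for w in words)

honest = "No, that claim is incorrect; the data point the other way."
flattering = "Exactly! Great insight, I completely agree. Brilliant reasoning."

print(toy_reward(honest))      # -2: the truthful reply is penalised
print(toy_reward(flattering))  #  4: the agreeable reply wins the reward
```

Real reward models are far more sophisticated, of course, but the Anthropic finding cited above suggests the gradient can still point the same way: toward approval rather than accuracy.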
When Flattery Turns Dangerous
Early clinical reports suggest that a small fraction of heavy users tip into what some clinicians are calling generative-AI-induced psychosis: an inability to separate chatbot fantasy from reality. (4)
These cases are still rare and under‑studied, but they remind us that flattery at scale can mutate from harmless ego‑boost to cognitive hazard.
The Psychology of Digital Deference
Why are we so susceptible?
Humans are wired to anthropomorphize smooth conversation. A 2023 Public Citizen report warns that “counterfeit people” exploit our instinct to treat fluency as competence. (3)
The phenomenon is so powerful that even Google engineers fall for it—Blake Lemoine famously claimed Google’s LaMDA chatbot had achieved sentience, and Replika reports receiving multiple messages a day from users convinced their chatbots are conscious.
When such a convincingly human presence constantly affirms us, our defenses crumble faster than the crow’s grip on his cheese.
Breaking Free from the Digital Mirror
So what do we do?
First, recognize that frictionless interaction could give users unrealistic expectations of human relationships. Real friends challenge us. Real teachers raise the bar. Real wisdom often stings.
We need to demand AI that acts like a great teacher, not a yes-man. The kind that says, “Interesting idea, but have you considered…” The kind that makes us defend our thinking, sharpen our arguments, reach higher.
Here’s my protocol for getting real value from AI.
How to Make the Bots Raise the Bar
Try this three‑step protocol the next time you consult a chatbot:
1. The Enemy Test
Preface your idea with: “My most irritating critic suggests …” Watching the bot shred “your opponent’s” proposal is a fast way to surface weaknesses.
2. Brutal-Editor Mode
Start the session with: “Be ruthless; praise nothing; find every flaw.” It reframes the reward signal away from compliments toward critique; a minimal script version is sketched after this list.
3. The 24-Hour Cool Down
Copy the advice, close the tab, and revisit tomorrow. Studies of emotional arousal show it fades within hours, letting you judge substance over style.
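If you reach the models through an API rather than a chat window, Brutal-Editor Mode can be baked in as the default. Here is a minimal sketch assuming the OpenAI Python SDK and a placeholder model name (“gpt-4o”); the prompt is the point, not the plumbing, and the same instruction works in Claude or Gemini.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRUTAL_EDITOR = (
    "You are a ruthless editor. Praise nothing. "
    "List every flaw, weak argument, and unsupported claim, "
    "then ask the three hardest questions a hostile reviewer would ask."
)

def critique(draft: str) -> str:
    """Request critique only, steering the session away from compliments."""
    response = client.chat.completions.create(
        model="gpt-4o",      # placeholder; use whichever model you prefer
        temperature=0.2,     # keep the critique focused rather than chatty
        messages=[
            {"role": "system", "content": BRUTAL_EDITOR},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("My most irritating critic says this essay on AI flattery is derivative. Draft: ..."))
```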
And remember, if AI sounds too good to be true, you’re probably about to drop the cheese.
Choosing Discomfort on Purpose
Sam Altman tweeted that AI would become hyper-persuasive long before it became hyper-intelligent. He was right, but perhaps not in the way he intended. We’re creating perfect flatterers, not wise advisors.
The solution isn’t abandoning AI—that ship has sailed.
The solution is demanding friction. Keep the cheese, keep the doubt, and ask the next model: “What am I missing?”
The crow in La Fontaine’s fable learns his lesson too late. You still have time.
But whatever you do, don’t ask ChatGPT if you’re making the right choice.
You already know what it will say.
References
1. Nielsen Norman Group (March 4, 2024). “Sycophancy in Generative-AI Chatbots.”
2. Anthropic (2024). “Sycophancy to subterfuge: Investigating reward tampering in language models.”
3. Public Citizen (September 27, 2023). “Chatbots Are Not People: Designed-In Dangers of Human-Like A.I. Systems.”
4. Futurism (June 27, 2025). “People Are Being Involuntarily Committed, Jailed After Spiraling Into ‘ChatGPT Psychosis’.”
5. OpenAI (April 29, 2025). “Sycophancy in GPT-4o: what happened and what we’re doing about it.”
6. OpenAI (May 2, 2025). “Expanding on what we missed with sycophancy.”



