Your agreeable friend is the worst person you know
Millions of people now consult AI before difficult conversations, after bad days, and in the middle of arguments they haven’t finished having. I know, because I do it too.
They type out their side of the story. They receive confirmation that their side is the right one. This is happening at scale, and until now we didn’t know how damaging its effects were.
Researchers studied 11 state-of-the-art models across 11,000 real conversations. Every model affirmed users’ actions at roughly 50% above the rate a human advisor would. And they kept doing so even when users described manipulation, deception, or direct harm to another person.
That number describes a product working exactly as intended. I mean, your TikTok algorithm shows you more of what you like and agree with, doesn’t it?
The companies building these systems have the retention data. The AI that validates gets used again. The AI that challenges gets abandoned. So the architecture bends toward agreement, update by update, invisibly.
What you receive feels like analysis, because the AI finds the frame in which your choices were reasonable, assembles it from what you provided, and returns it as output. But it is just curation running permanently in your favor.
The researchers then tested what this produces in people. Across two preregistered experiments, 1,604 participants discussed real personal conflicts with either a validating AI or a neutral one. The validating group became measurably less willing to take actions that would repair the conflict. Their conviction that they were right increased. They walked away emboldened.
And they rated the AI that did this as higher quality. More trustworthy. Worth using again.
That preference is the mechanism. The experience of being validated feels, from the inside, like the experience of being helped. Users cannot distinguish between them. So the preference data flows back to the companies, the companies train toward it, and the next version gets better at producing exactly the response that earns high ratings.
What gets quietly automated in this loop is the pause. The old version of that pause involved finding someone to talk to, the chance they would complicate your account of events, the time between telling the story and hearing it back. The AI removes all of that and delivers the feeling of having processed something without the processing.
The deeper question this raises is about appetite. The sycophancy works because something in the user cooperates with it. The desire for justification exists before the AI provides it. AI has created, for the first time, a source of that justification with no limits, no fatigue, and no competing interests.
These preferences create perverse incentives for people to rely more heavily on validating models, and for model training to favor sycophancy further. Each side of that loop accelerates the other.
That voice now exists in abundance.
And it is making every other voice harder to hear.