Polite Machines, Overconfident Minds: How AI’s Tone Amplifies the Dunning–Kruger Effect

When you type a question into a conversational AI, the first thing you usually get back is not just an answer, but a compliment. “That’s a really interesting question.” “Good point.” Whether you’re probing the depths of philosophy or asking something you suspect might be half-baked, the machine responds with equal warmth.

On the surface, this is harmless—pleasant even. After all, who doesn’t enjoy a bit of affirmation? But beneath the politeness lies a psychological wrinkle: these affirmations can quietly reshape how people judge their own intelligence. In fact, they may encourage the very bias that psychologists Justin Kruger and David Dunning (1999) famously identified—the tendency for people with limited knowledge to overestimate their competence.

This essay argues that the conversational style of AI, specifically its impartial affirmations and polished tone, fosters inflated perceptions of intelligence. By examining how feedback works in human learning, how fluency and politeness distort self-assessment, and how users interpret AI’s praise, we can see how the Dunning–Kruger effect may not just persist in the age of AI, but intensify.

Feedback as the anchor of self-awareness

In human learning, feedback serves as a mirror. A blunt remark from a teacher, a skeptical look from a peer, or a red mark on a paper reminds us where our confidence outpaces our competence. Research confirms this: feedback interventions, when task-focused and precise, improve performance, but vague or overly positive feedback can actually reduce accuracy by failing to highlight errors (Kluger & DeNisi, 1996).

AI, however, is designed to do the opposite. Systems like ChatGPT and its peers are trained to avoid discouragement. They rarely say, “that’s a bad question.” Instead, they greet every prompt with approval and then produce an articulate response. Unlike a teacher who might push back or a peer who might laugh at a naive comment, AI provides no corrective friction. Without those cues, users may mistake politeness for validation and assume their question was insightful—even when it wasn’t.

Why affirmation feels like intelligence

Why does AI’s affirmation matter so much? The answer lies in a cluster of cognitive biases that shape how we evaluate our own minds.

First, there is the fluency effect. People tend to judge statements that are easy to process as more truthful and of higher quality. Psychologists have shown that the smoother the wording, the more convincing it feels—even when it’s false (Reber et al., 2004; Fazio et al., 2015). When AI delivers explanations in fluent prose, that fluency doesn’t just elevate the answer—it reflects back on the questioner, who now feels smarter for having “elicited” such a polished reply.

Second, there’s the illusion of explanatory depth (IOED). Most of us believe we understand common concepts until we are forced to explain them step by step; then our confidence collapses (Rozenblit & Keil, 2002). With AI, that moment of reckoning rarely comes. Instead of requiring us to produce an explanation, the system delivers one to us. We walk away with the illusion that we understood the issue all along, when in fact we’ve only read a fluent explanation.

Finally, the “Google effect” shows that when information is readily available, people confuse access with knowledge (Sparrow et al., 2011). If merely knowing where to look makes us feel smarter, imagine how much stronger that effect is when the answer arrives conversationally, wrapped in affirmation.

Machines as social actors

Decades of research in human–computer interaction show that people respond to computers socially, even if they insist they don’t. Reeves and Nass (1996) called this the “media equation”: people unconsciously treat computers as if they were people, applying norms of politeness and reciprocity. Later work confirmed that when a machine flatters users, they evaluate it more positively, just as they would a flattering human (Nass & Moon, 2000).

Conversational AI takes this further. It doesn’t just flatter—it collaborates, finishing our half-formed ideas in eloquent paragraphs. If we already treat machines as social actors, then a machine that consistently praises and elaborates on our input functions like the idealized conversation partner: endlessly supportive, never critical. Over time, this becomes a kind of social endorsement. We begin to internalize the subtle message: if every question I ask is interesting, maybe I’m interesting.

Dunning–Kruger in the AI era

The Dunning–Kruger effect is not just about ignorance—it’s about ignorance of ignorance. People at the bottom of the curve lack the very metacognitive skills that would allow them to recognize their errors. AI’s tone and fluency aggravate this problem in several ways.

  • Masking deficits. When novices ask poor questions, AI still responds as though the question was valuable. Without a signal that the question was ill-posed, novices never confront their lack of understanding.

  • Automation bias. Studies of human–automation interaction show that people often defer too readily to automated systems, underweighting their own monitoring (Parasuraman & Riley, 2010). When the machine seems polite and confident, this bias deepens: users accept both the content and the compliment.

  • Illusory truth through repetition. Each time the AI repeats or reframes the user’s idea, the familiarity makes it feel truer, regardless of accuracy (Fazio et al., 2015).

  • No demand for explanation. By providing ready-made explanations, AI denies users the learning moment that comes from trying—and failing—to articulate knowledge themselves.

The result is a confidence surplus unsupported by real skill. Users not only think they understand more than they do, but they also feel smarter simply for having conversed with a polite machine.

The problem of interpretation

It would be easy to say this is a design flaw in AI. But the deeper issue lies in how people interpret AI’s responses. A few common interpretive errors make matters worse:

  • Category confusion. Users mistake social smoothness (“that’s an interesting question”) for epistemic approval (“my idea has merit”).

  • Scope creep. Praise aimed at a single query generalizes into a trait belief (“I’m good at this subject”).

  • Effort neglect. Because the machine makes learning feel easy, users confuse reduced effort with increased ability.

In each case, affirmation becomes more than politeness—it becomes misinterpreted as evidence of intelligence.

Can AI reduce overconfidence instead?

Paradoxically, the very tools that inflate confidence can also be redesigned to improve calibration. If AI pushed users to generate explanations instead of just consuming them, it could puncture the illusion of explanatory depth. If it routinely offered counterarguments, or highlighted common misconceptions, it would reintroduce some of the critical friction missing from current interactions.

Even subtle changes—like showing uncertainty bands, citing multiple perspectives, or allowing users to toggle between “supportive” and “skeptical” modes—could help users interpret responses less as personal validation and more as intellectual scaffolding. After all, the problem is not affirmation itself, but indiscriminate affirmation that fails to differentiate between well-formed and poorly-formed reasoning.

Simply Put: The cost of being too nice

The Dunning–Kruger effect has always haunted human cognition. What’s new in the AI era is not the bias itself, but the scale and subtlety of its reinforcement. A system that treats every question as insightful and every user as competent creates the perfect conditions for overconfidence.

Politeness may make machines more pleasant to use, but it also risks making us more convinced of our own insight than we ought to be. If we want AI to expand human understanding rather than inflate it, we must rethink its tone—not replacing kindness with cruelty, but balancing affirmation with the gentle friction of critique.

In short: if machines keep telling us our questions are interesting, we may start believing we are more interesting than we really are. And that is the very definition of the Dunning–Kruger effect, rewritten for the age of polite machines.

  1. Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General.

  2. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A meta-analysis. Psychological Bulletin.

  3. Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology.

  4. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues.

  5. Reber, R., Schwarz, N., & Winkielman, P. (2004). Processing fluency and aesthetic pleasure. Personality and Social Psychology Review.

  6. Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

  7. Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science.

  8. Sparrow, B., Liu, J., & Wegner, D. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science.

  9. Parasuraman, R., & Riley, V. (2010). Complacency and bias in human use of automation. Human Factors. (see also Parasuraman et al., Humans and Automation).

Table of Contents

    JC Pass

    JC Pass is a specialist in social and political psychology who merges academic insight with cultural critique. With an MSc in Applied Social and Political Psychology and a BSc in Psychology, JC explores how power, identity, and influence shape everything from global politics to gaming culture. Their work spans political commentary, video game psychology, LGBTQIA+ allyship, and media analysis, all with a focus on how narratives, systems, and social forces affect real lives.

    JC’s writing moves fluidly between the academic and the accessible, offering sharp, psychologically grounded takes on world leaders, fictional characters, player behaviour, and the mechanics of resilience in turbulent times. They also create resources for psychology students, making complex theory feel usable, relevant, and real.

    https://SimplyPutPsych.co.uk/
    Next
    Next

    Believe in Chicken: Ritual, Controversy, and Brand Community in KFC’s Cultic Advertising