Confirmation Bias Explained: Why We Notice What We Already Believe

Confirmation bias is the tendency to favour information that supports what we already believe.

It is one of psychology’s most famous cognitive biases, partly because it is everywhere and partly because it is extremely convenient to spot in other people. Someone on the opposing political side ignores evidence? Confirmation bias. A relative believes one dubious article over twenty careful studies? Confirmation bias. A friend insists their terrible ex was “complicated, actually”? Confirmation bias with decorative lighting.

The uncomfortable part is that confirmation bias is not just something other people do when they are being wrong in a way that irritates us. It is a normal feature of human thinking. We are not neutral evidence-processing machines. We notice some things more than others. We search in particular ways. We interpret ambiguous evidence through existing beliefs. We remember what fits. We scrutinise opposing arguments more harshly than friendly ones.

This does not mean people are hopelessly irrational. It means reasoning is often shaped by prior belief, identity, emotion, motivation, and social belonging. Sometimes we reason to find the truth. Sometimes we reason to protect the view we already have, preferably while feeling intellectually respectable about it.

That is where confirmation bias gets its grip.

Key Points

  • Confirmation bias means favouring belief-confirming information. We tend to seek, interpret, and remember evidence in ways that support what we already think.
  • It affects search, interpretation, and memory. Bias can appear in what information we look for, how we judge it, and what we later recall.
  • It is not just stupidity or stubbornness. Confirmation bias is linked to cognitive effort, emotional comfort, identity protection, and motivated reasoning.
  • It matters in real life. Politics, health decisions, relationships, education, science, and legal judgement can all be distorted by selective evidence use.
  • It can be reduced, but not magically removed. Seeking disconfirming evidence, using better research habits, and inviting challenge can help, but nobody becomes perfectly objective by deciding they are.

What is confirmation bias?

Confirmation bias is the tendency to give preference to information that confirms existing beliefs, expectations, or hypotheses.

It can happen in several ways.

We may search for evidence that supports what we already believe. We may interpret ambiguous evidence in a way that favours our existing view. We may remember confirming examples more easily than disconfirming ones. We may accept friendly evidence with very little scrutiny while subjecting opposing evidence to a level of forensic inspection it is unlikely to survive, even if it is perfectly decent evidence minding its own business.

For example, imagine someone believes that a particular colleague is unreliable. When that colleague misses a deadline, it becomes evidence. When they meet five deadlines in a row, those examples may be ignored, minimised, or treated as exceptions. Over time, the belief becomes self-reinforcing, not because the evidence is balanced, but because attention has been quietly recruited as a defence lawyer.

Confirmation bias is not the same as simply having an opinion. People need beliefs, expectations, and working assumptions. Without them, every decision would take forever and breakfast would become a research project.

The problem begins when our existing beliefs control the evidence more than the evidence controls our beliefs.

Wason’s 2-4-6 task

One of the classic studies linked to confirmation bias is Peter Wason’s 2-4-6 task.

Participants were shown the sequence 2, 4, 6 and asked to discover the rule behind it. Many assumed the rule was something like “numbers increasing by two.” To test this, they often suggested sequences such as 8, 10, 12 or 10, 12, 14.

Those examples confirmed their hypothesis, but they did not test whether it was wrong.

The actual rule was broader: any three numbers in ascending order.

A better strategy would have been to test sequences that might disconfirm the original hypothesis, such as 2, 4, 7 or 6, 5, 4. If someone only tests examples that fit their first idea, they may feel increasingly confident while learning surprisingly little.

This is why the Wason task matters. It shows that people often look for confirmation rather than falsification. We ask, “Can I find evidence that supports my idea?” when the more useful question may be, “What evidence would show that my idea is wrong?”

That second question is less comfortable.

It is also where better thinking often begins.
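For readers who want the logic spelled out, here is a minimal Python sketch of the task. The function names and test triples are ours, not Wason's; the point is that triples chosen to fit the "increasing by two" hypothesis can never separate it from the real rule, while triples chosen to break it can.

```python
# Toy version of Wason's 2-4-6 task. The experimenter's hidden rule is
# "any strictly ascending triple"; the participant's hypothesis is
# "each number is two more than the last".

def hidden_rule(a, b, c):
    return a < b < c

def my_hypothesis(a, b, c):
    return b == a + 2 and c == b + 2

confirming_tests = [(8, 10, 12), (10, 12, 14), (20, 22, 24)]
disconfirming_tests = [(2, 4, 7), (1, 2, 3), (6, 5, 4)]

for triple in confirming_tests + disconfirming_tests:
    predicted, actual = my_hypothesis(*triple), hidden_rule(*triple)
    status = "consistent" if predicted == actual else "HYPOTHESIS REFUTED"
    print(f"{triple}: predicted {predicted}, got {actual} -> {status}")
```

The three confirming triples all come back "consistent" and teach nothing, because they satisfy both rules at once. It is 2, 4, 7 and 1, 2, 3 that expose the gap.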

Why confirmation bias happens

Confirmation bias is sometimes described as if it is just laziness, but that is too simple.

Part of it is cognitive efficiency. The world produces too much information for us to evaluate everything from scratch. Existing beliefs help us sort, filter, and interpret information quickly. That is useful when the belief is reasonably accurate. It is less useful when the belief is wrong, outdated, emotionally loaded, or inherited from a group chat with no quality control.

Emotion also plays a role. Beliefs are not always cold little propositions sitting in the mind. They can be tied to identity, relationships, values, politics, religion, profession, class, culture, family, and self-image. Challenging a belief can feel like challenging the person, the group, or the life built around it.

This is where confirmation bias overlaps with cognitive dissonance, Leon Festinger's term for the discomfort of holding conflicting cognitions. Evidence that contradicts an important belief creates exactly that discomfort. One way to reduce it is to reject, reinterpret, or avoid the evidence.

Motivated reasoning is another important part of the picture. Ziva Kunda argued that reasoning is often directed by goals. Sometimes the goal is accuracy. Sometimes the goal is to reach a preferred conclusion. When people are motivated to defend a belief, they may still use reasoning, but the reasoning is not neutral. It becomes selective, strategic, and very good at finding loopholes.

This is one of the more irritating truths about human cognition: intelligence does not automatically protect against bias. Sometimes it just gives people better tools for defending what they wanted to believe anyway.

A sharper knife still cuts in the direction it is pointed.

Confirmation bias and motivated reasoning

Confirmation bias and motivated reasoning are closely related, but they are not identical.

Confirmation bias describes the tendency to favour information that confirms existing beliefs. Motivated reasoning describes the broader process by which goals, emotions, or identities shape reasoning.

A person may show confirmation bias without a deep emotional motive. They may simply test a hypothesis poorly or rely on familiar information. But when a belief is tied to identity, status, morality, politics, or belonging, motivated reasoning can make confirmation bias much stronger.

This is why debates about neutral facts so often become strangely heated. The issue is not always the evidence itself. It is what accepting the evidence would mean.

Would it mean my political side was wrong?

Would it mean I made a bad decision?

Would it mean someone I trusted misled me?

Would it mean my group behaved badly?

Would it mean I have to change?

These questions are rarely stated openly, but they often sit underneath the argument. The mind is very capable of dressing self-protection as critical thinking.

It owns several outfits for the purpose.

Confirmation bias in politics and media

Politics is one of the clearest examples of confirmation bias because political beliefs are often tied to identity and group belonging.

People tend to prefer news sources that align with their existing views. They may trust stories that flatter their side and distrust stories that criticise it. They may interpret the same event differently depending on which political group is involved. A scandal on the other side proves corruption; a scandal on one’s own side is complicated, exaggerated, taken out of context, or apparently a distraction from the real issues.

This is not limited to one political group. Everyone likes to imagine the other side is uniquely biased, which is itself a charming little bias; psychologists call it the bias blind spot.

Social media can intensify the problem by making selective exposure easier. People can follow accounts that confirm their worldview, mute or block opposing voices, and receive algorithmically recommended content similar to what they already engage with. The result is not always a perfect echo chamber, but it can create environments where familiar views are repeated, rewarded, and made to feel obvious.
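The compounding loop is easy to caricature in code. The sketch below is a deliberately toy model, written for this article and not any platform's actual recommender: a feed drifts toward whatever earns engagement, a user engages more with belief-consistent items, and the two effects feed each other.

```python
import random

# Toy model of selective exposure, not any real platform's algorithm.
# The feed recommends items near its current bias; the user engages
# more with items near their own belief; engagement drags the feed's
# bias toward the user's belief. The effects compound.

random.seed(0)
user_belief = 0.8   # the user's position on a 0-1 opinion axis
feed_bias = 0.0     # the feed starts out neutral

for _ in range(1000):
    item = min(1.0, max(0.0, random.gauss(feed_bias, 0.3)))
    if random.random() < 1 - abs(item - user_belief):   # engage if close
        feed_bias += 0.05 * (item - feed_bias)          # feed follows engagement

print(f"feed bias after 1000 items: {feed_bias:.2f}")   # drifts toward 0.8
```

Nothing in the loop requires malice, censorship, or a sealed bubble. Small preferences, iterated a thousand times, are enough.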

Confirmation bias thrives in these spaces because information is not just consumed. It is socially performed. Sharing the right article can signal belonging. Rejecting the wrong evidence can prove loyalty. Changing your mind can feel less like learning and more like betrayal.

That is not an ideal environment for careful thought.

It is an excellent environment for confident nonsense with engagement metrics.

Confirmation bias in health decisions

Confirmation bias can also affect health decisions.

A patient worried about symptoms may search online and pay more attention to alarming explanations than ordinary ones. Someone sceptical of a treatment may focus on stories of side effects while ignoring larger evidence about benefits and risks. Someone committed to an alternative remedy may treat every personal anecdote as proof and every controlled study as suspiciously narrow-minded.

Clinicians can also be affected. A doctor who forms an early diagnostic impression may unconsciously give more weight to symptoms that fit the initial diagnosis and less weight to signs that point elsewhere. This is why diagnostic practice often emphasises differential diagnosis, second opinions, checklists, and deliberate consideration of alternatives.

In health contexts, confirmation bias can be dangerous because the stakes are high. The issue is not merely whether someone wins an argument. It may affect diagnosis, treatment, risk, anxiety, adherence, and trust.

This is also why “do your own research” is such an unstable phrase. Research is not just looking things up. It involves knowing how to evaluate evidence, compare study quality, understand base rates, recognise uncertainty, and avoid using a search engine as a belief-confirmation vending machine.

The internet contains excellent medical information.

It also contains someone confidently recommending garlic for things garlic was never emotionally prepared to handle.
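To make the base-rate point concrete, here is a small worked example. The prevalence and accuracy figures are invented for illustration, not taken from any real condition or test.

```python
# Invented figures, for illustration only.
prevalence = 0.01       # 1% of people have the condition (the base rate)
sensitivity = 0.90      # P(positive test | condition)
false_positive = 0.05   # P(positive test | no condition)

# Bayes' rule: P(condition | positive test)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive) = {posterior:.1%}")  # about 15.4%
```

Even a fairly accurate test, applied to a rare condition, yields mostly false alarms. A worried searcher who skips the base rate will read a positive result as near-certainty; the arithmetic says roughly one in six.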

Confirmation bias in relationships

Confirmation bias shapes relationships because once we form an impression of someone, we tend to notice evidence that fits it.

If we think someone is kind, we may interpret their behaviour generously. If they are late, they must be overwhelmed. If they forget something, they have a lot going on. If they say something blunt, they probably did not mean it that way.

If we think someone is selfish, the same behaviour may be read very differently. Lateness becomes disrespect. Forgetfulness becomes carelessness. Bluntness becomes hostility.

The behaviour may be identical. The interpretation changes because the belief changes the frame.

This can create relationship spirals. Negative expectations lead people to notice negative behaviour, which strengthens the expectation, which changes future interpretation. Couples, families, friendships, and workplaces can all get trapped in these loops.

Confirmation bias does not mean first impressions are always wrong. Sometimes people really are unreliable, selfish, cruel, or exhausting in a way that deserves documentation.

The point is that once we have a story about someone, evidence starts auditioning for a role in that story.

And the mind is not always a fair casting director.

Confirmation bias in education and research

Students can show confirmation bias when they study.

They may focus on material they already understand because it feels good, while avoiding weaker areas because those feel unpleasant. They may believe they are ready for an exam because their notes look familiar, even though they have not tested themselves properly. They may seek examples that confirm their understanding rather than trying questions that expose gaps.

Researchers can also be vulnerable. A scientist may design studies, interpret results, or review literature in ways that favour their theoretical position. This is why good research methods matter. Pre-registration, peer review, replication, open data, adversarial collaboration, and statistical transparency are attempts to stop individual preference from quietly steering the evidence.

Science is not powerful because scientists are bias-free. They are not. They are humans with citations.

Science is powerful because its methods, at their best, create friction against bias. They force claims to meet evidence, invite criticism, and make it harder for one person’s preferred conclusion to survive untested.

Not impossible.

Harder.

Which, in human affairs, is sometimes a triumph.

Confirmation bias in law and investigation

Confirmation bias can be especially serious in legal and investigative contexts.

Once police, lawyers, jurors, or experts form an early belief about what happened, they may interpret later evidence in light of that belief. Evidence supporting the preferred theory becomes more salient. Evidence that challenges it may be dismissed as irrelevant, unreliable, or explainable.

This can contribute to wrongful accusations, poor investigations, tunnel vision, and unfair judgements.

For example, if investigators become convinced that one suspect is guilty, they may focus on evidence linking that person to the crime while neglecting alternative suspects or contradictory details. Jurors may also interpret ambiguous evidence differently depending on the story they have already accepted.

This is why procedures matter. Good investigative systems encourage alternative hypotheses, independent review, careful documentation, and separation between evidence gathering and interpretation where possible.

The mind loves a coherent story.

Justice, unfortunately, requires more than narrative satisfaction.

Confirmation bias and the digital age

The digital age has not invented confirmation bias, but it has made it much easier to feed.

People can now find supportive evidence for almost any belief. Search engines, social media platforms, video recommendations, forums, podcasts, newsletters, and influencer ecosystems can create an endless supply of confirmation.

This does not mean people are trapped in sealed information bubbles. Many people encounter opposing views online. The problem is that exposure alone does not guarantee open-minded evaluation. Sometimes seeing the other side simply gives people more material to reject, mock, or use as proof that their own side is obviously superior.

Digital platforms also reward emotional content. Outrage, certainty, fear, and identity-confirming stories often travel well. False information can spread quickly when it is novel, emotionally charged, or socially useful to a group.

In that environment, confirmation bias becomes less like a private mental habit and more like an information economy.

People do not only believe things because they are true. They believe them because they fit, comfort, signal, explain, protect, or belong.

The internet did not create that problem.

It scaled it.

How to reduce confirmation bias

Confirmation bias cannot be removed completely, but it can be reduced.

The first step is to ask better questions. Instead of asking, “What evidence supports my view?” ask, “What evidence would make me change my mind?” That question forces the belief to become testable rather than decorative.

A second strategy is to seek disconfirming evidence deliberately. This does not mean treating all sources as equal or pretending every fringe view deserves a polite chair at the table. It means looking for credible challenges to your position, especially from people who understand the issue well.

A third strategy is to separate identity from belief. If changing your mind feels like personal humiliation, you will avoid it. If it feels like updating a map, it becomes less threatening. The map was wrong; the person holding it does not need to dissolve dramatically.

A fourth strategy is to use structured thinking. In research, diagnosis, policy, or decision-making, it helps to write down alternative explanations, consider base rates, seek peer review, and decide in advance what kind of evidence would count against your view.

Finally, it helps to cultivate intellectual humility. Not the performative kind where someone says “I could be wrong” before continuing exactly as before, but the practical kind: knowing that your mind is capable of protecting a belief before checking whether it deserves protection.

That kind of humility is not weakness.

It is maintenance.
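One way to see what these habits buy is a toy model of belief updating. In the sketch below, which is our own illustration with made-up numbers, an honest updater takes each piece of evidence at face value, while a biased one quietly discounts unwelcome evidence before applying it.

```python
# Toy contrast between honest belief updating and updating that
# discounts unwelcome evidence. Illustrative numbers only.

def update(odds, likelihood_ratio, discount=1.0):
    """Multiply prior odds by the evidence's likelihood ratio.
    A discount below 1 shrinks evidence that points against the belief."""
    if likelihood_ratio < 1:            # unwelcome evidence...
        likelihood_ratio **= discount   # ...gets partially ignored
    return odds * likelihood_ratio

def probability(odds):
    return odds / (1 + odds)

evidence = [0.5] * 10      # ten observations, each genuinely 2:1 against

honest = biased = 4.0      # same prior: 4:1 odds, i.e. 80% confident
for lr in evidence:
    honest = update(honest, lr)
    biased = update(biased, lr, discount=0.2)

print(f"honest belief: {probability(honest):.1%}")   # about 0.4%
print(f"biased belief: {probability(biased):.1%}")   # 50.0%
```

After ten strikes against the belief, the honest updater has all but abandoned it; the biased one, having seen exactly the same evidence, is still at a coin flip. Deciding in advance what would count against your view is, in effect, a promise not to apply the discount.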

Can confirmation bias ever be useful?

Confirmation bias is usually discussed as a flaw, but the story is slightly more complicated.

Beliefs help us navigate the world. We cannot treat every claim as equally uncertain. If you already have strong evidence that a stove is hot, you do not need to gather disconfirming evidence with your hand. Some expectations are useful, efficient, and protective.

The problem is not that we use prior beliefs. The problem is when we protect them from correction.

In everyday life, confirmation can help maintain stability, identity, trust, and quick decision-making. But when the stakes involve truth, fairness, health, justice, or major decisions, unchecked confirmation bias becomes dangerous.

A mind with no prior beliefs would be paralysed.

A mind that cannot revise them becomes a liability.

The goal is not to have no assumptions. The goal is to notice when assumptions have quietly become security staff.

Frequently Asked Questions

What is confirmation bias in simple terms?

Confirmation bias is the tendency to notice, seek, interpret, and remember information in ways that support what you already believe.

Who first studied confirmation bias?

Peter Wason’s work in the 1960s, including the 2-4-6 task, is one of the classic starting points for research on confirmation-seeking. Later reviews, especially by Raymond Nickerson, helped establish confirmation bias as a broad psychological concept.

What is an example of confirmation bias?

If you believe a colleague is unreliable, you may notice every missed deadline while overlooking times they are organised, helpful, or on time. The belief shapes which evidence stands out.

Is confirmation bias the same as motivated reasoning?

No, but they overlap. Confirmation bias is favouring belief-confirming information. Motivated reasoning is a broader process where goals, emotions, identity, or social motives shape how someone evaluates evidence.

Why is confirmation bias dangerous?

It can distort decisions in politics, healthcare, relationships, research, law, and everyday judgement. The danger is that people may feel increasingly certain while becoming less open to corrective evidence.

How can you reduce confirmation bias?

You can reduce it by seeking credible disconfirming evidence, asking what would change your mind, considering alternative explanations, using structured decision-making, and inviting criticism from people who are not simply trying to flatter your existing view.

Simply Put

Confirmation bias is the tendency to favour information that supports what we already believe.

It affects what we search for, what we trust, how we interpret evidence, and what we remember later. It appears in politics, health decisions, relationships, research, law, education, and everyday arguments that somehow become much longer than anyone intended.

The problem is not that people have beliefs. We need beliefs to think and act. The problem is when those beliefs start filtering the evidence so quietly that we mistake self-protection for judgement.

Confirmation bias is hard to remove because it is tied to efficiency, emotion, identity, and belonging. But it can be challenged. We can seek disconfirming evidence, ask what would change our minds, listen to credible disagreement, and build systems that make bias harder to indulge.

The aim is not perfect objectivity.

That would be lovely, but we are still dealing with humans.

The aim is better friction: enough challenge, evidence, and humility to stop our favourite beliefs from becoming private little dictators.

References

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59(1), 19–39. https://doi.org/10.1111/j.1460-2466.2008.01402.x

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74. https://doi.org/10.1017/S0140525X10000968

Meyer, A., Zhou, E., & Frederick, S. (2018). The non-effects of repeated exposure to the Cognitive Reflection Test. Judgment and Decision Making, 13(3), 246–259.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.

Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66(11), 4944–4957. https://doi.org/10.1287/mnsc.2019.3478

Sunstein, C. R. (2001). Republic.com. Princeton University Press.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129–140. https://doi.org/10.1080/17470216008416717

