The Milgram Experiment Explained: Obedience, Authority, and Why People Comply
Stanley Milgram’s obedience experiment is one of psychology’s most famous studies, which is not always a compliment.
It has been taught for decades as a chilling lesson about authority. Ordinary people, placed under pressure by a legitimate authority figure, may do things they believe are harmful, even when those actions conflict with their own moral judgement.
That basic lesson remains disturbing. It is also too simple.
The popular version of the Milgram experiment often turns people into moral robots. Authority speaks, obedience follows, and the human conscience apparently packs up early and leaves by the side door. But the real study, and the later arguments around it, are messier than that.
Many participants did not obey calmly. They sweated, trembled, laughed nervously, protested, questioned the experimenter, or tried to stop. Some continued anyway. That is where the study becomes more psychologically interesting. The question is not just why people obey authority. It is why people may continue while part of them already knows something is wrong.
Milgram’s work still matters, but it should not be treated as a clean little parable about blind obedience. It is a study about authority, institutional legitimacy, gradual escalation, moral conflict, social identity, responsibility, and the unsettling ease with which people can be drawn into harmful systems.
Which is a less tidy lesson. Naturally, it is also the more useful one.
Key Points
- Milgram’s obedience study tested how far people would go under authority. Participants believed they were giving increasingly severe electric shocks to another person during a learning task.
- In the best-known condition, 65% continued to the maximum 450 volts. The shocks were fake, but many participants believed the situation was real and showed visible distress.
- The study is not just about “blind obedience.” Many participants protested, hesitated, or became upset while still continuing.
- Modern interpretations emphasise identity and social purpose. Some researchers argue participants were partly “working toward” the experimenter’s scientific goal rather than simply obeying authority like machines.
- The ethics and evidence remain controversial. Milgram’s deception, participant distress, and later archival critiques mean the study should be taught carefully, not as a simple morality tale.
What was the Milgram experiment?
Stanley Milgram conducted his obedience studies at Yale University in the early 1960s.
Participants were told they were taking part in a study of learning and memory. Each participant was assigned the role of “teacher,” while a second person, apparently a fellow volunteer, was assigned the role of “learner.” In reality, the learner was a confederate working with the experimenter.
The teacher was instructed to test the learner on word pairs. Each time the learner made a mistake, the teacher was told to administer an electric shock, moving one step up the voltage scale with every error.
The shock generator was labelled from 15 volts to 450 volts. The labels became increasingly alarming, moving from mild shock through severe shock and eventually to a final range marked with danger warnings.
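To make that panel concrete, here is a minimal sketch of the escalation structure, assuming the commonly reported design of thirty switches in 15-volt increments (Milgram, 1963). The label bands are paraphrased from secondary descriptions, not verbatim panel text.

```python
# A sketch of the shock generator's escalation structure, assuming the
# commonly reported design: 30 switches in 15-volt increments.
# The label bands below are paraphrased, not verbatim panel text.

LEVELS = [15 * step for step in range(1, 31)]  # 15 V up to 450 V

def label(volts: int) -> str:
    """Approximate verbal designation for a given switch."""
    if volts <= 120:
        return "slight to moderate shock"
    if volts <= 240:
        return "strong to very strong shock"
    if volts <= 360:
        return "intense to extreme intensity shock"
    if volts <= 420:
        return "danger: severe shock"
    return "XXX"

for volts in LEVELS:
    print(f"{volts:>3} V  {label(volts)}")
```

Enumerated this way, the design’s pressure is visible in its granularity: no single step from one switch to the next is large, which matters for the escalation argument later in this piece.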
The shocks were not real. The learner was not actually being harmed. But the participant did not know that.
As the experiment progressed, the learner gave scripted protests. He complained about pain, mentioned a heart condition, demanded to be let out, and eventually stopped responding. When participants hesitated, the experimenter used a series of verbal prompts, including statements such as “Please continue” and “The experiment requires that you continue.”
The participant was placed in a conflict: obey the experimenter and continue the study, or refuse and stop harming someone who seemed to be in distress.
That conflict is the experiment.
What did Milgram find?
In the best-known baseline condition, Milgram reported that 65% of participants continued to the maximum level of 450 volts.
That figure became famous because it was so far beyond what people expected. Before the study, Milgram had asked psychiatrists, students, and others to estimate how many participants would go all the way. Most predicted only a tiny minority would; the psychiatrists’ estimate was roughly one person in a thousand.
Instead, a majority continued to the end.
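For scale, the famous percentage is simple arithmetic over a small sample. A minimal sketch, assuming the commonly reported baseline figures from Milgram (1963), 40 participants of whom 26 went to the maximum:

```python
# Baseline arithmetic, assuming the commonly reported figures from
# Milgram (1963): 40 participants, 26 of whom pressed the 450-volt switch.
fully_obedient = 26
participants = 40
print(f"{fully_obedient}/{participants} = {fully_obedient / participants:.0%}")
# -> 26/40 = 65%
```

Worth noticing: the headline number comes from a single 40-person condition, one of many variations Milgram ran, and obedience rates varied widely across those variations.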
But the 65% figure should not be interpreted as simple, cheerful obedience. The participants were often visibly distressed. Some protested. Some argued. Some asked who would be responsible. Some laughed nervously. Some showed signs of extreme tension.
This is important because the study is sometimes taught as if people calmly abandoned their morality the moment a man in a lab coat looked official enough.
That is not quite right.
Many participants experienced moral conflict. They were not necessarily indifferent to the learner’s suffering. The unsettling part is that distress did not always lead to refusal.
This is the psychology worth sitting with. People can feel conflicted and still comply. They can object and still continue. They can dislike what they are doing and still do it because the situation has been arranged so that stopping feels difficult, awkward, disobedient, or personally responsible in a way continuing somehow does not.
Human beings, sadly, are very good at carrying discomfort while remaining inside the system producing it.
Why did Milgram run the study?
Milgram’s study was shaped by the post-war context.
He was interested in obedience partly because of the Holocaust and trials such as Adolf Eichmann’s, in which Nazi officials claimed they had simply followed orders. The experiment asked whether ordinary people could be led by authority into actions that seemed to violate their own moral standards.
That historical link helped make the study famous. It also helped make it controversial.
The danger is that Milgram’s findings can be flattened into the claim that atrocities happen because ordinary people blindly obey. That is too thin. Real-world harm involves ideology, institutions, dehumanisation, propaganda, group identity, fear, career incentives, conformity, prejudice, bureaucracy, law, violence, and active commitment, not just a person in a white coat saying “continue.”
Milgram’s experiment can help us understand one part of harmful obedience. It cannot explain genocide, war crimes, institutional abuse, or state violence by itself.
Still, it points to something important. Harm does not always require a sadist. Sometimes it requires a system that distributes responsibility, normalises escalation, legitimises authority, and gives people a role to perform.
That is grim enough without pretending it explains everything.
Milgram’s explanation: the agentic state
Milgram later proposed the idea of the agentic state, set out most fully in his 1974 book Obedience to Authority.
In this state, a person sees themselves as an agent carrying out the wishes of an authority figure rather than as the author of their own actions. Responsibility is psychologically shifted upward. The participant may think, in effect, “I am not deciding this. The experimenter is responsible.”
This helps explain why some participants continued even when uncomfortable. If they saw the experimenter as legitimate, expert, and responsible, they could continue while feeling that the moral burden did not fully belong to them.
The agentic state is useful, but it should not be treated as the final answer.
It risks making obedience sound too passive, as though authority simply switches off conscience. Many participants were not passive. They negotiated, resisted, asked questions, and showed distress. Some seemed to be trying to reconcile competing demands: the learner’s suffering, the experimenter’s authority, the scientific purpose, their own discomfort, and the pressure not to disrupt the situation.
People were not just obeying. They were trapped in a role, inside an institution, under pressure from a socially legitimate authority, while trying to make sense of what the situation required.
That is less clean than “agentic state.” It is also more believable.
Situational pressures: why continuing became easier than stopping
Milgram’s experiment worked because several situational pressures came together.
The first was legitimate authority. The study took place at Yale, under the supervision of an experimenter who appeared calm, professional, and responsible. The setting gave the procedure a sense of scientific legitimacy. People are more likely to comply when authority feels official and institutionally backed.
The second was gradual escalation. Participants did not begin at 450 volts. They started low and increased step by step. Each action made the next one feel like a continuation rather than a dramatic new decision. This is one of the more unpleasant lessons of the study. People may not leap into harmful behaviour. They may walk there in small, defensible steps.
The third was diffusion of responsibility. The experimenter appeared to take responsibility for the procedure. When participants asked who would be accountable, the structure of the experiment encouraged them to see responsibility as belonging to the authority figure.
The fourth was commitment to the role. Once the participant had agreed to be the teacher, had begun the task, and had already administered shocks, stopping became socially and psychologically harder. Refusal would mean breaking the frame of the experiment.
The fifth was lack of easy dissent. In some variations of Milgram’s studies, the presence of dissenting peers made obedience drop sharply: when two fellow teachers, actually confederates, refused to continue, only around 10% of participants went all the way. That tells us something important. Resistance is easier when someone else has already made disobedience visible.
Authority does not work in isolation. It works through settings, roles, expectations, escalation, and the social cost of refusal.
Very rarely does evil arrive wearing a cape. More often it arrives as a procedure.
Modern reinterpretation: not just blind obedience
Later researchers have challenged the standard “blind obedience” interpretation.
Stephen Reicher, Alexander Haslam, and Joanne Smith argued that participants may have continued not simply because they obeyed authority, but because they identified with the scientific project represented by the experimenter. In this view, participants were not merely submitting. They were, at least partly, “working toward” what they understood as a valuable scientific goal.
This is called an identification-based followership account.
The distinction matters. If participants saw themselves as helping science, contributing to knowledge, or doing something important under expert supervision, their behaviour becomes less like passive obedience and more like active participation in a shared project.
That does not make it morally better. It may make it more alarming.
People are often willing to do troubling things when those things are connected to a larger purpose they see as legitimate. Science, security, loyalty, order, progress, national interest, team success, professional duty, efficiency. The nouns vary. The structure is familiar.
This interpretation also helps explain why authority is more powerful when it feels aligned with identity. People may comply not only because they are afraid of punishment, but because they believe the authority represents something meaningful, necessary, or good.
That is a more dangerous form of obedience. It does not feel like surrender. It feels like contribution.
Ethical problems with the study
Milgram’s experiment became one of psychology’s classic ethics cases.
The study involved deception. Participants were misled about the purpose of the research and about whether the learner was actually receiving shocks.
It also involved significant psychological stress. Some participants believed they were seriously harming another person. Their distress was not incidental. It was built into the design.
There were concerns about informed consent, the right to withdraw, and whether the experimenter’s prompts made withdrawal feel genuinely available. Being told “you may leave at any time” is one thing. Being told “the experiment requires that you continue” while sitting in front of an authority figure at Yale is quite another.
Diana Baumrind was one of the earliest critics, arguing in 1964 that Milgram’s study placed participants under unacceptable emotional strain and risked damaging trust in psychological research.
Milgram defended the study, noting that participants were debriefed and that many later reported being glad to have taken part. But the ethical discomfort did not disappear. The study helped shape later discussions about research ethics, deception, distress, consent, and participant protection.
Today, Milgram’s original procedure would not be approved in the same form.
Which is good, frankly. Psychology should not have to emotionally mug participants to learn something interesting.
Burger’s partial replication
In 2009, Jerry Burger conducted a partial replication of Milgram’s study.
The word “partial” is doing important work here. Burger did not take participants all the way to 450 volts. The study stopped at 150 volts, the point at which the learner first gave a clear verbal protest in Milgram’s procedure.
This modification was made for ethical reasons. Burger also used screening procedures to reduce risk to participants and debriefed them quickly.
Burger found that 70% of participants were willing to continue past the 150-volt point, suggesting that obedience pressures remained powerful even decades after Milgram’s original work. But the study cannot be treated as a full replication of the original 450-volt procedure.
It is better understood as evidence that the early stages of the obedience process can still be reproduced under more ethical conditions.
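For scale, the comparison can be put as a minimal sketch. It assumes the figures reported in Burger (2009): 70% of his base-condition participants were prepared to continue past 150 volts, against the commonly cited 82.5% (33 of 40) of Milgram’s comparable baseline participants who went past that point.

```python
# Continuation rates past the 150-volt point, assuming the figures
# reported in Burger (2009) and the commonly cited Milgram baseline.
burger_2009 = 0.70          # proportion prepared to continue past 150 V
milgram_baseline = 33 / 40  # 82.5% continued past 150 V in the 1960s
print(f"Then: {milgram_baseline:.1%}, now: {burger_2009:.0%}, "
      f"difference: {milgram_baseline - burger_2009:.1%}")
```

A modest drop over roughly four and a half decades, and one Burger reported as not statistically significant.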
That is still uncomfortable. Just not identical.
Archival critiques and participant belief
More recent archival work has made Milgram’s studies even more complicated.
Researchers such as Gina Perry have examined Milgram’s archives and argued that the clean textbook version leaves out important problems. One issue is participant belief. Some participants may not have fully believed that the shocks were real. If a participant suspected the learner was acting, their behaviour means something different from the behaviour of someone who fully believed they were causing pain.
This does not make the entire study worthless. It does mean the obedience rate cannot be interpreted as straightforwardly as it often is.
Perry and colleagues also raised questions about how Milgram presented the research, how participants were debriefed, and how much variation existed across the many versions of the obedience experiments.
The sensible conclusion is not “Milgram was wrong, throw it all away.” The sensible conclusion is that Milgram’s work should be taught with its mess attached.
The study remains important, but it is not a sacred object. It is a famous, ethically troubling, methodologically complicated piece of social psychology. That is exactly why it is worth studying properly.
What Milgram does and does not prove
Milgram’s experiment does not prove that people are naturally evil.
It does not prove that everyone will obey any authority.
It does not prove that morality disappears under pressure.
It does not prove that historical atrocities can be explained by obedience alone.
What it does show is that situational pressures can make harmful compliance more likely. Legitimate authority, gradual escalation, institutional framing, role commitment, lack of dissent, and displaced responsibility can all pull people toward actions they might otherwise reject.
The study also shows that moral discomfort is not always enough. People can feel distressed and still continue. They can protest and still comply. They can know something is wrong and still remain inside the procedure.
That is perhaps the most useful lesson.
The danger is not always that people feel nothing. Sometimes the danger is that they feel something and carry on anyway.
Why dissent matters
One of the clearest practical lessons from Milgram’s wider programme of research is that dissent matters.
When people see someone else refuse, refusal becomes more available. Dissent breaks the illusion that obedience is the only socially possible action. It gives others a model for stopping.
This is relevant far beyond the lab.
In workplaces, institutions, schools, hospitals, military settings, police forces, universities, and bureaucracies, harmful behaviour often continues because people privately object but publicly comply. Everyone waits for someone else to be the first difficult person.
The first dissenter pays a social cost. They may be labelled awkward, disloyal, negative, unprofessional, dramatic, or “not a team player,” which is often institutional code for “you noticed the thing we were hoping to continue quietly.”
But once dissent appears, the situation changes. Other people can attach their doubts to it.
Milgram’s work reminds us that ethical behaviour is not just about individual courage. It is also about whether environments make refusal possible, protected, and legitimate.
Why Milgram still matters
Milgram still matters because authority still works.
Not always through shouting. Not always through threats. Often through calm procedure, professional language, role expectations, institutional legitimacy, and the quiet pressure to keep going because stopping would be disruptive.
Modern obedience does not always look like a lab coat and a shock generator. It may look like following a policy you know is harmful. Signing off a decision nobody wants to own. Staying silent during a meeting. Enforcing a rule that damages someone because “that’s the process.” Passing responsibility upward until nobody has it.
This is why the experiment remains relevant to organisational psychology, healthcare, law, policing, military ethics, education, research, and everyday institutional life.
Milgram’s study is not a complete theory of human cruelty. It is a warning about conditions that make moral disengagement easier.
The warning is still useful.
Simply Put
The Milgram experiment showed that many ordinary people were willing to continue administering what they believed were painful electric shocks when instructed by an authority figure.
But the lesson is not simply that people blindly obey.
Many participants were distressed. Some protested. Some questioned the experimenter. Some wanted to stop. The troubling part is that many continued anyway.
Milgram’s work shows how authority, institutional legitimacy, gradual escalation, role pressure, and displaced responsibility can pull people into harmful compliance. Later research and critique have complicated the story further, showing that identification with a valued purpose, doubts about the study’s reality, and Milgram’s own presentation of the research all need to be considered.
The study still matters, but it should not be taught as a simple morality tale.
The real lesson is sharper: people do not have to be monsters to take part in harmful systems. They may only need a role, a rationale, an authority figure, a sense that someone else is responsible, and a situation where stopping feels harder than continuing.
That is not comforting.
It is, however, useful.
References
Altemeyer, B. (1981). Right-wing authoritarianism. University of Manitoba Press.
Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral study of obedience.” American Psychologist, 19(6), 421–423.
Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1–11. https://doi.org/10.1037/a0010932
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378. https://doi.org/10.1037/h0040525
Milgram, S. (1974). Obedience to authority: An experimental view. Harper & Row.
Perry, G. (2013). Behind the shock machine: The untold story of the notorious Milgram psychology experiments. The New Press.
Reicher, S. D., Haslam, S. A., & Smith, J. R. (2012). Working toward the experimenter: Reconceptualizing obedience within the Milgram paradigm as identification-based followership. Perspectives on Psychological Science, 7(4), 315–324.
Zimbardo, P. G. (2007). The Lucifer effect: Understanding how good people turn evil. Random House.