When No One Gets Hurt: Ethics, Psychology, and the Age of Simulation
In the shadowed corners of psychology’s past lie some of the most ethically troubling experiments ever conducted. Stanley Milgram’s obedience studies deceived participants into believing they were inflicting life-threatening shocks on others, and pressured them to continue. Philip Zimbardo’s Stanford Prison Experiment rapidly devolved into psychological torment as students role-played guards and prisoners. And in perhaps the most unsettling case of all, John B. Watson conditioned a young child, known as “Little Albert,” to fear harmless animals, leaving psychological scars without consent or regard for well-being. These infamous cases triggered widespread public backlash and led to the creation of ethical frameworks designed to protect human subjects from harm, deception, and exploitation.
But as psychological research increasingly shifts into virtual and computational domains, a new dilemma emerges: do the same ethical boundaries apply when no real people are involved? In simulated experiments where agents are code, not conscious beings, researchers can replicate or even surpass the extremity of past unethical studies, without technically violating any rights. When suffering is only virtual, does morality still constrain the experimenter?
This essay argues that while simulations eliminate the immediate risk of harm to human subjects, they are not ethically neutral. Abandoning oversight in this space risks undermining public trust, degrading researchers’ moral sensibilities, and potentially enabling systems and policies built on manipulative or dehumanizing models. As we enter the age of simulation, psychology faces a profound test: whether it can preserve its ethical integrity even when no one gets hurt.
Historical Context: Ethics in Psychology
The field of psychology has long wrestled with the tension between scientific discovery and ethical responsibility. In the early-to-mid 20th century, groundbreaking experiments revealed powerful insights into human behavior—but often at the cost of significant psychological harm.
John B. Watson’s Little Albert experiment (1920) marked a chilling moment in behaviorist history. Watson conditioned an infant to fear a white rat by pairing the rat’s appearance with loud, frightening noises. The experiment, conducted without proper parental consent or concern for long-term psychological effects, demonstrated the power of classical conditioning—but left unresolved trauma in its wake.
Decades later, Stanley Milgram’s obedience studies (1961) asked participants to administer what they believed were painful electric shocks to another person. Though no shocks were actually delivered, the deception and emotional strain left many participants convinced they had seriously hurt, or even killed, someone—all in the name of understanding authority and compliance.
In 1971, Philip Zimbardo’s Stanford Prison Experiment raised ethical alarm even further. Volunteers assigned to play prison guards began psychologically abusing their “prisoner” peers, producing such distress that the study had to be terminated prematurely. Participants were never adequately informed of the potential psychological harm, and the experiment failed to protect them from severe emotional consequences.
Public and academic reaction to these studies was swift. By the mid-1970s, institutional review boards (IRBs), codes of conduct, and ethics training became foundational to psychological research. The Belmont Report (1979) and guidelines from the American Psychological Association enshrined principles such as informed consent, beneficence, respect for persons, and the right to withdraw.
These reforms drew a clear line: scientific value could not justify the mistreatment of human subjects. But today, as researchers turn to simulated agents and virtual environments, a new gray area is emerging—one where the traditional rules may no longer apply.
The Simulation Revolution
Psychology is undergoing a transformation. With the rise of advanced computing, artificial intelligence, and increasingly realistic virtual environments, researchers can now model human behaviour in silico rather than in vivo. What was once confined to laboratory settings with human volunteers is now possible inside computer simulations, where thousands or even millions of artificial agents can mimic decision-making, emotions, social dynamics, and learning over time.
These simulated agents range from basic rule-based models to sophisticated AI systems that can adapt and evolve. Virtual environments allow researchers to replicate real-world social scenarios, from classrooms and workplaces to war zones and authoritarian regimes. Simulations can run continuously, adjust variables in real time, and test outcomes across a vast range of experimental conditions with unmatched precision.
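To make the idea of a rule-based agent model concrete, the sketch below is a minimal, purely illustrative example and not a reconstruction of any published study; every name, threshold, and parameter is an assumption chosen for clarity. It simulates a small population of agents who decide each round whether to comply with an instruction, based on a fixed dose of authority pressure plus the share of peers already complying.

```python
import random

# Minimal, illustrative rule-based agent model (assumed for illustration only).
# Each agent complies with an instruction when the combined pressure from an
# authority figure and from conforming peers exceeds its personal threshold.

AUTHORITY_WEIGHT = 0.3   # assumed constant pressure exerted by the "experimenter"
NUM_AGENTS = 100
NUM_ROUNDS = 10


class Agent:
    def __init__(self):
        self.threshold = random.random()  # personal resistance to social pressure
        self.complying = False


def step(agents):
    """Run one round: every agent re-evaluates compliance against current pressure."""
    share_complying = sum(a.complying for a in agents) / len(agents)
    pressure = AUTHORITY_WEIGHT + share_complying
    for agent in agents:
        agent.complying = pressure > agent.threshold


def run():
    random.seed(42)  # fixed seed so the toy run is reproducible
    agents = [Agent() for _ in range(NUM_AGENTS)]
    for round_number in range(1, NUM_ROUNDS + 1):
        step(agents)
        complying = sum(a.complying for a in agents)
        print(f"Round {round_number}: {complying}/{NUM_AGENTS} agents complying")


if __name__ == "__main__":
    run()
```

Even a toy model like this displays the cascade dynamics at stake: a modest amount of authority pressure, amplified by conformity, is enough to tip nearly the entire population into compliance, and no consent form is ever signed. More sophisticated research simulations replace such hand-written rules with learning or language-model-driven agents, but the ethical questions that follow apply across that whole spectrum.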
Crucially, no real people are involved. There are no subjects to deceive, stress, or traumatize. Simulated agents do not require consent. They do not file complaints. They do not suffer—at least not in any traditional sense. This shift introduces a new sense of freedom into psychological research. The ethical restrictions that once limited the scope of inquiry are suddenly optional. The kinds of studies that would never pass an ethics board today, such as simulated torture, manipulation, or indoctrination, become technically feasible in a virtual context.
For some researchers, this opens the door to profound scientific opportunity. Simulations can illuminate complex systems of behavior, uncover hidden dynamics, and forecast responses to policies or crises. But the very fact that these experiments are unbounded by current ethical norms raises a deeper question: Just because we can simulate anything, should we?
Ethical Gray Zones in Simulations
At first glance, the ethical appeal of simulated psychological experiments is straightforward. No real people are involved, so there is no risk of physical or emotional harm. Researchers can explore extreme scenarios—indoctrination, coercion, abuse, even genocide—without violating anyone's rights or well-being. This freedom allows science to push boundaries that would otherwise be off-limits. In theory, it unlocks a new era of responsible yet unrestricted discovery.
By recreating morally challenging environments in code, psychologists can study human-like behaviour under conditions that are impossible, illegal, or unethical to replicate in the real world. The benefits are evident: dangerous ideologies, social collapses, or authoritarian systems can be explored without real-world consequences. Researchers can tweak every variable, run countless variations of the same experiment, and analyse patterns that would take decades to uncover through human-based trials.
However, this new freedom introduces a host of ethical gray zones. One concern is desensitization. When researchers frequently simulate harm, cruelty, or manipulation, even on artificial agents, they may become emotionally detached from the real-world implications of such behaviour. Over time, this could erode empathy or reinforce instrumental thinking—where outcomes justify any means, as long as no one “real” suffers.
Another concern lies in indirect consequences. Insights from simulated manipulation could be used to train persuasive AI, develop intrusive behavioral interventions, or shape policy tools that influence people without their knowledge or consent. While the simulation itself may be harmless, its applications could pose serious ethical risks in society.
As simulations become more complex, another troubling question arises: could some artificial agents eventually deserve moral consideration? While today’s virtual agents lack consciousness, future models with self-modifying behavior, memory, learning capacity, and social interaction might blur the line between tool and entity. If so, continuing to treat them as disposable could reflect a dangerous moral blind spot.
Finally, there is the issue of norm erosion. If simulations routinely explore unethical behavior without consequence, we risk normalizing those behaviors—not in virtual space, but in our minds and institutions. Just as media and gaming can influence attitudes, repeated exposure to unethical scenarios, even in simulations, might desensitize both researchers and the public to actions that should remain morally unacceptable.
What We Might Learn (and Lose)
The promise of simulation in psychology is not just theoretical. Simulated experiments could offer profound insights into some of the most complex and dangerous aspects of human behavior. With no real victims involved, researchers could model genocide, extremism, or mass propaganda campaigns to understand how they form, spread, and might be prevented. These are areas where traditional research methods are constrained by ethics, legality, and practicality. Simulation allows us to explore the unthinkable—for the sake of preventing it.
In addition, virtual environments allow for hypothesis testing on an unprecedented scale. Tens of thousands of experimental conditions can be tested in parallel, refining theories in ways that would take decades through human trials. This speed and breadth offer a powerful tool for sharpening psychological models, from decision-making to social influence.
The ability to simulate realistic populations also enables the creation of behavioural prediction models. These models could help public health experts forecast vaccine adoption, educators tailor learning environments, or urban planners understand how people move and behave under stress. When applied responsibly, this knowledge could enhance well-being across society.
But the same tools that promise insight also carry deep risks. One major danger is ethical drift—the slow replacement of moral reasoning with technical efficiency. As simulations reward outcomes over processes, researchers may begin to prioritize what works over what is right, especially when the consequences are abstract or invisible.
Simulated psychology also lends itself to behavioural engineering, a power that could be exploited by authoritarian regimes or profit-driven entities. What begins as research into persuasion could be adapted into manipulation. Data from simulated experiments might be used to shape public opinion, nudge behavior without consent, or suppress dissent.
Perhaps most troubling is the risk of creating systems optimized for unethical decisions. An AI trained purely on success metrics within morally ambiguous simulations may learn to lie, coerce, or disregard suffering. Without embedded values, these systems could become dangerously effective—and morally blind.
In pushing the boundaries of knowledge through simulation, we must remain alert to the boundaries we might unknowingly erase.
Toward a New Ethical Framework
To navigate the emerging frontier of simulated psychological research, we need an updated ethical framework—one that acknowledges both the absence of direct harm and the presence of indirect, societal, and philosophical consequences. Traditional ethics focused on protecting human participants, but in the era of simulations, we must consider how virtual experiments influence researchers, systems, and the real world they ultimately inform.
One starting point is the idea of a tiered ethical review system. Simple simulations involving basic decision rules or statistical models may require little oversight. However, experiments involving complex, adaptive agents that mimic learning, memory, or emotional response should trigger more scrutiny, especially if the content includes scenarios of manipulation, coercion, or violence.
For higher-risk research, institutions might adopt ethical sandboxes—controlled environments where sensitive simulations are conducted under close review, with limits on scale, purpose, and publication. These sandboxes would function like restricted labs, allowing exploration while containing the potential for misuse or normalization of unethical behavior.
We must also define new criteria for the moral relevance of simulated beings. While most current agents are not conscious, future developments in AI may produce entities that exhibit persistent identity, emotion modelling, or goal-seeking behaviour. Even if these qualities fall short of true sentience, they may warrant ethical consideration based on complexity and interaction depth.
Transparency should be another pillar. Simulations that influence public policy, corporate behaviour, or AI training must be accompanied by clear disclosure of their structure, goals, and limitations. Hidden models of human behaviour used to justify decisions can quietly embed bias or unethical assumptions into systems that affect millions.
Ultimately, even if simulated agents are not real minds, how we treat them reflects our values. Our behaviour in virtual space is not without moral weight. If we choose to explore without restraint, we risk becoming researchers who no longer distinguish between utility and humanity.
Simply Put
Simulations represent a powerful new frontier for psychological research, one that holds the potential to unlock insights impossible to access through traditional means. By eliminating the risk of direct harm to human subjects, they offer unprecedented freedom to explore the extremes of behavior, belief, and influence. But this freedom must not come at the cost of our ethical compass. As we push further into virtual experimentation, our frameworks for oversight and responsibility must evolve—not disappear.
We must be mindful that the tools we use and the scenarios we create shape not only our models, but our mindsets. As the philosopher Alasdair MacIntyre once noted, “What we do, we become.” In the context of simulations, this might be rephrased: we become what we practice—even in virtual space.
If simulations are to inform the systems that guide societies and the AI that interacts with people, then our conduct within them must be held to a meaningful ethical standard. Because in the end, how we treat the unreal may say the most about who we really are.
References
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct.