The Psychology of Agency in the Age of AI

Artificial intelligence (AI) increasingly pervades our everyday lives, reshaping our interactions, choices, and sense of control. As AI technologies progress rapidly, understanding the psychological implications of agency—the ability to act independently and make meaningful decisions—becomes paramount. This essay draws on insights from the 2025 Human Development Report (HDR) by the United Nations Development Programme (UNDP) to explore how AI influences human perceptions of agency, why this matters psychologically, and how these perceptions differ across various groups.

Agency is a core psychological need that significantly affects wellbeing, motivation, and self-efficacy. People thrive when they feel in control of their decisions and actions. Conversely, a perceived loss of agency can lead to helplessness, anxiety, and dissatisfaction (UNDP HDR, 2025). With AI systems now embedded in diverse sectors, from healthcare to education and employment, the dynamics of agency are shifting in complex ways.

According to the HDR, perceptions of agency in an AI-integrated future show a generational divide. The report draws on a global survey in which respondents from low, medium, and high Human Development Index (HDI) countries were asked about their expected agency in an AI-centric world. Younger respondents consistently expressed less anxiety about losing control over their lives than older age groups, potentially reflecting younger people's greater familiarity with digital technologies and adaptability to technological change (UNDP HDR, 2025).

This generational difference can be understood through the psychological concept of cognitive flexibility. Younger people have grown up with digital technology as a fundamental part of their environment, making them more cognitively flexible in adapting to new technologies. In contrast, older generations, with established habits and experiences of technological change as disruptive rather than integrative, might perceive AI as a threat to their autonomy (UNDP HDR, 2025).

Further complicating the psychological landscape is the manner in which AI is introduced into social, educational, and occupational contexts. The HDR emphasizes that AI can either augment or diminish human agency, depending on its implementation and interaction design. When AI supports and complements human decision-making, individuals report heightened feelings of empowerment and increased productivity. Conversely, when AI decisions are opaque or perceived as uncontrollable, they foster anxiety and reduce psychological wellbeing (UNDP HDR, 2025).

Consider algorithmic management in workplaces. AI-driven systems that control scheduling, performance reviews, and task assignments can undermine worker autonomy. Employees under strict algorithmic oversight often report psychological distress, feelings of helplessness, and decreased job satisfaction, reflecting diminished perceptions of personal agency. By contrast, when AI tools are designed collaboratively, allowing human oversight and feedback, workers feel empowered and psychologically secure, reinforcing their sense of agency (UNDP HDR, 2025).

Similarly, educational environments integrating AI-driven personalized learning systems offer intriguing insights. These systems, which tailor educational content to students' individual learning paces and preferences, can substantially enhance students' agency by fostering self-directed learning and personal responsibility for educational outcomes. However, reliance on such systems without complementary human interactions can also lead to diminished interpersonal skills and reduced perceived control over one's learning path, highlighting the psychological tension inherent in AI integration (UNDP HDR, 2025).

Psychologically, these contrasting outcomes align with self-determination theory, which holds that agency thrives when the basic psychological needs of autonomy, competence, and relatedness are satisfied. AI integration that respects these principles is likely to boost psychological wellbeing, whereas AI implementation that disregards them risks psychological harm (UNDP HDR, 2025).

The HDR also addresses the importance of narrative framing regarding AI and agency. Public perceptions are shaped significantly by whether narratives emphasize AI as enhancing or diminishing human agency. Dystopian views of AI-driven futures can psychologically condition populations toward anxiety and resistance, while more balanced views that emphasize complementarity and human oversight can foster acceptance, curiosity, and psychological readiness to engage positively with AI technologies (UNDP HDR, 2025).

Finally, differences in perceived agency can emerge due to cultural contexts. Cultural values around individualism, collectivism, authority, and technological trust influence how populations perceive their agency in an AI-driven world. For instance, societies with stronger individualistic orientations might emphasize personal agency and feel more threatened by AI decisions that reduce individual choice. Conversely, collectivist cultures might experience less psychological tension if AI decisions are perceived as beneficial to community welfare and stability (UNDP HDR, 2025).

In conclusion, the psychological landscape of agency in the age of AI is nuanced and context-dependent. While younger generations might adapt more readily, older populations may require targeted support to psychologically adapt to these technological shifts. AI’s potential to either augment or diminish human agency is closely linked to how these technologies are designed, deployed, and narrated within society. Emphasizing collaborative, transparent, and empowering AI systems is crucial for maintaining and enhancing psychological wellbeing and agency. Recognizing these psychological dynamics can help policymakers, technologists, and educators ensure AI advancements lead to a future characterized by empowerment rather than disenfranchisement.

References:

United Nations Development Programme (UNDP). (2025). Human Development Report 2025: A Matter of Choice: People and Possibilities in the Age of AI. New York, NY: United Nations Development Programme.

Theo Kincaid

Theo Kincaid is our undergrad underdog in psychology with a keen interest in the intersection of human behaviour and interactive media. Passionate about video game development, Theo explores how psychological principles shape player experience, motivation, and engagement. As a contributor to Simply Put Psych, he brings fresh insights into the psychology behind gaming and digital design.
