Introducing Source Lapse: Forgetting Whether We Spoke to a Human or an AI
One evening, while revisiting a half-formed idea, I found myself caught in a peculiar uncertainty. I could recall the shape of the thought and the dialogue it produced, but I could not remember whether I had shared it with a trusted colleague or with an artificial intelligence system. The content of the exchange lingered vividly, yet the identity of my conversational partner had blurred. This uncertainty did not feel trivial. It suggested a new kind of cognitive confusion, one that existing psychological categories can gesture toward but do not quite capture.
This experience is likely to become increasingly common as human–AI interaction expands in scale, intimacy, and routine. Until recently, the distinction between human and nonhuman interlocutors served as a reliable anchor for memory. We might forget what was said, or even when it was said, but the assumption that we could recall who we were speaking to felt stable. The rise of AI assistants and conversational models has begun to erode this assumption. The question is no longer only whether a system can mimic human speech, but whether we can reliably track the origin of our own conversations across human and machine partners.
To address this emerging phenomenon, I propose the term Source Lapse. The phrase captures a lapse in remembering the source of dialogue, specifically the uncertainty between human and AI as conversational agents. While the concept has roots in established areas such as source monitoring error, Source Lapse names a distinctive challenge of our time, situated at the intersection of psychology, discourse analysis, and the everyday phenomenology of interaction with artificial systems.
In what follows, I will situate Source Lapse in relation to existing psychological concepts, explore its cognitive and discursive underpinnings, and consider its implications. I will also note two adjacent terms that complement it: Turing Slip, a more playful and popular expression, and Source Drift, a poetic variant better suited to philosophical or cultural discourse. Together these three expressions offer a vocabulary for grappling with a phenomenon that is likely to shape how we remember conversations in the age of artificial dialogue.
Source Monitoring and Its Limits
In psychology, the framework most relevant here is source monitoring. Marcia Johnson and colleagues developed this concept in the 1990s to describe how people attribute memories to their origins. When recalling a fact, one must often judge whether it came from direct experience, from imagination, from a book, or from another person. Failures in this process are called source monitoring errors. For example, someone might misremember a dream as a real event or believe they heard a story from a friend when they actually read it online.
Source monitoring theory highlights the cues people use in these judgments. Sensory detail, contextual richness, and emotional tone all contribute to the decision of where a memory belongs. Human speech typically carries idiosyncratic markers such as hesitations, accent, or affective warmth. Imagined events, by contrast, tend to feel less detailed and more schematic. These features guide the mind in sorting memories into categories of origin.
The rise of large language models complicates this framework. AI text often lacks the hesitations and quirks of human talk, yet it is increasingly polished, contextually aware, and emotionally resonant. The cues that once reliably separated human and nonhuman contributions are eroding. When the style of AI language overlaps with the fluency of human discourse, the cognitive processes underlying source monitoring face new ambiguity.
This is where Source Lapse comes in. It is not merely a subset of source monitoring error. Rather, it is a distinctive case in which two categories that used to be radically different—human and machine—now overlap in ways that challenge our attribution mechanisms.
Agent Confusion in Human–AI Interaction
In the field of human–computer interaction, researchers have begun to document agent confusion. This term describes the difficulty users experience in attributing authorship or agency in mixed settings where human and AI contributions interleave. For example, in collaborative writing platforms where AI tools suggest sentences, users sometimes forget whether a particular phrase was their own or the machine’s. In customer service encounters mediated by chatbots, clients may believe they are speaking to a human agent until they encounter a phrase that signals automation.
Agent confusion emphasizes authorship in the present moment, whereas Source Lapse emphasizes memory of past interactions. They are linked phenomena that sit on a continuum. In real-time interaction, one may be unsure whether an utterance belongs to a human or a machine. Later, one may struggle to remember who the dialogue partner was in the first place.
Digital Amnesia and Anthropomorphic Slips
Other concepts point to the broader ecology in which Source Lapse emerges. One is digital amnesia, the tendency to forget information because it is stored externally on devices. For example, fewer people memorize phone numbers today because smartphones make such recall unnecessary. Digital amnesia captures the outsourcing of memory, but it does not address the uncertainty of who we engaged with. It is about content storage rather than dialogic origin.
Another is the anthropomorphism slip, where users treat AI systems as if they were human partners. This reflects our deep-seated tendency to perceive intentionality and social presence, even in algorithms. Anthropomorphism slips make it more likely that Source Lapse will occur, since attributing humanness to AI blurs distinctions in memory.
Why a New Term Is Needed
Despite their relevance, none of these concepts fully captures the unique experience of forgetting whether one conversed with a human or an AI. Source monitoring error is too broad. Agent confusion focuses more on authorship than recollection. Digital amnesia concerns content, not source. Anthropomorphism slips highlight projection, not memory.
Source Lapse fills this conceptual gap. It specifies a lapse in source attribution in a communicative ecology where human and AI voices cohabit. Naming the phenomenon allows us to study it systematically, compare it across populations, and develop coping strategies. Without a term, experiences like my own risk being dismissed as trivial quirks. With a term, they can be recognized as indicators of deeper cognitive shifts in how we orient to interlocutors.
The Psychology of Source Lapse
Psychologically, Source Lapse arises when the cues that normally differentiate interlocutors are either too similar or too weak. In traditional source monitoring, differences between imagination and perception are marked by richness of detail. In human–AI discourse, the markers of humanness are subtle. Polished AI responses can mimic grammar, rhetoric, and even empathy. Human responses in digital text often lack paralinguistic cues such as tone or gesture. The gap between human and AI registers narrows.
Because memory is reconstructive rather than archival, we do not store conversations as recordings but rebuild them from fragments. When those fragments lack strong source indicators, the mind may place the memory in the wrong category. Source Lapse is not a failure of memory per se, but of attribution.
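To make this attribution problem concrete, here is a minimal toy sketch in Python. It is my own illustration, not an established model from the source monitoring literature: it assumes each remembered exchange carries a single "humanness cue" score, with human and AI partners drawn from two overlapping distributions, and shows that as those distributions converge, misattribution climbs toward chance.

```python
# Toy signal-detection sketch (an illustrative assumption, not a published model):
# each remembered exchange carries one "humanness cue" value. Human and AI
# partners are drawn from two normal distributions; attribution assigns the
# memory to whichever source is more likely given the cue. As the two
# distributions converge, misattribution (a Source Lapse) rises.
import random


def misattribution_rate(human_mean: float, ai_mean: float,
                        spread: float = 1.0, trials: int = 10_000,
                        seed: int = 0) -> float:
    """Estimate how often a cue-based rule assigns a memory to the wrong source."""
    rng = random.Random(seed)
    threshold = (human_mean + ai_mean) / 2  # optimal boundary for equal spreads
    errors = 0
    for _ in range(trials):
        if rng.random() < 0.5:              # the exchange was with a human
            cue = rng.gauss(human_mean, spread)
            errors += cue < threshold       # judged "AI" by mistake
        else:                               # the exchange was with an AI
            cue = rng.gauss(ai_mean, spread)
            errors += cue >= threshold      # judged "human" by mistake
    return errors / trials


# Distinct registers: the cue separates the two sources well.
print(misattribution_rate(human_mean=3.0, ai_mean=0.0))  # low error rate
# Converging registers: fluent AI text erodes the cue gap.
print(misattribution_rate(human_mean=1.0, ai_mean=0.5))  # error climbs toward chance
```

The point of the sketch is not the numbers but the shape of the problem: no amount of effort at retrieval helps once the stored cues no longer discriminate between the two kinds of partner.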
Discourse and the Blurring of Voices
Discourse analysis provides another perspective. Conversation is not only about content but about the framing of voices. Bakhtinian theory emphasizes dialogism, the way every utterance responds to and anticipates others. In the digital age, our dialogic partners now include nonhuman systems trained on vast corpora of human text. Their voices are synthetic yet deeply shaped by human patterns.
When dialogic boundaries blur, our ability to locate a voice within memory weakens. A human colleague may use idioms or contextual references that distinguish them, but an AI trained on massive datasets can easily echo such patterns. The distinction between unique voice and statistical voice becomes porous. Discourse scholars may see Source Lapse as a symptom of this erosion of dialogic boundaries in the age of machine interlocutors.
Adjacent Variants: Turing Slip and Source Drift
While Source Lapse is proposed here as the formal psychological and scholarly term, it is useful to acknowledge adjacent expressions that capture the same terrain from different angles.
Turing Slip is a playful and media-friendly variant. It evokes the “Freudian slip,” while nodding to Alan Turing’s famous test of machine intelligence. It is memorable, witty, and well suited for popular press, everyday conversation, and cultural commentary. Where Source Lapse might appear in academic journals, Turing Slip might circulate in headlines and social media posts.
Source Drift is a more poetic and philosophical expression. Whereas Source Lapse implies a momentary break, Source Drift suggests a gradual blurring of origin over time. It may resonate in fields like philosophy of mind, cultural studies, or literary theory. Source Drift frames the phenomenon not as an error but as an ongoing condition of life in an AI-saturated environment.
Together, these three terms form a useful cluster. Source Lapse serves as the rigorous, research-oriented anchor. Turing Slip provides a catchy shorthand for public discourse. Source Drift offers a reflective register for humanistic inquiry.
Implications for Society
Source Lapse has implications for trust, accountability, and ethics. If we cannot recall whether advice came from a human expert or an AI assistant, our ability to evaluate its authority is compromised. In legal, medical, or educational contexts, this could have serious consequences. Even in daily life, blurred recall might affect how we assign credit for ideas or how we interpret relationships.
There is also an emotional dimension. Forgetting whether one confided in a friend or in a machine unsettles one’s sense of intimacy. Friendships and collaborations are built on memory of shared interaction. If those memories are clouded by Source Lapse, the fabric of relational life shifts.
Moving Forward
Recognizing Source Lapse encourages several avenues of response. Psychologists can study its prevalence using experiments with mixed human–AI conversations. Designers might incorporate subtle markers of artificial authorship, supporting users in tracking sources. Educators can develop literacy strategies that help people reflect on the differences between human and AI partners.
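To make the design suggestion concrete, the following sketch (hypothetical names and fields, not an existing product or interface) shows one way a conversation log could carry explicit provenance metadata, so that the "who was it?" question has a record to consult rather than relying on reconstructed memory.

```python
# Minimal provenance-logging sketch (hypothetical; illustrates the idea of
# explicit source markers, not a real application's API): each stored exchange
# records who the partner was and whether that partner was human or AI.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

Source = Literal["human", "ai"]


@dataclass
class Exchange:
    partner: str          # e.g. "Dr. Alvarez" or "assistant-v2"
    source: Source        # explicit marker of human or artificial authorship
    summary: str          # what the conversation was about
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ConversationLog:
    exchanges: list[Exchange] = field(default_factory=list)

    def record(self, partner: str, source: Source, summary: str) -> None:
        self.exchanges.append(Exchange(partner, source, summary))

    def who_did_i_discuss(self, topic: str) -> list[Exchange]:
        """Answer the Source Lapse question: with whom did I discuss this topic?"""
        return [e for e in self.exchanges if topic.lower() in e.summary.lower()]


log = ConversationLog()
log.record("assistant-v2", "ai", "half-formed idea about dialogic memory")
log.record("Dr. Alvarez", "human", "grant deadline and travel plans")
for e in log.who_did_i_discuss("dialogic memory"):
    print(f"{e.timestamp:%Y-%m-%d}: {e.partner} ({e.source})")
```

Whether such records should be kept automatically, and how visibly the markers should surface in an interface, are design questions rather than settled answers; the sketch simply shows that tracking dialogic origin is cheap once we decide it matters.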
At the same time, we should acknowledge that some degree of Source Lapse may be inevitable. As AI becomes more deeply integrated, memory categories themselves may evolve. Just as we no longer sharply distinguish oral from written sources, future generations may treat human and AI voices as part of a single communicative field. The concept of Source Lapse may then serve not only as a diagnosis of error but as a window into shifting norms of memory and dialogue.
Simply Put
The experience of not remembering whether I spoke to a person or to an AI highlights a new dimension of memory shaped by technological change. Psychology provides a foundation through source monitoring theory, but the rise of AI interlocutors demands a new vocabulary. Source Lapse names this uncertainty, offering a precise and scholarly term. Alongside it, Turing Slip provides a colloquial variant, and Source Drift offers a philosophical one.
By coining and exploring these terms, we open space for research, reflection, and adaptation. They are not merely labels but tools for grappling with how profoundly the boundaries of conversation, memory, and identity are being redrawn in the age of artificial dialogue.