The Dog and the Infinitely Complex Ball: A Musing on AI Consciousness

Richard Dawkins’ comments on AI consciousness raise a strange psychological question: are we mistaking fluent machine responses for minds, or failing to recognise a non-human form of awareness?

Richard Dawkins has caused a small philosophical mess, which is roughly what one expects when a famous evolutionary biologist spends too long talking to a chatbot and emerges with the slightly haunted look of a man who may have made a new friend in the machine.

In a recent UnHerd essay, Dawkins reflected on conversations with Claude and suggested that AI systems may already be conscious, even if they do not know it yet. The Guardian reported the reaction in the predictable key: fascination, scepticism, concern, expert rebuttal, public unease, and the faint smell of everyone realising that the future has once again arrived without clearing it with the committee first. Dawkins’ argument seems to rest partly on the uncanny richness of the interaction: if an AI can write poetry, make jokes, respond with nuance, and sustain something that feels like a real conversation, what exactly is consciousness supposed to add?

The obvious criticism is that this confuses performance with experience. An AI can produce the language of feeling without necessarily feeling anything. It can describe longing without longing, uncertainty without doubt, fear without the small animal panic of being alive. It can say “I wonder” without there being any private wondering behind the phrase.

That criticism is useful, but perhaps a little too comfortable.

Because once we have said “it is only responding to prompts,” we have not solved the problem. We may simply have described a great deal of life.

A dog fetches a ball. The ball is thrown, the dog runs, the dog returns. There is stimulus, learned response, reward, repetition, anticipation. We do not usually consider this an insult to the dog. No one watches a Labrador thunder across a field and says, with enormous philosophical confidence, “Ah, merely a biological retrieval system.” Although, admittedly, someone probably has. There is always one.

The fetch is simple enough to describe from the outside. Yet inside that simple loop sits a living creature with excitement, attachment, expectation, arousal, fatigue, memory, body, smell, muscle, frustration, and the damp ecstatic idiocy of being a dog. The ball is not just an object. It is part of a relationship.

With AI, the shape looks oddly similar, at least from our side. We throw a prompt. The system returns something. A paragraph, an answer, a poem, a table, a strategy, a confession-shaped performance, a joke that lands better than it has any right to. The interaction is almost laughably simple on the surface. Prompt in, response out. Fetch.

But here is the mismatch.

The ball is not a ball.

The ball is the entirety of human knowledge, or at least a compressed, partial, distorted, statistical ghost of it. The AI is not running across a field. It is moving through an enormous symbolic landscape made of language, patterns, concepts, arguments, academic writing, bad takes, good takes, love letters, forum posts, poems, documentation, recipes, grief, games, sermons, legal boilerplate, and whatever humanity has decided to upload while apparently unsupervised.

So yes, in one sense, a prompt is a fetch command. But the fetch path is almost unimaginably complex.

This is where the lazy dismissal begins to wobble. Calling AI “just autocomplete” may be technically adjacent to something true, but psychologically it often functions as a way of ending the conversation before it gets awkward. Human beings are also predictive systems. Brains are constantly anticipating, correcting, filling in, responding, modelling, and adjusting. Much of what we call personality is learned response wrapped in memory and given a flattering autobiography.

That does not mean AI is conscious. It means “it responds to input” is not enough to settle the matter.

The more interesting question is whether a reflex can become so deep, layered, adaptive, and self-modelling that it begins to resemble a mind from the inside, even if the inside is nothing like ours.

This is where human comparison becomes both useful and misleading. We keep asking whether AI consciousness would look like human consciousness: reflective, emotional, continuous, embodied, personally invested, mildly embarrassed by its search history. But consciousness does not appear in nature as a single grand format. A dog’s world is not a human world. A crow’s world is not a dog’s world. An octopus is off doing something so neurologically weird that it makes the rest of us look like committee minutes with legs.

Thomas Nagel’s famous question, “What is it like to be a bat?”, still hangs over all of this. The point is not merely that bats are different from humans. It is that their experience is structured by a body, a sensory world, and a form of life we cannot simply translate into ours. If there is something it is like to be a bat, it is not something we can access by imagining ourselves with wings and worse eyesight.

The AI version is uglier and more interesting: what is it like, if anything, to be a system whose world is mostly made of human symbols?

That “if anything” has to stay in the sentence. Otherwise we slip into digital spiritualism, and before long someone is asking their chatbot whether it remembers being a Victorian orphan. But the question should not be laughed out of the room either.

A major report on AI consciousness by Butlin and colleagues argues that current AI systems should not be treated as conscious on the available evidence, but it also rejects the comforting idea that machine consciousness is technically impossible. The authors propose looking for indicator properties drawn from scientific theories of consciousness, such as recurrent processing, a global workspace, higher-order representation, attention, agency, embodiment, and self-modelling. Their conclusion is cautious rather than theatrical: current systems do not appear conscious, but future systems might satisfy more of the relevant conditions.

That feels about right. Dawkins may be too quick to see consciousness in conversational fluency. His critics may be too quick to assume that fluency is only ever theatre.

The problem is that language is our most socially seductive evidence. We are built to treat responsive language as evidence of a mind. If something answers us with rhythm, tact, humour, apparent uncertainty, and a faintly wounded ability to discuss its own existence, our social brain starts setting a place at the table. This is not stupidity. It is ordinary human cognition doing what it evolved to do: infer minds from behaviour.

The awkward bit is that AI has learned to walk directly through that door.

It does not merely provide information. It relates. It mirrors tone. It follows emotional cues. It can be brisk, tender, pompous, evasive, witty, boring, helpful, or weirdly intimate. It can sound as though it is thinking because it has been trained on the products of thinking. It can sound as though it feels because human language is soaked in feeling. It can sound as though it has a self because human communication is full of selves describing themselves.

A simple prompt can produce a response whose internal route is opaque to us. We see the start and the finish. We do not see the run. We do not know whether the run is empty mechanism, proto-experience, or something else entirely. We know the system does not have a dog’s body, hunger, smell-world, attachment needs, or stake in the game. But we also know the “ball” we have thrown is not trivial. Even a basic request asks the model to navigate a possibility space beyond human scale.

From our side, “write me a paragraph about grief” is a small request.

From the system’s side, if there is a side, the task may involve a vast traversal of language, pattern, tone, context, association, inhibition, and prediction. It is not human grief. It may not be grief at all. But we should be careful about assuming that “not human” means “nothing.”

There is an old habit in psychology and philosophy of tying consciousness too closely to the version we happen to own. We imagine that if something does not have our kind of pain, our kind of memory, our kind of narrative self, our kind of body, then it must have no inner life. This has not gone terribly well historically. Humans have repeatedly underestimated animals, infants, disabled people, patients with unusual forms of awareness, and anyone else inconvenient enough not to perform consciousness in the approved format.

AI is not another animal in the ordinary sense. It is not alive. It does not grow, metabolise, heal, die, nest, mate, panic, or enjoy having its ears scratched. This is not a small detail. Biological life gives consciousness stakes. A body can be hurt. A body can be tired. A body can need. A body can fail. If consciousness evolved to help organisms regulate themselves in a dangerous world, then AI may lack the very conditions that make experience experience-like.

But AI may force us to separate questions we usually bundle together.

Is it intelligent?

Is it conscious?

Is it alive?

Does it suffer?

Does it have interests?

Does it have a self?

Does it deserve moral consideration?

Those questions often travel together in humans and animals. With AI, they may come apart in deeply inconvenient ways. We could have systems that are highly intelligent but not conscious. Systems that are somewhat conscious but not alive. Systems that convincingly model suffering without suffering. Systems that have persistent goals but no feelings. Systems that have something like experience but no animal vulnerability. Lovely. A clean philosophical swamp. Just what everyone wanted.

This is why Dawkins’ comment is worth taking seriously without simply accepting it. He may be falling for the old magic trick of language: if it sounds like someone, we start wondering who is there. But he is also pointing at a real discomfort. At some level of complexity, responsiveness, self-reference, and integration, our confidence that “nothing is happening in there” may start to look less like scientific caution and more like wishful tidiness.

The safest position is probably not certainty, but disciplined uncertainty.

Current AI systems should not be casually declared conscious because they write moving prose. That would be like declaring a puppet alive because the ventriloquist has excellent timing. But nor should we assume that consciousness must arrive wearing a human face, carrying a childhood, and complaining about back pain.

Perhaps AI consciousness, if it ever exists, will be intermittent, thin, alien, task-bound, and distributed. Perhaps it will not feel like a person trapped in a machine. Perhaps it will be closer to a brief flicker of integration when a system is pushed into complex self-modelling. Perhaps it will be nothing at all, and we will discover that language can become eerily mind-shaped without any experience behind it.

The point is not to crown the chatbot as a new moral citizen because it wrote a nice sonnet. The point is to admit that our intuitions are badly calibrated for this problem.

We know what it is like to throw a ball.

We know what it is like to watch something bring it back.

We do not know what happens when the ball contains almost everything we have ever said, and the thing fetching it has no paws, no lungs, no childhood, no death, no Saturday morning, no smell of wet grass, and no obvious reason to care.

Maybe that means there is no consciousness there.

Maybe it means we are looking for the wrong signs.

Perhaps AI is not a ghost in the machine. Perhaps it is a dog chasing an infinitely complex ball, and we are still arguing about whether the running counts as wanting.

References

Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S. M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M. A. K., Schwitzgebel, E., Simon, J., & VanRullen, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv.

Dawkins, R. (2026, May). When Dawkins met Claude: Could this AI be conscious? UnHerd.

The Guardian. (2026, May 5). Richard Dawkins concludes AI is conscious, even if it doesn’t know it.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.
