Not the Hill to Die On: Why AI Will Not Follow NFTs into Oblivion

Public debates about generative AI continue to divide sharply into two camps: the fervently dismissive and the uncritically enthusiastic. A recurring argument within the rejectionist camp asserts that AI is merely the next iteration of speculative tech hype, another blockchain, another metaverse, another NFT bubble destined to collapse under the weight of its own emptiness. On social platforms, this view circulates in concise, cathartic formulations: "AI is here to stay," from the same people who said NFTs were the future. The implication is clear. AI will fade alongside those failed promises, and if we simply resist hard enough, we can accelerate its demise.

But this comparison, however emotionally satisfying, collapses under scrutiny. It confuses categories, ignores history, and misdiagnoses the very thing critics are concerned about. More importantly, it directs public frustration toward the wrong target. If we are to have a meaningful conversation about the future of AI, its risks, its abuses, and its place in the creative and intellectual landscape, we need to disentangle the rhetoric. AI is not the new NFT, and treating it as such obscures the real issues at stake.

The Category Error: AI Is Not One Thing

At the heart of the NFT analogy is a fundamental misunderstanding. The word AI has become a catch-all term that lumps together everything from airplane navigation systems to image generators. This is similar to using the term literacy to encompass reading, law, graphic design, and code.

In reality, the backlash focuses on one specific subfield: large language models and generative systems. These tools are indeed the most publicly visible and culturally disruptive forms of AI, but they represent only a small portion of the broader field.

Artificial intelligence long predates the current hype cycle. It is embedded in:

  • search engines

  • fraud detection

  • medical imaging

  • cybersecurity

  • industrial automation

  • video game NPC behavior

  • accessibility technologies

  • Photoshop, Lightroom, and nearly every modern creative suite

To declare that AI will vanish the way NFTs did is to ignore the extent to which AI is already infrastructural. NFTs were an optional novelty. AI is a deeply interwoven component of contemporary digital life. One cannot sensibly reject it wholesale without rejecting the very tools and services that shape modern communication, commerce, and creativity.

Utility and Speculation: Why NFTs Collapsed and AI Will Not

NFTs thrived on artificial scarcity, speculative markets, and the promise of future value untethered from practical function. They were cultural artifacts of financial optimism rather than technological necessity.

AI, by contrast, is overwhelmingly use-driven. Even those who denounce generative systems rely on AI-infused tools every day, from spellcheckers to recommendation algorithms to smartphone cameras. The value proposition is not hypothetical. It is observable, repeatable, and often invisible by design.

When a technology becomes mundane, it is usually a sign of maturity rather than impending collapse. The fact that AI is frequently taken for granted is the clearest indication that it is not a fad.

What Critics Are Actually Afraid Of

The rejectionist position is not baseless, nor should it be dismissed. In many cases, the fear of AI is a fear of generative tools specifically: tools that appear to encroach on creativity, authorship, and human expression. LLMs and image generators disrupt identity-defining domains such as writing, art, scholarship, and craft. These are areas where people locate agency and personal worth, and disruption there feels existential.

But that unease should not be mistaken for evidence that the technology itself lacks value. Rather, it indicates that value is located precisely where the disruption occurs: in language, creativity, communication, and cognition. These are not peripheral cultural spaces. They are central.

NFTs failed because they offered nothing.
AI is contested because it offers too much.

Profitability, Infrastructure, and the Persistence of Tools

Another common argument claims that AI is not financially sustainable, that LLMs are too costly, too energy-intensive, or too dependent on investor optimism. This critique contains some truth. The current scale of frontier models is expensive and often inefficient. But the conclusion that AI is therefore doomed misunderstands how technologies become permanent components of society.

Profitability is not a prerequisite for infrastructural persistence. Many foundational technologies spent decades as cost centers before becoming ubiquitous, including:

  • early internet services

  • GPS

  • search engines

  • digital photography

  • cloud computing

More importantly, AI is already profitable, just not always in the consumer-facing products that attract public attention. It saves corporations billions in logistics, analysis, and automation. It powers fraud prevention, translation, indexing, and supply chain optimization. It may be unevenly profitable, but it is not a bubble propped up by speculative belief alone.

NFTs required hype to survive.
AI requires only continued utility.

The Wrong Hill to Die On

The impulse to reject AI entirely is understandable, but strategically it is misdirected. Even if every frontier model shut down tomorrow, AI would remain embedded in nearly every technological system we use. The idea that we can eliminate AI through cultural resistance is a psychological comfort rather than a policy position.

What we can address, and what matters far more, are the ethical, social, and regulatory questions:

  • What training data should be permissible?

  • How should attribution and consent be handled?

  • What environmental costs are acceptable?

  • How do we prevent algorithmic discrimination?

  • How can academic integrity be preserved?

  • What transparency should be required?

  • How do we ensure equitable access rather than corporate monopoly?

These are battles worth fighting. These are battles that shape the future.

Declaring AI a passing fad distracts from the necessary work of governance, oversight, and responsible integration.

Simply Put

If the goal is to protect human creativity, dignity, and autonomy, then the conversation cannot begin with denial. It must begin with engagement. The third way is not cheerleading or surrender, but intentional integration: the recognition that tools are shaped by their social context and that we have the responsibility to articulate what ethical use looks like.

AI is not destined to collapse under its own weight. Nor is it destined to replace us. It will, like every transformative tool before it, become part of our cultural, intellectual, and creative environment. The question is not whether we can extinguish it, but how we choose to live with it and how we ensure that living with it reflects our values rather than undermines them.

The future of AI will not be defined by hype cycles or social media debates. It will be defined by the structures, norms, and boundaries we build now. And that requires discourse grounded in reality rather than analogy.

AI is not a bubble. It is a tool. The challenge ahead is not to reject it, but to shape it.

JC Pass, MSc, is a social and political psychology specialist and self-described psychological smuggler: someone who slips complex theory into places textbooks never reach. His essays use games, media, politics, grief, and culture as gateways into deeper insight, exploring how power, identity, and narrative shape behaviour. JC's work is cited internationally in universities and peer-reviewed research, and he creates clear, practical resources that make psychology not only understandable, but alive, applied, and impossible to forget.

https://SimplyPutPsych.co.uk/