Learning to Steal: AI, Art, and the Thin Line Between Influence and Infringement

Are AI and Human Creativity Fundamentally Different?

“If both humans and machines learn by seeing and doing, why is one called inspiration and the other called theft?”

In the public discourse surrounding generative AI, especially within the visual arts, a central tension has emerged: when a human creates something influenced by prior work, it is often described as “inspired”, a natural part of the creative process. Yet when an AI model does something functionally similar, it is frequently labeled as “theft,” “plagiarism,” or “soulless mimicry.” This apparent double standard raises a foundational question: are AI systems and human artists truly engaging in the same kind of learning, or are we misunderstanding one or both?

At the surface level, the processes appear similar. Human beings learn by observing, imitating, and practicing. An artist studies the works of others, absorbs stylistic tendencies, and eventually synthesizes those inputs into something new. Likewise, a generative AI model like Stable Diffusion or Midjourney is trained on massive datasets of visual art, learning patterns, composition, and stylistic markers from thousands or even millions of existing images, and then producing novel outputs based on that training.

But beneath the surface, key differences in mechanism, intention, and context begin to emerge:

  • Cognitive vs statistical learning: Human learning is experiential and embodied, tied to memory, emotion, and intention. AI models learn statistically, mapping correlations between data points to generate outputs that resemble their training distribution.

  • Agency and meaning-making: Human creators typically work with intention, cultural context, and an awareness of audience. AI systems lack self-awareness, intent, or understanding. They do not “know” they are creating, and they cannot explain the meaning of their outputs.

  • Consent and control: When a human learns from another artist, it is usually through publicly shared work, education, or mentorship, often within understood cultural norms. AI models, by contrast, are trained on scraped datasets that frequently include copyrighted or personal content, often without consent or attribution.

These differences contribute to the emotional and ethical dissonance many feel when engaging with AI-generated art. The machine may produce something visually compelling, but the process feels disembodied, invisible, or exploitative. It can appear to bypass the very struggles, influences, and iterative failures that define human creativity.

Yet at the same time, the similarities are too significant to ignore. Human creativity is deeply derivative; virtually no artist creates in a vacuum. Every style, genre, or movement in art history emerged from a lineage of influence and transformation. From classical painters emulating Greco-Roman sculpture, to modern digital artists drawing on anime or pop culture, imitation and recombination have always been core components of artistic growth.

So the paradox becomes clear:

If we accept that humans create by learning from others, why is machine learning held to a different ethical standard?
Or more provocatively: Is our discomfort with AI art rooted in technical distinctions or in existential anxieties about what it means to be human in the age of intelligent machines?

This essay explores these tensions across legal, psychological, philosophical, and practical dimensions to better understand where the line should be drawn between innovation and exploitation, between inspiration and infringement. The answers are not binary, but the questions are urgent.

The Psychology of Authorship: What Does It Mean to Create?

“Art is theft.” — Pablo Picasso (probably apocryphal, but telling)

To understand the ethical tensions around AI-generated art, we need to examine a more fundamental question: What do we mean when we say someone “created” something? Creativity, far from being a magical or isolated act, is rooted in imitation, memory, and cultural participation. Human beings are pattern-recognizing, remixing creatures: we build on what we’ve seen, internalize it, and transform it through our own lens of experience.

The Myth of Originality

The idea of the lone genius artist producing something wholly new is a romantic and largely mythical construction. In truth, all artists are shaped by their influences:

  • William Blake borrowed from classical mythology.

  • The Impressionists studied Japanese woodblock prints.

  • Tarantino films are mosaics of genre homage and cinematic callbacks.

Harold Bloom, in The Anxiety of Influence, argued that all poets (and by extension, all creators) are locked in a struggle with their predecessors, trying to find space within a tradition that they are simultaneously shaped by and resisting. Creativity, in this sense, is not invention ex nihilo; it is reinterpretation.

AI-generated art challenges this by performing a similar act of recombination, yet without consciousness, struggle, or personal intention. The machine does not “wrestle with influence.” It does not seek meaning. It operates on patterns and probabilities, not internal narrative or vision.

The Human Creative Process

From a cognitive science perspective, human creativity emerges from a network of psychological processes:

  • Long-term memory: storing images, sounds, and concepts across time

  • Semantic association: connecting disparate ideas through metaphor and analogy

  • Embodiment: perceiving the world through a lived, physical experience

  • Emotion and motivation: creating as a response to joy, grief, identity, or conflict

An artist does not just “output” something based on inputs. They reflect, feel, experiment, and fail, often repeatedly. The final product is embedded with that invisible labor, and its meaning is colored by the creator’s life, culture, and intention.

AI, by contrast, lacks an inner world. It does not know what it is producing. It does not perceive beauty or understand irony. While it can convincingly replicate form, it cannot intend meaning. This absence unsettles us not because the outputs are invalid, but because we cannot locate a self behind them.

“Who is speaking when the machine creates?”

This echoes Roland Barthes’ seminal argument in The Death of the Author that meaning resides not in the creator, but in the reader’s interpretation. Yet even Barthes was speaking about humans, whose subjectivity and cultural context were implicit. When we remove the human entirely, we’re left with a new interpretive challenge: Can a work of art still be meaningful if its creator lacks intent or consciousness?

The Role of Narrative in Perceived Value

Studies in psychology and behavioural economics suggest that people value artworks not just for their aesthetic properties, but for the story behind them:

  • A painting by a child might be cute but becomes profound when revealed to be by an autistic savant.

  • A song created by a grieving musician feels more meaningful than one made from algorithmic trend analysis.

AI-generated art often lacks this narrative. There is no personal journey, no trauma or triumph, and without that story, some argue, the art feels hollow, even if it is technically brilliant.

The machine can create beauty, but can it create meaning?

The Core Tension

This section reveals one of the core psychological tensions of AI in art: the mechanics of creativity may be similar between humans and machines, but the experience is fundamentally different.

  • We trust human creators to have intention, emotion, and identity.

  • We don’t trust machines to have authorship, only output.

This dissonance shapes how we assign value, legitimacy, and moral standing to creative works. The ethical debate over AI art cannot be disentangled from this deeply human desire for meaning, story, and soul.

Legal Frameworks: Fair Use, Copyright, and the Gaping Holes

“The law lags behind technology.”
— Every intellectual property lawyer, ever

As generative AI has rapidly evolved, it has exposed glaring gaps in current copyright frameworks, laws largely built for a pre-digital world. AI systems like DALL·E, Midjourney, and Stable Diffusion operate in ways that challenge core assumptions in intellectual property law: Who is an author? What constitutes copying? Is training on copyrighted content legal? These are unresolved questions at both national and international levels.

This section breaks down the key concepts, legal precedents, and emerging cases shaping the future of AI and creative ownership.

Copyright 101: The Basics

Under most copyright systems (especially in the U.S., EU, and UK), copyright protects:

  • Original works of authorship

  • Fixed in a tangible medium (e.g., an image, a recording)

  • That show a modicum of creativity

Copyright does not protect:

  • Ideas, facts, styles, or techniques

  • Works not created by humans (more on this in a moment)

The rights granted to copyright holders include:

  • Reproduction

  • Distribution

  • Derivative works

  • Public performance and display

These rights last for the lifetime of the author plus decades (e.g., 70 years in the U.S. and EU).

Fair Use: The Classic Defense (and Its Limits)

“Fair use” (U.S.) or “fair dealing” (UK/Canada) is a legal doctrine that allows limited use of copyrighted material without permission, under certain conditions. U.S. courts evaluate fair use using a four-factor test:

  1. Purpose and character of the use: Is it transformative? Is it commercial?

  2. Nature of the copyrighted work: Is it creative or factual?

  3. Amount and substantiality: How much of the original was used?

  4. Effect on the market: Does the new use harm the original’s market?

For AI training, these factors become murky:

  • The purpose (training a model) is arguably transformative but commercial.

  • The works used are typically creative (art, photos, illustrations).

  • Vast amounts of content are ingested, often the entirety of a work.

  • The effect on the market can be substantial, especially when AI outputs compete with human artists.

Courts have not definitively ruled on whether training a model on copyrighted data constitutes fair use. The lack of case law is a legal black hole, but that may soon change.

The Legal Battlegrounds: Ongoing and Landmark Cases

  1. Getty Images v. Stability AI (UK & US)
    Getty alleges that Stability AI trained its model on millions of copyrighted images without a license, including watermarked Getty stock photos, which Getty presents as evidence of unauthorized use.

  2. Andersen v. Stability AI (U.S. District Court, 2023)
    A group of artists (Sarah Andersen et al.) filed a class-action suit claiming copyright infringement, alleging Stable Diffusion was trained on their artwork without permission and that outputs can mimic their styles. The court dismissed some claims but allowed others to proceed.

  3. Zarya of the Dawn (U.S. Copyright Office, 2022)
    A comic book created with Midjourney illustrations was initially granted copyright, then partially revoked when the AI-generated images were found not to be human-authored. The written parts remained protected.

These cases illustrate the fault lines: training data acquisition, style mimicry, and authorship of AI-generated content.

Who Owns AI-Generated Art?

Under U.S. copyright law (and most international frameworks), only works created by humans are eligible for protection. This was reaffirmed in 2023 when the U.S. Copyright Office rejected copyright for works “autonomously generated by AI” without human input.

Some key takeaways:

  • Prompts alone typically do not count as authorship unless the input is complex and iterative enough to show creative control.

  • Fully AI-generated works (e.g., one-click prompts) are ineligible for copyright.

  • Collaborative human-AI work may be copyrightable but only the human contributions are protected.

This raises important economic and creative questions:

  • Who profits from AI-generated work?

  • Can AI art be protected from plagiarism?

  • What happens if a prompt mimics a known artist’s style?

The Gaping Holes

The current copyright system was not built for machine learning. Some of the most pressing issues include:

  • Training vs copying: Is using copyrighted work to train a model the same as copying it?

  • Derivative work: If an AI output closely resembles a copyrighted image, is that a derivative work?

  • Style protection: Copyright doesn’t protect style, but if a model can mimic a specific artist’s aesthetic, should it?

And perhaps most importantly:

Should artists have the right to opt out of, or opt in to, having their work used for AI training?

Right now, they don’t. Most datasets were scraped from the web (e.g., LAION-5B), often including copyrighted works from ArtStation, DeviantArt, Behance, Flickr, and more, all without consent.

Where It’s Going

Several proposals and initiatives are emerging:

  • Licensing frameworks: Artists could license work to be used in training models (e.g., Spawning’s “Have I Been Trained?”).

  • Legislation: The U.S., EU, and UK are beginning to explore regulatory frameworks for AI copyright and transparency.

  • Dataset transparency laws: Requiring companies to disclose what content was used to train models.

But for now, the law remains reactive, and creators remain in limbo.

Mimicry vs Plagiarism: Where’s the Line?

“Good artists copy. Great artists steal.” — Steve Jobs, channeling Picasso

For centuries, artists have learned by copying. In fact, in many artistic traditions, from classical painting to jazz improvisation, imitation is not only accepted but expected. It is how style propagates, how technique is internalized, and how creative evolution unfolds.

But there’s a difference between inspiration and appropriation, between influence and infringement, and when AI is added to the mix, those boundaries get far more difficult to navigate.

Human Mimicry Is Everywhere

Artists copy other artists all the time — consciously or not. Consider:

  • Musicians practicing riffs from famous solos before writing their own.

  • Comic book artists modeling their panel layouts on the masters before them.

  • Fan artists creating works in the style of franchises they love.

This kind of mimicry is often seen as homage, tribute, or part of the learning process. As long as the new work transforms or recontextualizes the original, it’s rarely considered theft.

Copyright law reflects this flexibility: it protects expression, not style. This means artists can legally create works in the style of another without infringing, unless they reproduce specific copyrighted elements.

How AI Changes the Game

AI art generation tools can now create images that mimic specific artists’ styles on demand, often with chilling precision, even when the original artist never agreed to their work being included in the training set.

With a single prompt, e.g., “a fantasy knight in the style of Greg Rutkowski”, a model can output images that closely mimic Rutkowski’s distinct aesthetic. He, along with many other artists, has publicly opposed this practice.

This raises a key ethical and legal question:

If a machine is trained to replicate your artistic voice without your consent, is that still inspiration — or is it plagiarism at scale?

The Legal Status of “Style”

Here’s the legal crux: style is not copyrightable.

You cannot copyright:

  • An aesthetic

  • A brushstroke pattern

  • A musical genre

  • A color palette or “vibe”

This legal loophole makes it difficult for artists to claim infringement when their style, rather than specific content, is reproduced. In the context of AI, that means tools can legally generate images in the style of living artists, as long as the outputs don’t contain direct copies of protected works.

But the ethics? Much murkier.

The Machine Doesn’t “Forget”

One key difference between human mimicry and AI is scale and permanence:

  • A human artist may study one or two influences at a time.

  • An AI model “remembers” millions of images, including works from artists who never gave permission.

  • And it can recreate visual styles not just broadly, but with surgical accuracy, upon demand, indefinitely.

While most models don’t store images in a literal sense, their ability to generate output that closely resembles source material blurs the line between mimicry and reproduction. This becomes especially problematic when:

  • A model can recreate highly specific compositions or characters.

  • A prompt can generate something nearly identical to a copyrighted work.

  • A style is associated with a single living artist whose income depends on commissions or licensing.

The Psychological Tipping Point

To many artists, this doesn’t feel like homage. It feels like:

  • Being replaced by a synthetic version of yourself.

  • Losing control over your own visual identity.

  • Having your labor fed into a system that can now outproduce you, faster and cheaper.

And unlike human imitators, AI doesn’t bring personal growth, reinterpretation, or a new cultural layer. It doesn’t engage in a creative dialogue. It just… replicates.

“I didn’t give anyone permission to clone me.” — common sentiment among digital artists

So Where Is the Line?

Mimicry becomes plagiarism when:

  • There’s no transformation or commentary.

  • The original creator is not credited.

  • The intent is to deceive or replace.

  • The output harms the original’s market value.

In human terms, courts often look for “substantial similarity” to a protected work. But with AI, the outputs are often unique and yet stylistically indistinguishable from an artist’s voice. It’s a kind of identity theft that isn’t covered by current copyright law.

Toward New Definitions

If we want ethical boundaries for AI art, we may need new terms and frameworks:

  • “Stylistic rights”: granting creators some control over how their aesthetic is used in commercial AI tools

  • “Derivative identity protections”: treating visual style as a form of intellectual identity, akin to a trademark

  • “Ethical mimicry standards”: requiring opt-in or attribution when using specific named styles in prompts

These are still just proposals. But the current framework, in which AI can legally imitate without legally infringing, leaves artists exposed, frustrated, and economically vulnerable.

Ethics: Exploitation, Consent, and Justice

“It’s not that I hate the tool. It’s that I never agreed to become the fuel.” — Anonymous digital artist

When the conversation around AI-generated art heats up, it’s rarely just about the images themselves. It’s about the feeling that something was taken. That behind every beautiful, machine-generated landscape or portrait lies a quiet trail of unpaid labour: the fingerprints of thousands of artists who never said yes.

The Unseen Cost of Innovation

For the developers of generative models, the training process is framed as neutral: a technical challenge solved by vast datasets. But those datasets weren’t created in a lab; they were scraped from the internet, from blogs, portfolios, online galleries, private archives, and stock photo libraries. Much of it belonged to individual artists, illustrators, and photographers; many still alive, still working, still trying to make a living.

They didn't consent to this. In many cases, they didn’t even know.

And yet, their work now lives inside black-box systems that can instantly produce art in their style, art that can be used commercially, art that may be indistinguishable from their own, and art that they may never be paid for, credited for, or even notified about.

This isn’t just about legality. It’s about justice.

The Consent Problem

Informed consent is a basic principle of ethics: in medicine, in research, even in journalism. But in the rush to train generative models, it was ignored. Artists were not asked. Opt-outs didn’t exist. Some platforms (like ArtStation) even required artists to fight to opt out, often without knowing they’d been opted in by default.

This erasure of agency cuts deep, especially for independent creators. It’s one thing for another artist to be influenced by your work after seeing it in a gallery. It’s another for your entire catalog to be ingested into a system that can now replace you, without your knowledge or control.

Consent, in this context, isn’t just about a checkbox. It’s about dignity: the right to determine how your voice is used in the world.

Power Asymmetry

There’s a deep imbalance at play.

The companies building and profiting from AI tools are often well-funded tech giants or startups backed by venture capital. The people whose work was used to train those systems? Often freelancers, gig workers, marginalized creators, and small business owners.

The system as it stands enables a transfer of cultural labour from the many to the few, with little transparency, no compensation, and no recourse.

This is not unlike historical forms of creative extraction: from unpaid musical sampling in the early days of hip-hop, to the appropriation of Indigenous designs by Western fashion brands, to the silent labour of women artists uncredited in workshops and studios throughout history. What we’re seeing is not new; it’s just automated now.

“It’s Just a Tool”

That’s the most common defense. AI is a tool, like Photoshop, like a camera. It doesn’t do anything by itself; people do. And that’s true, to an extent.

But tools are not neutral. Every tool carries the values of its makers. And when a tool is built on unconsented labour, when it can be used to reproduce someone else’s style with a few words, when it obscures the origins of its output, it’s not just a tool anymore. It’s a system.

And in any system, ethics comes down to choices.

  • How was it built?

  • Who does it benefit?

  • Who does it harm?

  • What values does it reinforce?

The Question of Justice

What would justice look like here?

Some say AI should be trained only on public domain or licensed work. Others propose opt-in systems, artist royalties, or style-specific attribution. Some want regulation. Others want outright bans.

There’s no perfect answer, but the current status quo is unsustainable. It erodes trust, devalues labour, and deepens the inequality already baked into the creative industries.

More importantly, it misses an opportunity. Because AI doesn’t have to exploit. It doesn’t have to replace. It could be collaborative, consensual, and transparent, if we choose to build it that way.

Toward Ethical Ground

Ethical AI art isn’t about stopping progress; it’s about steering it. About imagining a future where machines amplify creativity without silencing the creators who made them possible.

To get there, we need:

  • Consent-based data practices: Artists must be able to choose whether their work is included in training datasets.

  • Transparency in model development: We need to know what was used, how, and by whom.

  • Attribution standards: If a style is emulated, the origin should be disclosed, just as we cite sources in academic work.

  • Profit-sharing models: If your work helped train a commercial system, you deserve a slice of the value it generates.

  • Cultural humility: Recognizing that not all creative traditions can or should be “modeled.”

Above all, we need to recenter human agency in a conversation that too often treats creativity as raw material to be mined rather than a relationship to be honored.

“We don’t fear the machine because it’s creative. We fear it because it erases the people who were creative first.”

Possible Futures: Reconciliation, Regulation, and Responsibility

“The future is not a place we go. It’s a place we build.” — Ian McHarg

Now that the dust has settled, at least momentarily, the question is no longer whether AI belongs in art. It’s here. The question is: Can we build a version of AI that uplifts rather than exploits? A future where innovation and ethics aren’t at odds?

Let’s break that future down into three pillars: reconciliation, regulation, and responsibility.

Reconciliation: Finding a Middle Path Between Creators and Machines

The goal is not to undo AI, but to realign it with the values of creative communities.

Right now, many artists feel dispossessed, not just economically but culturally. Their contributions were folded into a machine that can now outproduce them. Reconciliation means creating systems where artists remain agents, not resources.

What this could look like:

  • Opt-in training datasets: Creators choose if and how their work is used.

  • Style licensing and attribution: Want to generate art in the style of a living artist? Get permission, or at least acknowledge the source.

  • Artist-first platforms: Imagine AI art tools where users can select from licensed styles, with royalties flowing back to the original artist.

  • Collaboration models: AI isn’t the creator; it’s the brush. Let artists use it to amplify their vision, not compete against it.

Some of this is already emerging. Platforms like Spawning.ai allow creators to opt out of training datasets. Adobe’s Firefly trains only on licensed or public domain content. These models aren’t perfect, but they’re a start: a proof of concept that ethics and capability can coexist.

Regulation: Building Legal Structures That Protect and Empower

The legal landscape is still catching up, but it must, and fast.

What’s needed isn’t a blanket ban on AI-generated work, but a nuanced legal framework that recognizes new forms of harm and new types of authorship.

Possible regulatory directions:

  • Training data disclosure laws: Require developers to list what datasets were used and allow artists to audit them.

  • Right of publicity or “style rights”: Grant creators partial control over commercial mimicry of their aesthetic identity.

  • Labeling and provenance standards: Make AI-generated content traceable. Users deserve to know what’s synthetic and where it came from.

  • Revenue-sharing legislation: If a commercial AI system was trained on your work, you should receive a share of the profits it enables.

Internationally, different models are being explored. The EU AI Act introduces risk-based regulation, including transparency mandates. Japan has controversially opened the door to training on copyrighted material without permission, citing innovation: a stark contrast to ongoing lawsuits in the U.S. and UK.

This regulatory moment is pivotal. We can either codify exploitation — or we can lay the foundation for creative rights in the machine age.

Responsibility: Cultural and Individual Choices

Even if the law changes and platforms evolve, ethics ultimately live with us: the users, builders, commissioners, and audiences.

What kind of art do we want to create? What kind of systems do we want to support? And how do we treat the people whose voices, literal or visual, made this technology possible?

A culture of responsible AI use might include:

  • Giving credit even when it’s not legally required.

  • Avoiding prompt phrases that co-opt named artists without permission.

  • Supporting artists financially and publicly, especially those marginalized or economically impacted by automation.

  • Using AI to explore new styles, not mimic existing ones.

  • Making transparency the norm, not an afterthought.

In short: a future where we use AI to expand creativity, not flatten it.

Not a Fork in the Road, but a Field of Possibilities

It’s tempting to imagine the future of AI art as a binary: one path leads to dystopia, the other to utopia. But in reality, we’re in a much messier, more promising place — a wide-open field of ethical design choices, cultural norms, and legal frameworks waiting to be shaped.

If we get it right, AI could become a tool for accessibility, cross-cultural exchange, and creative empowerment. A paintbrush for those who can’t hold one. A collaborator for those with ideas but no training. A way to bring more voices into the room, not fewer.

But if we ignore the costs, if we sacrifice consent for convenience, if we continue to extract without accountability, then we risk repeating the very cycles of exploitation that art should be able to transcend.

“Technology is not destiny. It is direction.”

Let’s choose it wisely.

The Human in the Machine

“We shape our tools, and thereafter our tools shape us.” — John Culkin (popularized by Marshall McLuhan)

We began with a paradox: if both humans and machines learn by seeing and doing, why do we judge their creations so differently?

Now, after traversing the laws, the psychology, the mimicry, and the ethics, the answer is clearer, though no less complex. We don’t just judge AI art by its output. We judge it by what we believe art is for, and who we believe it belongs to.

At its core, art is a human act. Not because it’s unique to us, but because it reflects us: our fears, our joy, our chaos, our culture, our contradictions. We value art not just for how it looks, but for where it came from, why it was made, and what it says about the person who made it.

When a machine generates something stunning, something that makes us feel, it’s not invalid. But it’s unanchored. There is no pain behind the brushstroke, no story behind the song, no lived experience behind the line of code. That absence may not make the output meaningless, but it does make it different.

And it makes us wonder: If we cannot locate the creator, can we still trust the creation?

Not a Rejection of the Machine, But a Reclaiming of the Human

This is not a call to wall off AI from art. It’s a call to root it in respect. To resist the temptation to erase the human from the creative process — even when the machine makes that erasure easy.

It’s about making sure the systems we build honour the people whose work, culture, and imagination made them possible.

And it’s about remembering that tools do not create values; people do.

So we must ask:

  • What kind of creative future do we want?

  • One of scale, speed, and automation at any cost?

  • Or one of consent, care, and collaboration?

These are not technical questions. They are moral ones. Cultural ones. Human ones.

The Real Soul in the Machine

There’s a moment, when using AI tools, that feels almost magical. You type a phrase: a strange dream, a vision from childhood, a surreal fusion of references. And suddenly, it appears. Something new. Something beautiful. Something that might even move you.

But that moment doesn’t come from the machine. It comes from you. The impulse. The intention. The story behind the prompt.

The AI may assist, but the spark, the thing that makes it art, still comes from a human need to express, to see, to be seen.

That spark is not replaceable. It never was.

“We’re not afraid the machine will become human. We’re afraid it will convince us we no longer need to be.”

As we step into this new era of creativity, let’s not lose sight of that spark. Let’s build tools that amplify it, protect it, and keep it burning.

Because in the end, the most important question isn’t what the machine can make.

It’s what kind of humans we want to be when we use it.

Disclaimer:
This essay is intended for educational and commentary purposes only. It does not constitute legal advice, nor should it be interpreted as a definitive account of ongoing litigation or intellectual property law. The views expressed are those of the author and do not reflect the positions of any affiliated institutions or individuals cited. While every effort has been made to ensure accuracy and clarity, the ethical and legal dimensions of AI-generated content are rapidly evolving. Readers are encouraged to consult legal professionals or primary sources for specific guidance.

References

Adobe. (2023). Adobe Firefly: Generative AI built for creators. https://www.adobe.com/sensei/generative-ai/firefly.html

Barthes, R. (1967). The death of the author. Aspen, (5–6). https://writing.upenn.edu/~taransky/Barthes.pdf

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Bloom, H. (1973). The anxiety of influence: A theory of poetry. Oxford University Press.

Boden, M. A. (2004). The creative mind: Myths and mechanisms (2nd ed.). Routledge.

Crawford, K., & Paglen, T. (2019). Excavating AI: The politics of images in machine learning training sets.

European Parliament. (2023). Artificial Intelligence Act: Provisional agreement on the proposal for a Regulation.

Getty Images (US), Inc. v. Stability AI Inc., No. 1:23-cv-00135 (D. Del. 2023).

Japan Agency for Cultural Affairs. (2023). Policy on copyright and AI-generated content. https://www.bunka.go.jp/english/

Kant, I. (1998). Groundwork of the metaphysics of morals (M. Gregor, Trans.). Cambridge University Press. (Original work published 1785)

Kozbelt, A., Beghetto, R. A., & Runco, M. A. (2010). Theories of creativity. In J. C. Kaufman & R. J. Sternberg (Eds.), The Cambridge handbook of creativity (pp. 20–47). Cambridge University Press.

OpenAI. (2024). System card for GPT-4.

OpenAI. (2024). System card for DALL·E 3.

Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92–96. https://doi.org/10.1080/10400419.2012.650092

Sandel, M. J. (2009). Justice: What’s the right thing to do? Farrar, Straus and Giroux.

Spawning AI. (2023). Have I Been Trained? https://haveibeentrained.com

Stability AI. (n.d.). Documentation on Stable Diffusion and dataset use.

U.S. Copyright Office. (2023). Copyright registration guidance: Works containing material generated by artificial intelligence.

Zarya of the Dawn, U.S. Copyright Office (2022). Final determination letter on AI-generated visual art.

JC Pass

JC Pass is a specialist in social and political psychology who merges academic insight with cultural critique. With an MSc in Applied Social and Political Psychology and a BSc in Psychology, JC explores how power, identity, and influence shape everything from global politics to gaming culture. Their work spans political commentary, video game psychology, LGBTQIA+ allyship, and media analysis, all with a focus on how narratives, systems, and social forces affect real lives.

JC’s writing moves fluidly between the academic and the accessible, offering sharp, psychologically grounded takes on world leaders, fictional characters, player behaviour, and the mechanics of resilience in turbulent times. They also create resources for psychology students, making complex theory feel usable, relevant, and real.

https://SimplyPutPsych.co.uk/