The Rise of the AI Influencer: Will We Soon Try to Manipulate Machines the Way We Do People?

In recent years, artificial intelligence has woven itself into almost every facet of our lives, from personal assistants like Siri and Alexa to recommendation algorithms on social media and streaming platforms. As AI’s role in guiding our decisions grows—suggesting what we should buy, watch, and read—its potential as an "influencer" has skyrocketed. And as it turns out, some are already working to influence the AI itself.

Imagine a future where businesses and individuals work not to convince you directly to purchase a product but to convince an AI model to suggest it to you. Much like social media influencers carefully curate their image and messaging to shape opinions, people and organizations may soon actively curate their digital presence to sway AI, shaping how it interacts with consumers. If that vision seems far-fetched, look no further than the extensive world of search engine optimization (SEO) and social media algorithms. The incentives to influence AI are very real, but this new kind of influence—what we might call "AI Optimization" or "AIO"—comes with complex psychological, social, and ethical questions that society has barely begun to consider.

The Psychology of Influence: From Human Persuasion to Machine Manipulation

At the heart of this trend lies a well-studied human trait: our desire to influence others. Humans have evolved complex social behaviors to sway those around them—whether to convince someone to trust them, buy from them, or follow their lead. Psychologists have long studied how we craft narratives, use authority, and evoke emotions to manipulate human behavior. But can these strategies apply to machines? Oddly enough, many of the same principles may translate, albeit in surprising ways.

AI models, especially those built on machine learning and natural language processing, are trained on vast pools of human-generated content. This training is where influence can start. A company that creates a certain type of product—say, eco-friendly cleaning supplies—might realize that certain keywords and product features make its products more likely to be recognized and recommended by the AI. Just as social media influencers meticulously choose their language and style to resonate with human followers, businesses might begin carefully curating their language, imagery, and even the very products they create to cater to an AI’s "preferences."

The strategies companies use today to gain influence on Google, Amazon, and Instagram may soon extend to a much broader set of AI-powered systems. But there’s a darker side: these new forms of influence aren’t always visible to users, making it difficult to recognize when we’re encountering a machine-manipulated recommendation. This subtle, almost invisible form of persuasion raises new questions about transparency and trust in AI.

The Social Dynamics of AI Influence: A New Field of Competitive Manipulation

Much like in the world of SEO or influencer marketing, competition to gain the AI’s favor is inevitable. Just as influencers constantly optimize their posts to beat the algorithm, companies will soon optimize their digital strategies to gain the attention of AI systems. This new form of competition goes beyond creating content that resonates with human audiences; it will also involve making content that’s algorithm-friendly.

Take, for example, an AI personal assistant like Siri or Alexa. Imagine that consumers start using their AIs not just for basic questions but for product advice—“Which vacuum cleaner should I buy?” or “What’s the best moisturizer for sensitive skin?” In such scenarios, companies have a powerful incentive to make sure their products become the "top choice" of these assistants. To achieve this, they could craft their product descriptions, reviews, and data in ways that align with the AI’s selection criteria.
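To make that mechanic concrete, here is a minimal, purely illustrative sketch in Python. It assumes a toy retrieval-style ranker built from TF-IDF vectors and cosine similarity (via scikit-learn), a stand-in for whatever proprietary criteria a real assistant applies; the query, brands, and product copy are invented for the example.

    # A toy ranker: score product descriptions against a user query.
    # TF-IDF plus cosine similarity is an illustrative stand-in, not how
    # any real assistant actually selects products.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    query = "best vacuum cleaner for pet hair on hardwood floors"

    descriptions = {
        "Brand A": "Powerful vacuum with a HEPA filter and long battery life.",
        # Brand B's copy deliberately echoes the query's own vocabulary.
        "Brand B": "The best vacuum cleaner for pet hair, gentle on hardwood floors.",
    }

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([query] + list(descriptions.values()))

    # Similarity between the query (row 0) and each description.
    scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
    for brand, score in zip(descriptions, scores):
        print(f"{brand}: {score:.2f}")
    # Brand B scores higher purely because its wording mirrors the query,
    # not because its product is objectively better.

In this toy setup, nothing about Brand B’s product improved; only its wording changed, which is exactly the kind of optimization the scenario above invites.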

In the digital advertising world, there is already an entire industry dedicated to studying and gaming these algorithms, employing "adtech" that analyzes and adjusts keywords, placements, and images. Now, that effort could shift to influencing the data that AI systems learn from—intentionally or not. This dynamic opens a whole new field for competitive influence, where the goal isn’t just to capture consumer attention but to capture the AI’s recommendation engine itself.

The competition could become especially intense for companies in highly commoditized markets, such as e-commerce or online services. In such spaces, the AI recommendation may come to replace traditional brand loyalty as a primary factor driving consumer choices. Thus, the pressure to influence AI systems could, over time, reshape how products are branded, described, and even conceptualized.

Leveraging Data Inputs: The New “Flood the Zone” Tactic

In a bid to influence AI models, companies might adopt a tactic that can be described as "flooding the zone"—generating vast quantities of strategically crafted content to bias the AI’s training data. Because AI models learn from a vast range of internet content, companies and influencers have an incentive to saturate the web with favorable content, optimizing it with keywords, positive reviews, and testimonials.

Take the health and wellness industry, where public perception is highly influenced by recommendations and testimonials. Imagine an influx of articles, forum posts, social media profiles, and websites, all touting a particular supplement or fitness product. With enough saturation, the AI model being trained on this pool of data might come to "understand" that this product is highly recommended in certain scenarios or that certain language is more effective in promoting wellness products. The end result? The AI may favorably mention or recommend this product to users, all because of the carefully curated content it was trained on.
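To see why saturation works, consider a deliberately naive sketch: if a model’s sense of a product were shaped by how often that product co-occurs with positive language in its training corpus, flooding the corpus would shift the outcome. The product names, posts, and word-counting "favorability" signal below are invented for illustration and stand in for far more complex real training pipelines.

    # A naive "favorability" signal: count positive words in sentences
    # that mention a product. Real model training is far more complex;
    # this only shows how corpus saturation skews a frequency-based view.
    POSITIVE = {"best", "effective", "recommended", "trusted"}

    def favorability(corpus, product):
        hits = 0
        for sentence in corpus:
            words = sentence.lower().split()
            if product in words:
                hits += sum(1 for w in words if w in POSITIVE)
        return hits

    organic = [
        "VitaBoost is fine but overpriced",
        "I tried VitaBoost and saw no difference",
        "ZenFuel is the most effective supplement I have used",
    ]
    # Simulated astroturf: many near-identical posts pushing one product.
    flood = ["VitaBoost is the best and most trusted supplement recommended by experts"] * 50

    for label, corpus in [("before flood", organic), ("after flood", organic + flood)]:
        print(label, {p: favorability(corpus, p) for p in ("vitaboost", "zenfuel")})
    # After flooding, VitaBoost dominates the signal even though the
    # organic reviews were lukewarm.

The point is not the arithmetic but the asymmetry: whoever can generate the most content gets to define what the signal appears to say.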

For consumers, this subtle tactic might go undetected, but the effects could be profound. The recommendations we receive from AIs could be less about objective merit and more about the strength of a company’s content game—transforming the AI from an impartial assistant into a new channel of influence, one that’s arguably more subtle and pervasive than anything we’ve seen before.

Ethical and Societal Implications: New Questions of Trust and Transparency

The push to influence AI models raises some thorny ethical and societal questions. If companies or individuals manage to effectively "game" AI recommendations, can users trust the advice these systems offer? Is it ethical for companies to attempt to shape AI’s outputs for their benefit, especially if users are unaware that such influence is even possible?

As AI systems gain more control over what we see and buy, there is a growing need for transparency and regulation. Today, social media platforms like Instagram and YouTube are required to label paid promotions and sponsored content so that users are aware when influencers are financially motivated. Similar transparency measures may soon be necessary for AI systems. Imagine a scenario where an AI personal assistant is legally required to disclose when it’s recommending a product that has been influenced by paid or optimized content.

However, enforcing transparency with AI systems is more challenging than with human influencers. A human influencer can disclose a paid partnership with a simple tag, but disclosing the "influence" on an AI algorithm is far more complicated, especially given the opaque nature of many machine learning models. How do you disclose to users that a recommendation they received from Alexa was influenced, at some point and without the AI itself being aware of it, by data inputs engineered by the company selling the product?

Furthermore, there’s the question of how these dynamics impact social inequality. Larger, wealthier companies will likely have a significant advantage in the realm of AI influence, as they can invest in content generation and optimization tactics that smaller companies can’t afford. This could further entrench already powerful players, leading to monopolistic effects and limiting the diversity of options available to consumers.

A New Era of Influence: Where Do We Go from Here?

As we look to the future, it’s clear that the rise of AI influence represents a profound shift in how companies will seek to reach consumers. This new era could bring significant benefits: AI recommendations, when done ethically, could help people make informed choices in a world where they are increasingly overwhelmed by options. But without adequate oversight, this new realm of AI influence could easily be manipulated in ways that are both subtle and pervasive, ultimately undermining consumer trust in AI systems.

To navigate these waters, society will need to develop new frameworks for AI ethics, transparency, and perhaps even new fields of study focused on AI manipulation tactics. Psychologists, sociologists, and AI ethicists will have critical roles to play, helping us understand not only how companies can influence AI but also how these dynamics affect human behavior and societal trust.

Simply Put

Ultimately, AI influence will force us to rethink our relationship with technology. As our trust in digital assistants, recommendation engines, and virtual advisors grows, we must remain vigilant about the potential for unseen influences to shape our choices. The age-old art of influence is evolving, and the next frontier may not be to persuade you directly but to persuade the AI assistant that guides you.

JC Pass

JC Pass is a writer and editor at Simply Put Psych, where he combines his expertise in psychology with a passion for exploring novel topics to inspire both educators and students. Holding an MSc in Applied Social and Political Psychology and a BSc in Psychology, JC blends research with practical insights—from critiquing foundational studies like Milgram’s obedience experiments to exploring mental resilience techniques such as cold water immersion. He helps individuals and organizations unlock their potential, bridging social dynamics with empirical insights.
