When AI Becomes Persuasion

A flood of synthetic persuasion is reshaping the internet

The latest wave of artificial intelligence content spreading across social media has taken two very different forms: flirtatious “relationship” podcasts dispensing advice about how women should please men, and taunting, Lego-style wartime memes ridiculing President Trump and celebrating Iran. But taken together, they point to the same development: cheap synthetic media is becoming a tool for shaping belief at scale, blurring the line between entertainment, influence and propaganda.

In one corner of the internet, AI-generated hosts with polished voices and studio-ready backdrops have amassed millions of views by delivering familiar prescriptions about femininity, submission and how to “keep a man happy.” In another, pro-Iran accounts have churned out viral animated clips and fabricated battle imagery designed to mock the United States and dramatize the conflict in ways that are difficult to verify and easy to share.

What links them is not simply that they were made with AI. It is that they are engineered for the social platforms where attention is won through speed, emotional punch and the appearance of authenticity.

The new influencer economy, without influencers

The fake relationship-guru videos present themselves as intimate advice, borrowing the visual grammar of podcasts: close-up microphones, serious conversation, confident hosts and the polished cadence of personal expertise. But the “people” onscreen are not always real. They are AI-generated personas, built to seem authoritative and relatable while delivering blunt, highly gendered messages.

The appeal is not only ideological but commercial. Some of the accounts driving these videos also funnel viewers toward courses and services promising to teach them how to create AI influencers of their own, turning synthetic identity into a business model. In that sense, the videos are both product and advertisement, using provocative cultural messaging to attract traffic and then monetizing the machinery behind it.

Researchers have long warned that generative AI systems do not merely invent new content out of thin air; they often reproduce and intensify patterns already embedded in the data on which they were trained. That includes racial, cultural and gender bias. The result is a feedback loop in which old stereotypes return in a new format — smoother, cheaper and more scalable than traditional influencer content.

What once required a charismatic creator, recording equipment and a loyal audience can now be simulated in bulk. A single operator can produce endless clips, each tailored to trigger argument, affirmation or outrage.

War by meme

The same dynamics are increasingly visible in geopolitics.

Since the outbreak of the Iran conflict, social platforms have been inundated with misleading, recycled and AI-generated visuals. Among the most visible examples are pro-Iran “Lego” cartoons and other synthetic videos that depict Trump and the American military as buffoonish, weak or humiliated. The clips are designed less to inform than to land a punchline — and in a crowded information war, ridicule can travel faster than fact.

Some of this material has spread far beyond fringe accounts. Reports have documented pro-Iran networks pushing the content across major platforms, with some posts amplified by Iranian state media. Alongside the cartoons, fabricated or miscaptioned footage has circulated widely, including videos falsely presented as current scenes from the battlefield.

The effect is a digital fog of war in which viewers struggle to distinguish documentary evidence from visual theater. That confusion is itself useful. When trustworthy sourcing becomes difficult, audiences may gravitate toward the version of events that feels most emotionally satisfying, whether that means patriotic, vengeful or simply entertaining.

The recent conflict has shown how readily AI can be folded into wartime narrative-building. Synthetic clips can fill gaps where no real footage exists, embellish victories, invent attacks or caricature enemies with little cost and almost no delay. They can be optimized for the meme economy rather than the nightly news.

Why this matters now

This moment is not defined simply by the existence of fake content. Fabricated images, propaganda and manipulative messaging all predate generative AI. What has changed is the speed, volume and polish with which they can now be produced.

AI tools allow operators to create avatars, clone voices, animate scenes and imitate amateur authenticity on demand. The costs are low, the barriers to entry have dropped and the output can be tailored to whatever performs best on any given platform. A creator no longer needs a camera crew, a battlefield correspondent or even a human face. They need only prompts, software and a sense of what will travel.

That shift matters because the internet’s persuasion systems increasingly reward what synthetic media does well: emotionally charged, visually sticky, highly repeatable content. Whether the goal is selling an AI-influencer course, reinforcing sexist social scripts or shaping public perceptions during a war, the mechanics are strikingly similar.

There is still uncertainty about how much these campaigns truly persuade, as opposed to merely attracting clicks and shares. A viral video may be more successful at harvesting engagement than changing minds. But in networked media environments, attention is not separate from influence. Repetition can normalize a message even when viewers encounter it as irony, entertainment or background noise.

The limits of labeling

Technology companies and policymakers have promoted ideas such as watermarking, labeling and stronger moderation as possible answers to the rise of synthetic media. Those measures may help at the margins. But the recent proliferation of AI propaganda and AI influencers suggests that disclosure alone may not solve the problem.

Many users are not sharing this material because they have carefully verified its authenticity. They are sharing it because it is funny, provocative, enraging or affirming. In other words, the exact qualities that make it hard to contain are also what make it profitable and politically useful.

That leaves platforms confronting a more difficult challenge than simple detection. They are dealing with a media environment in which fabricated content can succeed precisely because it does not behave like a traditional fake. It behaves like culture.

The spread of synthetic relationship coaches and wartime meme propaganda may look like two separate curiosities from opposite ends of the internet. In fact, they are part of the same story: a rapidly expanding ecosystem where AI-generated personalities and images are being used to sell, persuade, flatter and manipulate, often all at once.
