AI content will dominate the internet. What advertisers need to do now
Opinion
Slop is becoming harder to categorise as new AI technologies produce entirely new creative formats that don’t fit neatly into existing buckets. What should advertisers do to prepare?
“Slop” was Merriam-Webster’s Word of the Year for 2025, and it’s hard to argue with the choice.
AI tools from companies like OpenAI and Google hold enormous creative potential, but some actors are exploiting them to generate low-quality text, images, and video at an unprecedented scale.
In this sense, “slop” is a fitting term.
It captures the reality facing advertisers and consumers as they navigate what a recent piece for The Media Leader calls the “sloppification” of the internet—an ecosystem flooded with AI-generated content of wildly uneven quality.
It’s harder than ever to know what’s real, credible, or valuable – and this challenge won’t just intensify in the year ahead. It will grow more complicated.
Slop isn’t one-size-fits-all
Merriam-Webster defines slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” That definition tracks, but “low quality” is increasingly subjective. Is it about creative execution? Subject matter? Authenticity? More often, slop has become a know-it-when-you-see-it phenomenon.
That ambiguity won’t last. In 2026, slop will become harder to categorise as new AI technologies produce entirely new creative formats that don’t fit neatly into existing buckets.
At the same time, some creators are intentionally embracing “AI slop” as an aesthetic, producing tongue-in-cheek, surreal, or uncanny content precisely because audiences engage with it.
This creates a real tension for advertisers. What one brand sees as off-brand or low-value may be exactly the environment another brand wants to tap into.
Importantly, this is distinct from made-for-advertising (MFA) content, where the content exists primarily as a vehicle for monetisation. With some forms of AI slop, the audience experience is the point.
What advertisers should do
That’s why advertisers need transparency and control. Suitability isn’t absolute; it’s preference-driven. Brands will need to approach slop the same way they approach suitability overall: define what qualifies as slop for their brand, apply those definitions consistently across platforms, and decide whether to avoid it – or lean into it – based on their goals.
Begin by auditing a sample of your current placements to identify where AI content already appears, then use those examples to build your brand-specific definition of what’s acceptable versus what’s not.
The rising risk of hyper-realistic AI
While AI slop is a growing challenge, high-quality AI content may pose an even bigger reputational threat if used irresponsibly.
As AI capabilities accelerate, the line between “slop” and “quality” is blurring and, in some cases, disappearing entirely.
Tools like Sora and Nano Banana can now generate cinematic, hyper-realistic visuals and sophisticated narratives that were impossible – or prohibitively expensive – just a year ago. As these tools improve, high-quality AI output can become far more dangerous than low-effort slop when deployed in harmful or deceptive ways.
Consider an ad appearing next to AI-generated “camera footage” depicting violent behaviour, fabricated news reports about political events or deaths, or synthetic videos of public figures saying harmful things they never said. When AI content becomes indistinguishable from reality, the stakes rise dramatically. Transparency is no longer a nice-to-have. It becomes essential to protect brand equity, consumer trust, and the broader information ecosystem.
What advertisers should do
In this environment, advertisers benefit from clear, reliable signals to distinguish creative experimentation from content designed to mislead or manipulate, and from the ability to avoid the latter.
One option: push your media partners and platforms to provide AI content flags or labelling, even if imperfect, so you can make more informed placement decisions rather than operate in the dark.
AI content will outpace human content
Historically, human-created content was more expensive and slower to scale than automated content, but it was generally considered higher quality.
AI has changed that dynamic. The quality of AI output is improving rapidly, even as the tools themselves become faster and cheaper. As a result, AI will soon represent a meaningful share of what people watch, read, and engage with. I expect it will become the dominant form of content online.
We’re already seeing this shift happen. AI-generated articles and human-written articles on the open web are now roughly equal in volume. That equilibrium won’t last. As generative tools improve and adoption accelerates, AI text will outpace human text within a year, with visuals likely following soon after.
What advertisers should do
This imbalance means advertisers cannot simply “block all AI content” as a long-term strategy, nor should they. The real question is no longer whether content is AI- or human-generated. It’s whether that content is suitable and aligned with a brand’s values and goals, regardless of how it was created.
Build tiered suitability guidelines that account for context—AI content might align with your brand in gaming or entertainment environments, but pose real risk in other categories.
Where we’re headed
This is a scary moment, sure, but it shouldn’t be paralysing. Content creation is changing, and advertisers need to be prepared for slop and other forms of AI-generated content.
The most prepared advertisers will recognise the need to adapt and to maintain control as AI reshapes the media landscape.
This shift isn’t stopping, and the goal shouldn’t be to stop it. The goal is to understand it, classify it, and ensure it works for advertisers, not against them.
Nisim Tal is CTO at DoubleVerify
