The future of agencies: the hidden cost of designing for AI
Opinion
AI is reshaping how brands are discovered, ranked and recommended. If agencies design for algorithms before audiences, they risk diminishing the very influence they claim to sell.
“Why do you need to learn things? You can just Google the answer.”
That was my five-year-old, mid–half-term meltdown. The runner-up line — “Yeah, but what did Prince Andrew actually do?” — required emergency parental censorship powers.
Parenting, I’ve discovered, is hypocrisy in the name of civilisation. At work, I defend free expression, but at home, I run a benevolent dictatorship.
But on Google, she’s not wrong…
Why memorise anything? Why struggle through ambiguity? Why develop judgment when a machine can surface the answer in milliseconds?
That casual and dangerous shift — a world in which intelligence is increasingly mediated through machines — is laid bare within WPP’s Advertising Intelligence Framework report.
WPP’s AI measurement logic
The document lays out five pillars that will determine which companies “own” intelligence by 2030: Data Assets, AI capability, Distribution, Transaction infrastructure and Content integration.
It calmly informs us that “the future of advertising will increasingly involve not just human consumers making purchase decisions, but AI agents and bots acting on behalf of users to research, compare, and transact.”
WPP then asks the obvious question: “When AI bots make purchase decisions for consumers, will they recommend our products or services?”
Mass media and brand-building are acknowledged as still important, politely (almost, I sensed, nostalgically). But the gravitational pull is unmistakable. Big Tech logic — the view that power comes from data scale, algorithmic optimisation and transaction control — has been absorbed into the Big Agency worldview.
In Seeing Like a State, political scientist James C. Scott argues that institutions exert power by making society “legible”: simplifying messy human reality into metrics that can be managed. What can be quantified can be governed; what cannot be measured quietly stops mattering.
And look at what this framework measures:
* Data depth.
* Recommendation engine performance.
* Distribution control.
* Closed-loop transactions.
Notice what it does not measure:
* Cultural resonance.
* Original thought.
* The awkward, inefficient act of changing someone’s mind.
Who wins?
Alphabet, obviously. If agencies begin optimising for “structured feeds and conversational retrievability”, guess who controls Google search ranking, YouTube recommendation and Android distribution?
Then there’s Amazon, which benefits if agencies prioritise questionable metrics like ROAS (return on ad spend), which rewards short-term conversion over long-term brand equity.
Meanwhile, Meta benefits if creative is engineered primarily for feed dynamics. When recommendation engines are central to power scoring, agency briefs will be optimised for them and little else.
And, of course, OpenAI and Microsoft benefit if conversational AI becomes the discovery interface. We’ve seen this before with Google and SEO: whoever controls the dialogue layer controls inclusion. This bleeds into content, too: as a trainee news reporter, I lost count of how many times I was told to “write headlines for Google, not for readers”.
Who loses?
Independent publishers who have so far survived the minefield of Google rankings, Facebook news feeds and, latterly, AI plagiarism now face a fresh challenge: being visible to AI at all. It is already happening, under the suitably non-threatening name of “zero-click”.
Broadcasters and premium video publishers will struggle if the industry logic becomes all about privileging machine-readable signals over long-term brand priming.
Smaller brands, without the kind of structured data depth that a Unilever or Mars can build, will risk no longer being visible to AI systems.
And agencies? This is where WPP’s report really gets dangerous.
Intelligence agnostics
If the future of advertising holding groups is to become ‘optimisation partners’ for AI systems, there is a real cost.
The agency shifts from shaping desire to feeding the systems that rank it.
By consolidating creative brands while elevating data, AI and commerce narratives, the message is clear: machine optimisation is strategically equivalent to human persuasion. Influence in a mind, influence in a model — take your pick.
Let me explain this like I would explain to my five-year-old.
You have a robot helper who does your shopping for you.
The robot looks at lots of cereal boxes and has to pick one. It doesn’t know which one tastes nicest and it doesn’t have feelings; it just looks at signals.
Now imagine you’ve seen one cereal brand on TV loads of times. You’ve heard your parents talk about it. You recognise the box straight away.
When the robot is ‘deciding’, it might think: “Oh, this is the one the human already knows. This is probably safe.”
This is “brand salience”: how familiar and noticeable a brand is in your mind.
But here’s the catch: if the robot only chooses from what it can recognise or model, then persuasion is reduced to probability. Large language models’ output comes from the most statistically likely sequence based on patterns found in vast training data. Not judgment, taste, or emotion.
In this brave new world, agencies can claim brand-building still ‘matters’, but only as an input into a probability ranking system.
That is not the same thing as changing a person’s mind.
Are you advertising to readers or librarians?
So the risk isn’t that AI replaces human intelligence. It’s that the industry forgets the difference.
I tell my daughter that the internet is basically a huge library: you can find amazing content like Omar Oakes’ The Media Leader column, or you can find a box of rusty blades. But the shelves are not curated by Katie, the nice librarian lady; it’s a small group of gargantuan companies optimising for engagement and ad revenue.
We used to talk about the need for ‘media literacy’ to spot misinformation and bias. But the challenge today is even greater: recognising that our choices are filtered by systems designed to maximise interaction, not understanding.
The mistake my daughter makes — that ‘knowledge lives in Google’ — is just like our industry believing that persuasion lives in an AI model.
If we design primarily for the librarian, rather than the reader, we shouldn’t be surprised when the library starts to look the same everywhere.
Being really good at predicting isn’t the same thing as persuading.
Omar Oakes was the founding editor of The Media Leader and continues to write a column as a freelance journalist and communications consultant for advertising and media companies. He has reported on advertising and media for 10 years.
