Measurement is in danger of becoming a dirty word
Opinion
Agencies find it easier to blame measurement geeks than admit their failures in trading transparency and inaction over ad fraud.
I didn’t go to The Media Leader’s ‘The Future of Media’ event — it seems inappropriate for an old guy from the full-service era to attend anything with ‘future’ in the title. As one delegate put it: “This is The Future of Media, not the future of 1980s ad agencies.”
But (and there’s always a but) it does no harm to learn from past mistakes.
And here’s one waiting to happen, as reported during a closed-door debate featuring senior industry leaders: “Ultimately, the room appeared to agree in principle on redefining the importance of measurement”.
What’s wrong with that? Surely measurement is key?
Indeed, it is. I started my career at an ITV station in research and measurement. I headed the media research function at an ad agency and sat on both technical sub-committees and management committees at various joint industry committees (JICs).
And, with Dr Simon Broadbent, I was responsible for persuading the TGI to add what we called ‘lifestyle questions’, developed as a proprietary Leo Burnett study, to the industry survey, where they sit to this day. If there were a Media Mastermind (actually, there was such a thing once), my specialist subject would be the failed audience measurement initiatives I have known.
Measurement is losing the blame game
So why am I saying the consensus that we need to ‘redefine the importance of measurement’ is wrong?
Because it’s passing the buck. It’s easier and far more politically acceptable in agencies to blame those measurement geeks than it is to talk about failures in trading transparency, the acceptance of fraud and the end result — a lack of trust.
Instead, let’s talk about measurement in the certainty that we’re deflecting and kicking the can down the road.
Years ago, the media agencies decided TGI was both essential and far too expensive, so they grouped together to produce their own superior version. That went precisely nowhere (with apologies to Jim Kite at UM, who did actually do something about it).
Then the agencies decided to group together and do something through the IPA. Luckily for them, the IPA Head of Research back then was the brilliant Lynne Robinson, who came up with Touchpoints.
This, it was agreed, was a great idea and, to keep it untouched by vendor biases, it would be funded by the agencies. That did happen for (I think) one cycle. Touchpoints is still a great idea, but it’s no longer the agencies’ great idea.
Not part of the sales story either
On the sell side, what about the sterling efforts of the media marketing agencies — Thinkbox, Radiocentre, Magnetic, Newsworks?
They’ve done excellent work over the years — but how much of that output has ever translated into how media is traded?
Or, on the advertiser side, there’s ISBA’s Origin: an initiative that moves the ball forward both in what it’s trying to do and in how it’s funded.
Is the media industry behind it, supporting its clients? Hardly.
The broadcast vendors seem to want nothing to do with it and the agencies (ironically) moan on about how long it’s taking.
Giving researchers a hospital pass
In about three weeks I’ll be at the annual ASI audience measurement conference, widely recognised as the best of its type. The future of measurement will, as always, be very much front and centre.
How many agencies and advertisers will attend to contribute to the debate? Very, very few.
“After Cannes we have no budget left” is the typical agency response.
Which is more important: hobnobbing with clients or contributing to improving measurement? The answer, of course, is both — yet one seems to matter rather more to the people running agencies than the other.
Finally, what about effectiveness measures? Nick Manning has argued convincingly that that’s where the future lies, and he’s right. But who’s going to drive such initiatives? Surely the agencies should take the lead?
But if you’re not trusted to trade transparently and if you’ve followed your holding company’s lead and managed to look the other way when fraud is discussed, then why on earth would any client trust your in-house operation with any unbiased measurement of effect?
It’s unusual for an in-house unit to recommend a change of plan because what the company did last time didn’t work as hoped.
There are ways for agencies to lead their clients towards effectiveness measures, and to take real, actionable initiatives that help to regain trust (essential before anything else). But I’m afraid showing up to an event and throwing a hospital pass to under-resourced research guys isn’t the answer.
Especially if the heart of your plan is to change the description from ‘measurement’ to ‘impact’.
Brian Jacobs is founder of Crater Lake & Co and BJ&A. He has spent over 35 years in advertising, media and research agencies, including Leo Burnett, Carat and UM.