The RAB should feel proud to have made a strong case for radio’s role in media effectiveness, says David Brennan. However, as an industry, we’re going to have to rethink our approach to measuring effectiveness – and fast.
Another week, another effectiveness study. Two weeks ago it was radio’s turn to tell us it is one of the most effective pound-for-pound media channels in terms of delivering ROI.
The study – ‘The ROI Multiplier’ – shows strong, consistent returns across a large number of campaigns for most media channels – but especially radio – and provides further insight via Radio Gauge to identify ways advertisers can optimise their returns (the main one seems to be ‘invest 20% of your budgets in radio’).
Don’t get me wrong; I think the RAB has commissioned an impressive piece of work, based on plenty of cases, millions of data points, the support of the major media agency groups and analysis conducted by the reputable Holmes & Cook.
Not only that, but it emerges with the dream headline that radio provides a higher ROI than all other media channels apart from TV, delivering returns of almost eight times the media investment.
Of course, that is part of the problem; the results don’t tally with the multitude of other effectiveness studies out there, and we are left with that residual sense of doubt (can it be true?) and uncertainty (what does it all mean?).
Am I right to feel so cynical? After all, I’m guilty myself, having commissioned a number of effectiveness studies when I was at Thinkbox, and I still feel their findings are valid and the insights they produced are genuinely helpful to media planners.
They were popular with advertisers and agencies seeking to justify their media investments and, in my humble opinion, produced a fair amount of ROI themselves. No wonder there have been so many rival studies hitting the headlines ever since.
The reason for the emergence of large-scale effectiveness studies like these is the rapid advance of statistical analysis and the availability of data over the past couple of decades. This is smart data at its finest: focusing on the most important metrics (sales, profit, investments, competitive activity, and so on) and applying rigorous statistical analysis to explore the relationships between them.
It is now not uncommon to be presented with effectiveness studies covering hundreds, if not thousands, of campaigns across a range of market sectors. Plus, of course, they fit with the zeitgeist; anything that can measure and optimise ‘value’ has to be valued itself in these procurement-driven times.
So, the RAB should feel proud to have made a strong case for radio’s role in the effectiveness mix. Like all of the ‘traditional’ media, so easily written off just a few years ago, radio appears to punch above its weight when the powerful forces of multiple linear regression analysis evaluate its performance in the cold light of data.
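To make that a little less ‘black box’ (a complaint I will come back to), here is a minimal, purely illustrative sketch of the kind of regression these studies rest on. The channel names, spend figures and response coefficients are all invented for the purpose of the example; real econometric models are far richer, adding adstock and decay effects, seasonality, price, distribution and competitive activity across hundreds of campaigns.

```python
# Purely illustrative toy marketing-mix regression with invented data.
# Real effectiveness studies use far richer models and real campaign data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_weeks = 104

# Hypothetical weekly media spend (£000s) and a synthetic sales response.
tv = rng.gamma(2.0, 50.0, n_weeks)
radio = rng.gamma(2.0, 15.0, n_weeks)
online = rng.gamma(2.0, 10.0, n_weeks)
base_sales = 500.0
sales = base_sales + 1.2 * tv + 0.9 * radio + 0.4 * online + rng.normal(0, 40, n_weeks)

# Multiple linear regression: sales explained by spend across channels.
X = sm.add_constant(np.column_stack([tv, radio, online]))
model = sm.OLS(sales, X).fit()

# Fitted coefficients read as incremental sales per £000 of spend;
# relating them back to cost gives the 'pound-for-pound' ROI figures quoted.
print(model.params)     # [intercept, tv, radio, online]
print(model.rsquared)
```

The point of the sketch is simply that the outputs – coefficients, returns per pound – are only as good as the campaigns fed in and the assumptions baked into the model, which is where the questions below come in.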
But…
As an industry, I think we are going to have to rethink our approach to measuring media effectiveness soon; as these studies become more common – and, arguably, more diverse in their findings – we get further away from a ‘definitive’ view of how effectiveness actually works.
For example, the Radio Centre’s ‘ROI Multiplier’ data is biased towards radio campaigns, which featured in 464 of the media campaigns analysed. The equivalent figures for TV (122), press (122), outdoor (41) and online (12!) were much lower. Already, issues of selectivity and representativeness are raised, although, to be fair to the RAB, all effectiveness studies are, by definition, based on self-selecting samples.
The sales ROI figures quoted in the RAB study are generally much higher than most equivalent studies report – even if we set aside the strong skew towards retail campaigns, which tend to produce higher than average advertising ROI figures. They are much higher on average than those quoted in the IPA databank of effectiveness awards entries, for example.
But perhaps most significantly, the results show a very different pattern to most other recent studies, which immediately raises the question of ‘who to believe?’ I reckon the common answer amongst marketers and media agencies will be ‘whoever validates the decision I was going to take in the first place!’ Not that such an outcome is a bad thing – I believe most media insight is based on post-rationalisation – but in terms of changing hearts and minds, such analyses have a natural limit.
There is another issue with effectiveness studies; to those of us without a degree in statistical analysis they are just a little bit too ‘black box’. Sure, they produce numbers which are reassuringly precise and consistent, and more often than not tend to reflect how the market works.
They are based on increasingly high quality data sets (no more of those 8-brand online effectiveness studies these days, thank goodness) and are overseen by experts who can produce highly credible findings. But we don’t really understand how they are produced and, more importantly, they rarely tell us why effectiveness is just so damn elusive.
The more effectiveness studies that are published by individual media channels, the less traction they will create within the industry overall. That prediction is based on the following equation: different metrics + different conclusions = apathy (or confusion).
Wouldn’t it be great if all the interested parties could get together and create a combined effectiveness study that aimed to not only offer definitive (and independent) proof of different media channels’ contribution to payback, but also provided valuable insights into how that payback could be optimised?
Of course! We’ve already been there, in a sense. In 2007, Les Binet & Peter Field provided us with their impressive analysis of almost 30 years’ worth of IPA Advertising Effectiveness Award entries, published as ‘Marketing in the Era of Accountability’, possibly the king of effectiveness studies.
It provided a feast of knowledge and more than a few challenges to traditional marketing practice. But the IPA database is based on the ‘best of the best’; effectiveness studies such as RAB’s ‘ROI Multiplier’ have the advantage of looking at a range of brands and campaigns, with a more consistent dataset.
The meta-meta-analyst in me wants to take these thousands upon thousands of campaigns that have been analysed in this way, reflecting all media channels equally and fairly, and see what happens when the data shakes out. Two things I predict:
1. Most of the media channels written off over the last decade or so will show positive returns on investment – indeed, such a study would boost advertiser confidence and investment in general.
2. It’ll probably never happen – I doubt a single media channel could fund something so ambitious, and there are few signs of media owners working together for the good of the media industry in general.
In the meantime, let’s think about what this plethora of effectiveness studies tells us about the big issues that face the way media works and how advertising payback can be optimised. Then, when we realise they can only take us so far, let’s think of a new way of doing them, so that the industry benefits from an equitable, comprehensive, long-term and wide-ranging analysis into what really drives advertising effectiveness.