Why do we need BARB? Sigh!
A series of blogs about the broadcast industry, narrated by David Brennan…
I was fortunate enough to attend MediaTel Group’s Media Playground 2012 event last week and enjoyed the lively debates, especially around the new screen opportunities and the value of data.
Unfortunately, I arrived late and so the Screen panel session was already in full swing – and as the room was packed full of delegates, I had to sidle my way to one of the few vacant seats right on the front row.
That wouldn’t have been too much of a problem except, not long after I sat down, the debate shifted to the perennial topic of ‘why do we need BARB?’. Apparently I sighed very audibly (thanks for pointing that out, Rhys!), which prompted a ripple of laughter.
As it happens, I was sighing because I’d just realised I’d left my mobile phone at home, but my opinions are apparently so well known that the sigh was easily misconstrued. Fair enough: if I hadn’t been so pissed off about my iPhone, I’m sure I would have sighed anyway, if not wept tears of frustration.
I’ve heard so much recently about BARB’s irrelevance to the digital media landscape of today that I feel I ought to add my voice to the case for its defence.
1. BARB is an accepted currency. It is rare that we get the advertisers, agencies and media owners all in agreement, but the structure of BARB is such that they all have a stake in its development and implementation. A £3 billion market needs a recognised currency, which is why the online industry is doing its best to replicate BARB via UKOM.
2. BARB stands alone. One of the frustrations of online research and analytics is the plethora of data sources, meaning buyers and sellers can pick and mix the data that most suits them. It creates confusion, contention and conflict, rarely to the satisfaction of either party.
3. BARB is constantly reviewed and quality-controlled, so that the recruitment, measurement and analysis of the data are all conducted to the highest standards and the accuracy and consistency of the data are optimised. Having sat on more BARB committees and working parties than I care to remember (the Rim Weighting Working Party still gives me nightmares, 20 years on!) I can vouch for the huge amount of work that goes into ensuring the quality of the outputs.
4. BARB is highly representative of the whole of the UK population, not just the online population – or worse, the tiny percentage of the population that decides to take part in online surveys.
5. BARB measures people, not clicks. As such, it enables us to understand the profile of an audience, measure the reach and frequency of campaigns and track individuals’ viewing over time: all hugely important for a display medium like television (there is a short illustrative sketch after this list).
6. BARB measures behaviour, not attitudes or estimates. The peoplemeter methodology means respondents aren’t asked to recall their viewing or to record attitudes or perceptions, both of which are subject to inconsistencies and mistakes. It merely asks them to press a button whenever they enter or leave a room while the TV set is on (and BARB coincidental surveys indicate they do that accurately).
7. BARB stands up to rigorous comparison with other respected data sources. For example, the IPA TouchPoints survey regularly shows a 99%+ correlation with the comparable BARB data, despite using a different methodology.
8. BARB is fit for purpose. Although it has been criticised for being slow to measure new forms of viewing, such as on demand, and cannot be deemed reliable in its measurement of individual programmes on the smallest channels, it measures the bulk of TV viewing accurately and reliably. It was interesting that, in the Screen debate, the on-demand aggregators also complained that BARB doesn’t yet measure their output (although plans are being developed), which rather suggests that even they see the point of BARB after all.
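To illustrate point 5, here is a minimal sketch of why person-level data matters. Everything in it – the panellists, the weights, the exposure log and the variable names – is hypothetical and of my own invention, not BARB data or BARB processing; it simply shows how records of who watched, rather than counts of what was clicked, let you derive audience profile, campaign reach and average frequency.

```python
# Hypothetical illustration of person-based measurement (not BARB's actual
# processing): from per-person exposure records we can derive campaign reach,
# average frequency and the demographic profile of those reached.

from collections import defaultdict

# Hypothetical panel: each panellist has a demographic profile and a weight
# representing how many people in the population they stand for.
panellists = {
    "p1": {"age_group": "16-34", "weight": 1200},
    "p2": {"age_group": "35-54", "weight": 1500},
    "p3": {"age_group": "35-54", "weight": 1400},
    "p4": {"age_group": "55+",   "weight": 1800},
}

# Hypothetical exposure log: (panellist_id, spot_id) pairs for one campaign.
exposures = [
    ("p1", "spot_a"), ("p1", "spot_b"),
    ("p2", "spot_a"),
    ("p4", "spot_a"), ("p4", "spot_b"), ("p4", "spot_c"),
]

# Count exposures per person.
per_person = defaultdict(int)
for person, _spot in exposures:
    per_person[person] += 1

population = sum(p["weight"] for p in panellists.values())

# Reach: weighted share of the population exposed at least once.
reached_weight = sum(panellists[p]["weight"] for p in per_person)
reach_pct = 100 * reached_weight / population

# Average frequency: weighted exposures divided by weighted reach.
weighted_exposures = sum(panellists[p]["weight"] * n for p, n in per_person.items())
avg_frequency = weighted_exposures / reached_weight

# Profile: how the reached audience splits by age group.
profile = defaultdict(float)
for p in per_person:
    profile[panellists[p]["age_group"]] += panellists[p]["weight"]

print(f"Reach: {reach_pct:.1f}% of the population")
print(f"Average frequency: {avg_frequency:.2f} exposures per person reached")
for group, weight in profile.items():
    print(f"  {group}: {100 * weight / reached_weight:.1f}% of those reached")
```

The weights are the crucial part: they are what a properly recruited and rim-weighted panel provides, and they are what turns a few thousand panellists into an estimate for the whole population.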
The most common criticism of BARB from the online industry is that a sample size of 5,000 is almost archaic in the age of big data. Such complaints betray their own ignorance: BARB’s panel comprises over 12,000 people in more than 5,000 households. That is enough to provide an accurate and consistent measure of most TV viewing, certainly the viewing that attracts the vast bulk of TV revenues.
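For anyone who doubts that, a back-of-the-envelope calculation helps. The sketch below is my own rough arithmetic, assuming a simple random sample of 12,000 people (the real panel design, with its weighting, is more sophisticated than this), and the function name and example ratings are purely illustrative. It shows the approximate 95% margin of error on a rating estimate.

```python
import math

# Rough illustration only: treat the panel as a simple random sample of
# n people and compute the 95% margin of error for a rating of p per cent.

def rating_margin(rating_pct: float, sample_size: int = 12_000) -> float:
    """Approximate 95% margin of error, in rating points, for a given rating."""
    p = rating_pct / 100
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    return 1.96 * standard_error * 100  # convert back to rating points

for rating in (20.0, 5.0, 0.1):
    print(f"A {rating}% rating is measured to within about "
          f"+/- {rating_margin(rating):.2f} rating points")
```

On these assumptions a 20% rating is pinned down to within roughly three-quarters of a rating point and a 5% rating to within about 0.4 of a point, while a 0.1% rating carries a relative error of more than half its value, which is consistent with the caveat above about individual programmes on the smallest channels.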
This is not to say that BARB shouldn’t evolve to match the changing demands of the digital media landscape, but so far it has managed pretty well. The main criticism is that it has been slow to measure on-demand viewing via other screens, but this still accounts for less than 3% of total TV viewing, so it has not been a priority until now. That said, BARB is already moving beyond its core objective of ‘measurement of in-home viewing via the TV set’.
Over the coming years I think we can expect to see BARB measure more forms of TV viewing, wherever they occur. To keep pace with constantly evolving viewing habits, we can also expect to see it fuse or merge with third-party data – perhaps from server data or separate research studies – to provide a ‘Silver Standard’ service for the less mainstream forms of viewing.
What we won’t see in the foreseeable future is a rival service based on ‘big data’ and very different methodologies. There has been talk recently of social TV services such as Zeebox providing alternative viewing measurement based on possibly hundreds of thousands of contributors. Good luck with that, I say – but until such a service can address the points I have raised in the case for BARB’s defence, I think we can safely say that BARB will be around for quite some time to come.