
Comparable and trusted metrics: something to sing about

Ahead of this week’s Future TV Advertising Forum, BARB’s Paul Smith explains why establishing the right metric for measuring viewing behaviour online is so important.

There are many voices in a choir, each with their own range and tone colour. They’re all different, but when they come together, they can create something truly powerful.

If you’re reading this, you’ll know that the various TV players owned by the UK broadcasters feed their individual parts into BARB’s TV Player Report, the joint-industry, census-based picture of online consumption of television content.

This report sits alongside our measures of traditional broadcast media, and the numbers behind it will feed into the cross-platform measurement that we will be generating through Project Dovetail.

BARB has a proud reputation as a provider of gold-standard, rigorously controlled, trusted figures that reflect genuine viewing behaviour, so it’s vital that we are transparent about how the TV Player Report is produced.

So, with the help of our stakeholders, we designed the key measure in the Report, Average Programme Streams, to provide numbers comparable with the Average Audience metric that our customers already use to establish the success of their programmes.

Calculating average programme streams for a programme is simple: take the total play time for the programme over the reporting period and divide by the programme’s length. So 1,000 minutes of total play time for a 10-minute programme gives 100 average programme streams.
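
For the numerically minded, here is that arithmetic as a minimal sketch; the function and variable names are illustrative, not BARB’s own.

```python
def average_programme_streams(total_play_minutes: float,
                              programme_length_minutes: float) -> float:
    """Total play time over the reporting period, divided by programme length."""
    return total_play_minutes / programme_length_minutes

# 1,000 minutes of total play time for a 10-minute programme:
print(average_programme_streams(1_000, 10))  # 100.0 average programme streams
```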

Like average audience, this metric relates audience to duration and so allows measurement of a programme’s true popularity, going beyond more cursory numbers such as a simple play time aggregation or a count of the number of times the PLAY button was hit.

It neatly sidesteps the perennial conversations about devices versus people, too – though we’ll have plenty to say about people when Project Dovetail comes on stream (forgive the pun).

Crucially, the average programme streams metric is approved by JICWEBS, the industry body overseeing online media measurement standards. This means that the wider online media industry has reviewed and approved it as an open, transparent and auditable best practice metric.

We’re now pleased to report progress on the next logical step – reporting on the consumption of the advertising that accompanies programmes on TV players.

The metric we intend to use, average ad streams (sound familiar?), and the approach we have taken have just been approved as an industry standard by JICWEBS. Again, the metric is based on an average duration audience and so is comparable with the TVRs that our customers are used to calculating for television campaigns.
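
The arithmetic is the same as for programme streams, applied to an individual ad. A hypothetical sketch, with invented numbers:

```python
def average_ad_streams(total_ad_play_seconds: float,
                       ad_length_seconds: float) -> float:
    """Total play time for an ad, divided by the ad's length."""
    return total_ad_play_seconds / ad_length_seconds

# 90,000 seconds of total play time for a 30-second spot:
print(average_ad_streams(90_000, 30))  # 3000.0 average ad streams
```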

We’re determined to keep things simple and compatible, so all you need to do is be a BARB subscriber running video ad campaigns on video ad servers that support the well-established VAST 2.0 protocol.
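
For readers who haven’t met it, VAST 2.0 is the IAB’s XML format for describing a video ad and its tracking beacons. The sketch below shows the shape of a minimal response and how the tracking events that underpin play-time measurement are exposed; the URLs and values are placeholders, not a depiction of BARB’s own collection pipeline.

```python
import xml.etree.ElementTree as ET

# A minimal VAST 2.0 response (illustrative values only).
vast = """<VAST version="2.0">
  <Ad id="12345">
    <InLine>
      <AdSystem>ExampleAdServer</AdSystem>
      <AdTitle>Example 30-second spot</AdTitle>
      <Impression>https://example.com/impression</Impression>
      <Creatives>
        <Creative>
          <Linear>
            <Duration>00:00:30</Duration>
            <TrackingEvents>
              <Tracking event="start">https://example.com/start</Tracking>
              <Tracking event="midpoint">https://example.com/midpoint</Tracking>
              <Tracking event="complete">https://example.com/complete</Tracking>
            </TrackingEvents>
          </Linear>
        </Creative>
      </Creatives>
    </InLine>
  </Ad>
</VAST>"""

root = ET.fromstring(vast)
duration = root.findtext(".//Linear/Duration")  # "00:00:30"
beacons = {t.get("event"): t.text for t in root.iter("Tracking")}
print(duration, sorted(beacons))  # these beacons are what let play time be measured
```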

Data will be collected from the broadcasters and aggregated by BARB’s research partners. We are working on the timetable to publication as part of Project Dovetail and will have more news on this in the New Year. A whole new ensemble singing a whole new chorus.

Having comparable metrics is only one part of getting the measurement and reporting of online viewing right. All published TV players must have passed an audit by the industry-owned media auditor ABC or they can’t be part of the report.

Why this reliance on audit? Delivering data that are audited alongside other industry bodies is new to BARB – further evidence that industry-wide collaboration is needed to manage the complexity now inherent in media measurement.

ABC is a perfect partner because it understands our needs and, as an industry body reporting to JICWEBS, provides a cornerstone of assurance in the media industry.

We use their expertise at a number of levels to give overall confidence in the tools and the system, so that you can have even greater confidence in the data.

Vitally, they check that each different player – each singer in our ensemble – copes with a variety of use cases, reports play time accurately and logs appropriately filtered, human-generated traffic.

But they also look at the central systems that collect the data and generate the final numbers to ensure these systems are reporting accurately and do not exhibit any usage patterns that raise suspicions the system was “gamed” – listening for any strange notes in the overall chord.

Let’s be realistic, though. We’re measuring devices, not people. The industry is always in an arms race against clever coders finding new ways to generate non-human viewing. So we rely on JICWEBS’ robust standards, backed up by the system’s proprietary filtering methods.
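
Those filtering methods are proprietary, so nothing here is BARB’s actual code, but the flavour of one widely used technique – screening records against known robot user agents, as in the IAB/ABC spiders and bots lists – can be sketched. The patterns and record format below are invented for illustration.

```python
import re

# Illustrative patterns only; real industry lists run to thousands of entries.
ROBOT_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in (r"bot\b", r"crawler", r"spider", r"headless")]

def is_human(user_agent: str) -> bool:
    """Keep a stream record only if its user agent matches no robot pattern."""
    return not any(p.search(user_agent) for p in ROBOT_PATTERNS)

records = [
    {"ua": "Mozilla/5.0 (iPad; CPU OS 9_1 like Mac OS X)", "play_minutes": 30},
    {"ua": "Googlebot/2.1 (+http://www.google.com/bot.html)", "play_minutes": 30},
]
human_minutes = sum(r["play_minutes"] for r in records if is_human(r["ua"]))
print(human_minutes)  # 30 -- only the human-looking stream is counted
```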

This means that when you see a figure in the TV Player Report, you can be confident that it’s complying with industry standards that have been developed to be in tune with established television audience metrics. You can also be confident that the traffic has been appropriately filtered.

The TV Player Report is up and running and we expect it to become an increasingly comprehensive measurement of viewing through online TV players.

Watch this space, and expect us to measure you watching it.

Paul Smith is project manager at BARB.
