Focus groups can often provide too much focus
Getting people to talk in depth about a particular subject or piece of content can be useful, but we should never expect that sixty minutes of talking with a bunch of random individuals will uncover anything significant, and in many cases, focus groups hold the potential to do more harm than good, says The Media Native. So how should we go about gaining valuable insight?
I’ve been celebrating quite a lot over the past week. My beloved Man United won a record 20th league title and I have finally launched a research product that I initially developed exactly 20 years ago. I won’t dwell on the former, but the latter has got me thinking about how we use focus groups.
In the early 1990s, I was in charge of audience research for the ITV network. A big part of my role was to provide research support for the programme commissioners, who were spending hundreds of millions of pounds on original programmes based primarily on gut feeling. Of all the new programmes that made it to broadcast in the first place, only around one in three made it to a second series.
The standard response to a particularly unusual, intriguing or politically sensitive programme pitch would be “let’s run a few focus groups and hope for the best”. Occasionally we would have a finished pilot to show, in which case the results could be useful. More likely, we would have a few animatics and some pictures of the talent, and the six to eight participants (why is it always six to eight?) would fill an hour with ‘insights’ that may have been focused but were rarely consequential.
I had a problem with using focus groups to help us decide how to spend all that money; they were based on a small sample (even smaller if you correct for the one or two most forthright individuals in each group) and often yielded inconclusive and sometimes contradictory results, which the programme commissioners loved because it allowed them to ‘bend’ the results to whichever conclusion they had already drawn.
But my biggest concern was with the term itself: focus groups. Why did we spend an hour or more focusing intently on a decision to view (or not) that would be taken in seconds in real life, with very little thought or focus attached? It didn’t make sense.
We’ve all sat through them: those focus groups where the poor moderator struggles to fill the allotted time with meaningful discussion that doesn’t constantly stray into repetition, or where a few outspoken individuals create a sense of consensus that you just know doesn’t exist. They are often held in overheated rooms in suburban streets, or in front of one-way mirrors that cannot conceal the whoops of delight from the room behind whenever a respondent says something positive about the brand, product or content in question.
In fact, I’d go so far as to say that, in the majority of cases, focus groups hold the potential to do more harm than good. It is no coincidence that the IPA meta-analysis conducted by Les Binet and Peter Field showed a very weak, often negative relationship between campaign pre-testing and advertising effectiveness. Many of those pre-tests would have been conducted using focus groups or some associated methodology.
I accept that getting people to talk in depth and at length about a particular subject may be useful, if that subject is important to them and the decisions they take around it involve plenty of rational thinking and in-depth analysis. Issues around health, perhaps, or even politics, maybe fashion (for some) and sport (for others), are all subjects that might make the grade: important enough in people’s lives that respondents are both able and more than willing to discuss them at length.
But we can never expect a piece of content, or even a brand, to hold such a central place in our cognitive priority list, so we should not expect that sixty minutes of talking about such topics with a bunch of random individuals will uncover anything significant about their place in those respondents’ lives.
The research product I referred to in the opening paragraph is a means of evaluating new programme ideas by testing their appeal within a meaningful and relevant context: respondents are asked to create their own perfect schedule of fictional programmes, and then to explain why they made those decisions.
It provides benchmark data, ‘what if’ scenarios, qualitative outputs and many comparison points, based around a sample of 1,000+ people. We launched it in the UK and Sweden (I’m working with Swedish research company NEPA) for two major broadcasters, and the results have been encouragingly reliable and valid. Most importantly, they have provided lots of deep insight and a positive track record so far of predicting future success. And not a focus group in sight.