Online research: The good, the bad & the (sometimes) very ugly
A new series of blogs about the broadcast industry, narrated by David Brennan…
I was asked to contribute a three-minute review of the good, the bad and the ugly in media research at the recent annual MRG conference. Knowing that limiting my answers to just three minutes would be a major challenge, I decided to give the same answer for all three: online research. Needless to say, I still went over my allotted three minutes!
I didn’t just choose online research as the answer to all three questions in order to save time; I also believe it has had a phenomenal impact on the media research industry, indeed on market research as a whole, and that impact has been both positive and negative.
The good is easy. Online research has revolutionised the media research landscape by making the basic research we have always done much more cost-effective. I still remember the days when a couple of telephone surveys and a few focus groups would be all we could afford.
Online research has lowered costs dramatically, reduced respondent burden and enabled almost real-time reporting of results. It’s a long way from the days when we would pay £30,000+ to commission a fairly straightforward quantitative survey – and then have to wait well over a month just to get a basic cross-tab analysis.
Online research has enabled research that might never have been possible before. The IPA Touchpoints study is a case in point. With portable devices that can record and communicate data in real-time, it has revolutionised how we can track people’s media behaviour, without having to rely on their memories via paper diaries and at a fraction of the cost. There has been a great deal of innovation in online research recently and standards are undoubtedly rising.
With the cost efficiencies of online research put to good use, most media research departments also have more money to spend on the new research techniques that are opening up insights traditional methods could never achieve: so we have media owners, agencies and marketers regularly using neuroscience, implicit attitude testing, biometrics or ethnography to take us well beyond simply asking people questions and taking their answers at face value.
The bad is also easy. When research costs a lot of money, we put a lot more effort into making it accurate, reliable and credible. I would regularly pilot questionnaires, often accompanying the researcher so I could see and hear how respondents interpreted the questions and understood the flow of the survey. We would make sure the sample was representative and recruitment was quality-controlled. We would polish it like a precious stone; in financial terms it was much more valuable!
Now that online research is so cheap and available, it is no surprise that there is a great deal of throwaway research out there. It is easy and cheap to create a piece of research purely for the headline it will generate (it doesn’t help that the industry trade press does not have the skills to separate the good from the bad).
Samples are skewed, often towards the heavier online users, and respondents are self-selecting. The issue of ‘professional respondents’ has never gone away. I often still see recruitment notices for online panels outlining how much money they can earn. There is little thought given to the research context – for example, it used to annoy the hell out of me when I was at Thinkbox that an online sample would be used to ‘prove’ that online was the most effective medium. Data analysis is often basic and shallow.
The ubiquity of online research means we are in danger of suffering a data overload, with so much information flying at us that we become incapable of sorting the wheat from the chaff. If left unchecked, it can lead to a crisis of confidence in the research we carry out.
The ugly required a bit of effort on my part. In order to make my MRG case as convincing and well informed as possible, I went undercover. For a couple of weeks, I inhabited the sometimes shady world of online panels as a 42-year-old housewife called Gladys! I took part in online surveys for anything from sausage casserole sauces to connected television, two subjects close to my heart.
At the end of the process I felt I understood more about sausage casseroles than smart TVs, but that was because the ugly side of online research is in the questionnaire design itself. It was truly shocking: uninspiring (still far too many attitudinal grids, for example), highly repetitive, and with no sense of flow, relevance or logic. Even with my professional interest in the process, I just wanted it to be over with, and I'm sure my answers reflected that fact!
But, of course, it is not online research per se which is good, bad or ugly; it is the way we use it. I know that most people in media research are highly mindful of these issues and would never dream of using this fantastic new research vehicle so carelessly, but there are those out there who are perhaps more Lee Van Cleef than Clint Eastwood, and we should be prepared to run them out of town!