FEATURE 

Internet research – myths, reality and somewhere in between

The advent of internet research tools – many of them free – has led to an explosion in their use as publishers, editors and marketers all aim to understand their readers better. Anthony Ray, a member of the Market Research Society, offers a few thoughts on what to look out for.

By Anthony Ray

The impact that the internet has had on the market research industry has been profound. The reasons are simple: the cost per completed survey is a fraction of that of a traditional postal survey or telephone interview, and the time between despatching the survey and analysing the results is measured in days rather than weeks or months. The figures for 1999-2002 tell the story: by 2002, internet interviews had increased 30-fold to a total of 743,000, while telephone interviews had seen a 25% decrease to 2.4m.

All this has led to a dramatic rethink in the way researchers look at the internet. Take TNS, one of the world’s largest research agencies. They own what they claim to be the highest quality panel in the US: 3m people online providing 18.5m interviews each year. In 1999, just 10% of TNS’s US panel turnover came from internet work. By 2003, this had risen to 70%.

Meanwhile back at the office…

I remember the potential of internet surveys first dawning on me five years ago when I was at the EIU (Economist Intelligence Unit). We were in the early days of putting together an online store, and I had lobbied hard for investment in a survey tool that would help us understand this new business model. The results were fascinating, giving us a rich base of information not only on existing customers, who were increasingly migrating towards online information sources, but also on a new audience with no prior experience of EIU information. Now there’s hardly a publishing company in the UK that hasn’t taken advantage of this method of reaching out to readers, many using free or inexpensive, easy-to-use internet tools such as Zoomerang and SurveyMonkey.

But it’s not all good news because, without the proper checks and balances, seriously misleading research can result – leading directly to inappropriate initiatives. A few of the more common pitfalls follow.

1. Bore your readers at your peril.

It’s amazing how quickly a piece of research can become a list of ‘would like to know’ questions emailed through from the various departments. And the wider the research ranges, the more cumbersome it becomes. Respondents behave very differently on the internet than they do with, say, printed surveys. Just bear the table below in mind next time a compendium of requests flies into your in-tray – and note what happens to respondents’ motivation and drop-out rate after 10 minutes.

Survey length        "Ideal lengths for surveys"    "Longest survey prepared to complete"
                     (cum % of respondents)         (cum % of respondents)
About 5 minutes      96%                            99%
6-10 minutes         76%                            96%
About 10 minutes     64%                            92%
11-14 minutes        33%                            70%
About 15 minutes     27%                            65%
16-19 minutes        9%                             43%
Over 20 minutes      1%                             20%

Source: Understanding the online panellist, P Comley (ESOMAR)

This can be a particular problem with advertiser surveys, which often become a slog through very similarly worded questions about awareness of various advertisers and image statements.

If the first survey you send to your readers is a bit of a marathon, consider what will happen not only to the response rate, but also to readers’ likelihood of replying to another of your surveys in the future. Let’s face it – would you bother again or would you press delete?

2. Beware of a rose-tinted view.

Have you ever felt that your internet and postal subscriber studies seem a bit too good to be true? It may well be that they are, due to the problem of self-selection.

Self-selection is a well-documented phenomenon that occurs when the onus of replying to the research lies entirely with the respondent. The people most likely to respond are those with a greater degree of ‘emotional attachment’ to your magazine than others. The agnostic, the ambivalent and especially the disengaged simply don’t bother: why should they? People with a real problem will respond, but normally they’re a small minority. So you end up with an in-built skew in your results. It’s a really common problem and it creates a false sense of security. A few thoughts about what you can do about it:

* Mix your methodology - use telephone researchers to target problem segments such as lapsed users. Inevitably this will bump up the cost of the research, but it’s the best way of ensuring you’re obtaining feedback from a good cross section of readers.
* Establish quotas for certain reader types (attitudinally or based on frequency of readership). A common flaw with cheap internet survey tools is that they provide little more than straw polls: a properly structured sample needs the ability to stop the survey once particular reader segments reach a set number of completed surveys. If you’re serious about your research, you need to invest in a survey tool that has this capability – the sketch after this list illustrates the idea.
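To make the quota idea concrete, here is a minimal illustrative sketch of the screening logic such a tool applies. It is only a sketch, written in Python for clarity; the segment names and targets are invented for the example, not taken from any particular survey product.

```python
# Minimal sketch of quota control: once a reader segment has hit its target
# of completed surveys, further respondents from that segment are screened
# out rather than inflating the sample. Segment names and targets are
# hypothetical examples.
quota_targets = {"subscriber": 150, "lapsed": 50, "trialist": 50}
completed = {segment: 0 for segment in quota_targets}

def accept_respondent(segment: str) -> bool:
    """Return True if this respondent should proceed to the full survey."""
    if segment not in quota_targets:
        return False                  # unknown segment: screen out
    if completed[segment] >= quota_targets[segment]:
        return False                  # quota full: politely close the survey
    completed[segment] += 1
    return True

print(accept_respondent("lapsed"))    # True until 50 lapsed readers have completed
```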

A related problem can also arise with editorial panels or advisory boards – invariably a small group of people with an atypical understanding of both the editor and the product. These panels can result in a bad case of ‘group think’ which is divorced from reality. They do have a useful role to play in the publisher’s armoury, but it pays to change your panel members often.

3. Isolate the different voices.

It’s important to look for the commonalities that bind your readers together. However, it’s just as important to make sure they really are commonalities – especially if you’re looking at one set of research results with no cross-breaks. A good example of this came when I was working with the editors on a risk service. They had got together with their advisory board and brainstormed some new additions to the service, and there was one they were really keen on. In came the research results, and this ‘banker’ came in about halfway down the rankings. We then segmented the findings and found that it had been received very well by analysts within commercial banks but, because they were relatively few in number, their voice had been lost. Vitally, this same segment accounted for well over half of the total subscription value. Once the voices had been disentangled, a completely different picture emerged – and that new knowledge transformed the product development priorities.
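The effect is easy to reproduce. Below is a rough illustration, using Python and pandas, of how a cross-break and a value weighting can reorder priorities; the segments, ratings and subscription values are invented purely for the example.

```python
import pandas as pd

# Invented example data: each row is one respondent's rating (1-5) of a
# proposed feature, plus their segment and annual subscription value.
responses = pd.DataFrame({
    "segment": ["corporate"] * 8 + ["commercial bank analyst"] * 2,
    "rating": [2, 3, 2, 3, 2, 3, 2, 3, 5, 5],
    "subscription_value": [1000] * 8 + [9000, 9000],
})

# Headline view with no cross-breaks: the feature looks mediocre.
print(responses["rating"].mean())                        # 3.0

# Cross-break by segment: the small, high-value segment loves it.
print(responses.groupby("segment")["rating"].mean())

# Weight each response by subscription value to reflect the revenue at stake.
weighted = (responses["rating"] * responses["subscription_value"]).sum() \
    / responses["subscription_value"].sum()
print(round(weighted, 2))                                 # about 4.23
```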

4. Know your margin of error.

Another common problem occurs when ranking the preferences of your readers. It’s important to keep in mind the relative size of your respondent base compared with the universe. Often (particularly with niche products that have relatively small audiences) the margin of error can be quite large. As a rule you should aim for a sample size that gives you a margin of error of around +/- 2-3%; even at that level, a result of 63% is statistically no different from 60%. Without this knowledge it’s easy to misjudge whether a result is significant or not. There are plenty of tools on the internet that can help you with the calculation; the one I use can be found at http://www.dssresearch.com/toolkit/default.asp
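If you want a feel for the arithmetic behind such calculators, the sketch below shows one common approximation: the margin of error for a proportion at 95% confidence, with a finite population correction for small universes. This is only an illustration of the standard formula, not a description of the DSS Research tool, and the sample and universe sizes are invented.

```python
import math

def margin_of_error(sample_size, population_size, proportion=0.5, z=1.96):
    """Approximate margin of error for a proportion at 95% confidence,
    with a finite population correction for small universes."""
    standard_error = math.sqrt(proportion * (1 - proportion) / sample_size)
    # The correction shrinks the error when the sample covers a large
    # share of the total audience.
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * standard_error * fpc

# Hypothetical example: 400 completed surveys from a universe of 5,000 readers.
print(round(margin_of_error(400, 5000) * 100, 1))   # roughly +/- 4.7%
```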

5. Forcing the question.

It’s easy to forget, but one of the most fundamental sources of bias lies not just in the wording of the question but in what you include (and leave out of) the answer options. The biggest thing to be aware of with any internet survey is that respondents will take the line of least resistance: if you ask them to choose between six options, they will. Few (around 10% on average) will bother typing anything in the ‘other’ box. The problem occurs if you read this as a true finding – it may well not be. Too often the choices are crafted from a product-centric viewpoint. It’s worth proofing your survey by re-reading recent qualitative research, refreshing yourself with the subscribers’ view of life and checking that the answer options reflect the marketplace, not your organisation’s preferences.

Getting to the root of an issue, rather than being fobbed off with vague phraseology, is similarly essential. Ever seen a piece of research from your telemarketing department claiming that 15% of lapses stopped their subscription because it was "too expensive"? This is a classic area where you need to be absolutely clear about what is meant. Is "too expensive" one of the pre-coded responses used by the telemarketers? Or is it a catch-all summary phrase used to encapsulate all manner of related problems? Read the transcripts of the telephone calls and you may well find many different facets behind the issue. The real root cause could be price, other titles meeting needs better, substitution through internal circulation lists, changing user behaviour or a marketing offer that has undercut the chances of renewal.

The danger in accepting vague phrases like this at face value is all too real – leading directly to misinterpreted research and inappropriate actions being taken.

6. Context is everything.

It is surprising how many surveys go straight into asking questions about the product. Context is so important: without it, you have nothing external to benchmark against, so you cannot tell whether your results are good or mediocre. Imagine you’re trying to gauge reader interest in a range of editorial topics. By itself, this is not a very valuable empirical finding.

Now imagine you used that same list and asked readers how important each topic was to them as an aspect of their information diet, establishing market need.

You can then gauge how well your product is doing on the dimensions that really matter to your customers. This establishes your performance. Are you strong in the areas that count? Or are your strengths in the less important areas of market needs?

Then you can use that same list and ask people to choose between your magazine and your closest competitor – who does best? This enables you to assess your competitive edge. Together, these three questions – market need, performance and competitive edge – can have a profound impact on customer understanding.
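One simple way to pull the three questions together is a gap table that puts need, performance and competitive edge side by side for each topic. The sketch below is only an illustration; the topics and scores are invented, and any real survey would supply its own.

```python
import pandas as pd

# Invented example: mean scores (out of 5) for each editorial topic on the
# three questions described above.
topics = pd.DataFrame({
    "topic": ["Regulation", "Market data", "People moves", "Technology"],
    "importance": [4.6, 4.2, 2.8, 3.9],        # market need
    "our_performance": [3.1, 4.3, 4.0, 3.5],   # how well we cover it
    "competitor": [4.0, 3.8, 3.2, 3.4],        # closest rival's score
})

topics["gap_vs_need"] = topics["our_performance"] - topics["importance"]
topics["gap_vs_rival"] = topics["our_performance"] - topics["competitor"]

# Sort so the biggest shortfalls on the topics readers care about most come first.
print(topics.sort_values(["importance", "gap_vs_need"],
                         ascending=[False, True]).to_string(index=False))
```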