By Andrew Dalglish - 16th June 2010
We recently asked readers to pose their research questions on the B2B Marketing LinkedIn group. One question came back more than once: “In a B2B context, how can I ensure a survey is reliable?” A big question, but let me give it a shot.
Reliability boils down to whether the right questions have been asked to enough of the right people.
Survey questions need to be unambiguous, unbiased and asked in a standardised, objective format. Beware of three classic mistakes here:
The next challenge is to ensure the survey has accessed those whose opinions really matter. Ask yourself:
And finally, the perennial challenge: how many interviews are enough? One approach is to use a test of statistical precision known as the ‘margin of error’. This calculation indicates the degree to which answers might vary were the same survey to be repeated 100 times, e.g. a 3% margin of error indicates that in 95 out of 100 identical surveys the result would fall within 3 percentage points of the figure observed. Aiming for a low margin of error is the ideal approach, but it is not always practical in B2B surveys, where the pool of potential participants is small and not always accessible. An alternative, then, is to use the margin of error alongside four rules of thumb when specifying a target number of interviews:
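To make the arithmetic concrete, here is a minimal sketch in Python of the standard margin-of-error formula for a proportion at 95% confidence. The function names are illustrative, not from any particular library, and the optional finite population correction reflects the small-pool situation common in B2B surveys described above.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """Margin of error for a proportion estimate from n interviews.

    p=0.5 is the most conservative assumption; z=1.96 gives 95% confidence.
    If the total population is small (typical in B2B), pass it in to apply
    the finite population correction, which shrinks the margin.
    """
    me = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        me *= math.sqrt((population - n) / (population - 1))
    return me

def sample_size(target_me, p=0.5, z=1.96, population=None):
    """Interviews needed to achieve a target margin of error."""
    n = (z ** 2) * p * (1 - p) / target_me ** 2
    if population is not None:
        # Adjust downwards when sampling from a small, finite pool.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)
```

For example, roughly 1,000 interviews give a margin of error of about 3%, and if the entire addressable population is only 2,000 decision-makers, the finite population correction brings the target well below the textbook figure.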
Andrew has specialised in B2B research for over a decade and co-founded Circle Research in 2006. He is a columnist for B2B Marketing Magazine, a regular contributor to Research Live and frequent speaker at leading events such as the B2B Leaders Forum, Customer Experience Live and the Social Media World Forum. Andrew is a Chartered Member of the MRS, teaches the MRS B2B research course and holds an MA in Psychology from Aberdeen University alongside an MSc in Marketing from Strathclyde University.