Everything you ever wanted to know about marketing research surveys (but were too afraid to ask)

Posted by Andrew Rogerson | 30-01-19


We talk a lot about survey-based thought leadership here at Grist, but the word 'survey' can conjure up different ideas for different marcomms professionals. Some think throwing something together on SurveyMonkey to send to a small client list will give them statistically robust information; others think they need to spend a fortune to get hundreds of thousands of responses before anyone will take them seriously.

When we work with clients on survey-based thought leadership projects, we bring in the professionals at Coleman Parkes. At our recent roundtable sessions, Coleman Parkes’ Stephen Saw ran a ‘survey surgery’ where attendees were encouraged to ask any and all of the questions they had on the topic of conducting surveys for marketing research. 

We thought Stephen's advice was so good we asked him to share the answers to some of his most-asked questions in a blog, and here they are:

How do you generate questions?

This is our most-asked question, and there is no simple answer. There are lots of different ways of getting from A to B. Some clients generate questions by committee, ending up with a list of dozens of unrelated questions added by random parts of the business. That won’t get you anywhere but the land of headaches and frustrated respondents.

Question design, though, is the most important part of your survey. You need the right questions to attract and engage respondents, and you need the right questions to get the right data to feed the right reports. So, we say to start with the answers you want to achieve from the survey. Think about the stories you want to get out of the work, or what you want to be able to tell your clients, and work backwards to frame the questions. Reverse-engineer the process.

One thing I get asked all the time is: “Can you save us from ourselves?” Clients often have too many stakeholders involved and end up with a long list of questions, but they’re unsure how to whittle it down without hurting egos. We recommend having a small stakeholder group work on the survey, and prioritising ruthlessly - most questions either won’t inform the final report or have been asked before.

How do you know who to send the survey to and how do you recruit them?

Question development is the most important thing here, too. When it comes to recruiting respondents, the questions you ask them are very important. If you’re interviewing the C-suite, for example, they won’t respond to a cash incentive; they respond because they’re interested in the topic. And there are limitations on what you can and can’t ask — that’s where we step in. We need to be sensitive to the respondent; if we upset them, they’ll never answer another survey.

We recruit our respondents in various ways. We do have a core list of respondents across most sectors, industries and locations, and we also have access to the same panels of respondents as all the big research firms. We can also include client and prospect lists if a particular client wants to ensure their own influencers are covered.

We make sure we give our database a rest, too, so they’re not being asked to answer surveys every week. We don’t pester people with surveys, and we do incentivise responses. Knowing the right incentive is key. Businesses will be given proprietary research or a copy of the research report that the survey is for - with the client’s permission, of course - while consumers will often be offered a cash voucher.

When we get a brief, we look at the profile of ideal respondents and do a feasibility test to make sure we have enough in our database to make the answers statistically robust. All respondents consent to our research twice before taking part - at the initial recruitment stage and at the start of the survey - and we don’t recruit by cold call.

What’s the typical attention span of a respondent?

It depends on the level of seniority, generally. C-suite respondents typically engage better over the telephone than online; they’ll spend up to 15 minutes on a call – you could push them to 20 minutes if it’s a really engaging topic. Lower-level managers and consumers may spend a bit longer with you online (around 20-25 minutes), but 15 minutes is a good rule of thumb.

There are lots of tricks and functions to make sure respondents are engaged in the survey and giving quality responses to each question, not just saying yes to everything. For example, we randomise the order in which options appear. We also time every question to find the average speed to an answer; if an answer comes too fast or too slow, we know the respondent is not fully engaged. You can tell when someone’s not engaged on the phone, but online you need controls in place to make sure the sample is behaving and completing the survey fully. Whatever the methodology, telephone or online, we always quality-check the data at respondent level, and if a respondent seems unengaged, we remove them from the data set. It’s a time-consuming process, but it ensures quality data in the final data set.
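For the more technically minded, that timing check is easy to picture in code. This is a minimal, hypothetical sketch, not Coleman Parkes’ actual tooling; the data shape and the “too fast/too slow” thresholds here are assumptions for illustration:

```python
from statistics import mean

def flag_unengaged(timings, low=0.3, high=3.0):
    """Flag respondents whose answer times deviate too far from the
    average speed for each question (too fast suggests clicking
    through; too slow suggests distraction).

    `timings` maps respondent -> list of per-question answer times
    in seconds; `low` and `high` are multiples of the per-question
    average that count as suspicious.
    """
    n_questions = len(next(iter(timings.values())))
    # Average time to answer each question, across all respondents
    avg = [mean(t[q] for t in timings.values()) for q in range(n_questions)]
    flagged = set()
    for respondent, times in timings.items():
        for q, t in enumerate(times):
            if t < low * avg[q] or t > high * avg[q]:
                flagged.add(respondent)
                break  # one bad question is enough to flag them
    return flagged
```

Anyone flagged this way would then be reviewed and, if they really are unengaged, dropped from the data set.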

For example, if we need to get 200 respondents we purposefully over-recruit to about 250 and go through anonymised respondent-level data, removing respondents that don’t pass our quality checks. This ensures we have 200 good quality responses. If we go under 200, we go back to the field and repeat the process until we have 200 good quality responses.
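That over-recruit-and-filter loop can be sketched as a simple process (the function names here are hypothetical; real fieldwork is of course far more involved):

```python
def fill_quota(target, recruit_batch, passes_quality_checks, over_recruit=1.25):
    """Keep fielding until `target` respondents pass quality checks.

    `recruit_batch(n)` returns n raw responses from the field;
    `passes_quality_checks(r)` applies the engagement checks;
    `over_recruit` is the deliberate over-sampling factor
    (e.g. recruit ~250 to end up with 200 good responses).
    """
    accepted = []
    while len(accepted) < target:
        needed = target - len(accepted)
        batch = recruit_batch(int(needed * over_recruit))
        accepted.extend(r for r in batch if passes_quality_checks(r))
    return accepted[:target]
```

The loop mirrors the process described above: over-recruit, strip out anything that fails the quality checks, and go back to the field until the quota of good responses is met.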

What is the value in going back to questions asked in previous years to provide a different angle or comparison?

Otherwise known as trend analysis, tracking questions are great for building a story over time. When you are tracking questions year on year, for example, the most important thing is to ask the same question in the same way, giving the same options to the same type of people, to keep yourself safe. People will look for any reason why a trend may have formed, especially if it goes against common thinking, so you want to make sure you’re comparing like with like. Treat it like a control-and-test experiment in a science lab - it’s essential the tests are fair and comparable. It’s also really important you have robust data to make an informed analysis. Make sure your confidence level is 95 to 98% and the questions are watertight.

But tracking does make for very good stories. The media loves a good statistic that shows movement in a group of people or a sector, so if you’re looking for press coverage this could be a good way to go. It can also help your sales team to drop statistics into pitches and sales outreach - if 80% of businesses are more worried about Brexit implications now than they were two years ago, that’s something everyone in the company, from sales and marketing to operations, can work with.

What is “statistically viable”?

As a rule of thumb, you need 75-100 respondents in each cell that you want to compare to another cell. To compare one sector to another and infer findings, you need around 100 in each sector, but if the market is very small, you can go lower.

It’s also important to remember that research can only ever be at most 99% accurate unless you interview literally everyone in the industry or market. Research of this kind is about using statistics to say we’re confident the data is an accurate reflection of the market, and that confidence increases as the sample size increases. Another way to look at it: you should be able to repeat the same survey and get the same responses nine times out of ten.

That said, the incremental increases in confidence may not be worth the added expense. If you can get 500 respondents with 98% accuracy, but going up to 2,000 will only get you 99% accuracy, is it worth spending four times as much for much the same result? We always say you should never get upsold too much on sample size, as it might not be worth it. But if you are looking to slice and dice the data to compare, for example, different countries, sectors, company sizes or business functions, you may well need a large sample to start with, and in some cases 2,000 respondents is the perfect sample size.
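Those diminishing returns fall straight out of the standard margin-of-error formula for a proportion. A quick illustrative calculation (assuming a 95% confidence level and the worst-case 50/50 answer split, which gives the widest margin):

```python
from math import sqrt

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error for an estimated proportion p at sample size n.
    z=1.96 corresponds to a 95% confidence level; p=0.5 is the
    worst case (widest possible margin)."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 500, 2000):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
# n=100:  ±9.8%
# n=500:  ±4.4%
# n=2000: ±2.2%
```

Quadrupling the sample from 500 to 2,000 only halves the margin of error, which is exactly why it’s worth questioning whether the extra spend is justified before you buy a bigger sample.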

The best thing to do is identify what analysis you need for your reports before the survey launches, and work out an adequate sample from there. Again, reverse-engineer the process before launching the survey. You don’t want to find out afterwards that the sample is too small for what you need.

