
Why cognitive testing reaps ROI for market researchers

Originally published on Quirks.com

Imagine that the local dairy farmers in your area want to know how much milk people drink in a week. So they send out a questionnaire that asks, “In a typical day, how many glasses of milk do you drink?”

John and Selma both drink 12 ounces of milk every day. Selma, however, thinks of a “glass” as exactly 8 ounces, so she says she drinks 1.5 glasses of milk each day. John, on the other hand, thinks of a “glass” as the object from which he drinks. He is not thinking about how much milk the glass holds. He says he drinks one glass per day. Although John and Selma drink the exact same amount of milk a day, they provide very different answers because they interpreted the word “glass” differently.

Tamara also says she consumes one glass of milk a day. But when she reads the word “milk,” she thinks of soymilk because that’s what she drinks. But the dairy farmers just want to know about cow milk consumption, so the answer Tamara gives isn’t quite what they were asking about, either.

And the question is still loaded with other ambiguities. Some respondents may report the milk they use in cereal or cooking, while others will not because they do not consider that “drinking.” Most people will be able to answer the question – and the answers will look reasonable – but the data won’t be very accurate.
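
To make the problem concrete, here’s a minimal sketch – the numbers and interpretation rules simply restate the scenario above, nothing more:

```python
# Illustrative sketch (hypothetical interpretations): identical behavior,
# divergent answers to "In a typical day, how many glasses of milk do you drink?"
DAILY_OUNCES = 12

# Selma: a "glass" means exactly 8 ounces.
selma_answer = DAILY_OUNCES / 8     # -> 1.5 glasses

# John: a "glass" is the physical object he drinks from, whatever it holds.
john_answer = 1                     # -> 1 glass

# Tamara: "milk" means the soymilk she actually drinks, not cow milk.
tamara_answer = 1                   # -> 1 glass, of a product the farmers didn't ask about

print(selma_answer, john_answer, tamara_answer)   # 1.5 1 1
```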

As researchers, we like to ask questions to measure people’s attitudes, opinions and behaviors. But if respondents don’t understand the questions in the same way, the data that’s meant to drive action and insight won’t be reliable enough to do so – or worse, it will provide the wrong insight and inspire the wrong action.

So how do we avoid wasting time and money on faulty data? One way is to invest in cognitive testing.

What is cognitive testing?

Cognitive testing – or cognitive interviewing, as it’s also known – is a qualitative evaluation of how respondents understand and answer questions. It helps researchers determine whether the questions need to change to improve data quality.

During testing, a researcher trained in cognitive interviewing methods administers the questions to volunteer participants who are representative of the research’s target population. Participants are then instructed to think aloud as they answer the questions so the cognitive interviewer can understand how they interpreted the question and came up with their answer.

Once a respondent answers the question, interviewers will administer follow-up questions like, “What does this question mean in your own words?” or “How did you come up with your answer?” to further understand participants’ cognitive processes as they answer the questions.

Cognitive testing is like insurance for market research

Cognitive testing is a common practice among government and academic researchers, but at times market researchers skip over this step. Unfortunately, this costs both time and money.

Market research tends to be a bit faster-paced than government or academia. Speed is often such a priority that market researchers are hesitant to add new processes or extra steps that might slow things down.

As market researchers, we should think of cognitive testing as insurance for our research. Yes, it may require more time and investment, but not doing it may result in research with skewed data that will have ultimately been a waste. Without cognitive testing, there’s no guarantee the data is measuring what it’s meant to.

Faulty data may also surface a relationship that doesn’t actually exist, and an organization could spend significant amounts of money targeting ads to a certain segment of people without seeing any real growth. Market researchers who invest in cognitive testing will see a compelling return on that investment and will avoid the negative consequences of using data that doesn’t accurately reflect what they were really trying to ask.

Case in point: Let’s say your company sells physicians products that make their jobs easier. You conduct a market segmentation study to figure out what type of physicians a particular product will appeal to most. You ask physicians about their patient volume because you believe that physicians with a high caseload will be more interested in your product than others.

You conduct your study but find that patient volume is not related to intent-to-purchase. So you don’t target physicians with high caseloads when marketing your product. But in reality, you’ve missed a relationship between patient volume and intent-to-purchase because of a poorly worded question.

You asked, "How many patients do you see in a typical month?" But for a physician, "patients" could mean patient visits or unique patients. A physician may see the same patient multiple times a month because of chronic conditions or follow-up care – and thus might have 30 unique patients but 60 patient visits.

So one physician may say he saw 30 patients, while another says she saw 60. As a result, the data has a lot of noise, harming your ability to see how variables relate to key outcomes of interest, like intent-to-purchase.
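
To see how much a wording ambiguity like this can cost, here’s a minimal simulation sketch – the sample size, caseload range and visit multipliers are hypothetical, chosen only to illustrate how interpretation noise weakens an observed correlation:

```python
# Illustrative simulation with made-up numbers: ambiguous wording ("patients"
# vs. "patient visits") adds noise that hides a real relationship between
# caseload and intent-to-purchase.
import random
import statistics

random.seed(0)

def pearson_r(xs, ys):
    """Plain Pearson correlation; enough for this sketch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 500
unique_patients = [random.randint(20, 120) for _ in range(n)]

# Assume intent-to-purchase (0-10 scale) genuinely tracks true caseload, plus some noise.
intent = [p / 120 * 10 + random.gauss(0, 1.5) for p in unique_patients]

# Half the physicians report unique patients; the other half report total visits
# (roughly 1.5-3 visits per unique patient).
reported = [p if random.random() < 0.5 else p * random.uniform(1.5, 3.0)
            for p in unique_patients]

print("true caseload vs. intent:      ", round(pearson_r(unique_patients, intent), 2))
print("reported 'patients' vs. intent:", round(pearson_r(reported, intent), 2))
# The second correlation comes out noticeably weaker: the relationship exists,
# but the ambiguous question buries it in measurement noise.
```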

How to write better questions

Writing better questions – and conducting efficient cognitive testing – requires understanding how respondents think about and answer those questions. In the book “The Psychology of Survey Response,” the authors identify four major cognitive processes that affect a participant’s response:

Comprehension: How well do respondents understand the question and the words you are using? Avoid long and complex questions, as well as industry terms with which your respondents aren’t familiar.

Ask yourself how well your respondents will understand the intent of the question you’ve asked. Let’s say you ask a participant how satisfied they are with their marriage, then you follow that up with another question that asks them how satisfied they are with their life overall. The respondent may read the second question and think, “Well, they just asked about my marriage, so they must want to know about all the other aspects of my life, not including my marriage.” Make sure respondents aren’t interpreting questions differently than you intended because of their order or phrasing.

Retrieval: Are you asking people about events or feelings that are so far back your respondents may not adequately remember them? Or are you asking them about a time period so short they may inadvertently include memories from a time much further back than you wanted to know?

Consider whether your question needs a reference period, and, if so, make sure your reference period is appropriate for what you’re asking. Ask the question in a way – or preface it with other questions – that better activates the memory you want to ask about. Market researchers commonly ask questions that set unrealistic expectations for what customers can remember about purchasing decisions. If you’re asking about the last time someone bought socks (or another non-salient product), respondents may not adequately recall much of their decision-making process or experience.

Judgment: Respondents may make a judgment (often subconsciously) about what’s relevant to your question, how much energy they want to expend thinking about it or how honestly they want to answer. Let’s say you ask participants how often they exercised in the last week or how many hours of television they’ve watched. It’s likely they’ll self-edit their answer toward something more socially desirable or acceptable – whether intentionally or not.

Response: How well do the response options you provide match the answer the respondent has in mind? If you ask college students, “Were you satisfied with the quality of your college education?” and the response options they’re provided with are “yes” or “no,” you’re likely to miss out on relevant data. Make sure you provide a way for those who feel somewhere “in between” to answer truthfully.
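
As one illustration, the college-education question could be restructured around a balanced scale – the five labels below are a common satisfaction scale offered as an example, not a prescription:

```python
# Sketch: the same question with a balanced scale instead of a forced yes/no.
question = {
    "text": "How satisfied were you with the quality of your college education?",
    "options": [
        "Very dissatisfied",
        "Somewhat dissatisfied",
        "Neither satisfied nor dissatisfied",
        "Somewhat satisfied",
        "Very satisfied",
    ],
}

# A yes/no version forces everyone who feels "in between" to misreport;
# the scale above gives them an honest place to land.
```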

How to begin cognitive testing

If you’re worried about the time and cost associated with cognitive testing, start small. Conduct a round of cognitive testing with just five people. Once stakeholders realize how cognitive testing improves the data you collect, they may be more eager to invest.

As you graduate to more advanced cognitive testing, test in iterative phases with about 10 participants at once. Then, revise your questions and test again. Keep testing until you stop learning something new.

When you recruit participants, think about your target population and the types of people who are likely to have different experiences that may affect how they understand and interpret questions. Doing so usually means gathering a diverse group or even forming subgroups that are particularly relevant to your research.

When you begin testing, use questions like the following to ask respondents about their thought processes:

  • Can you tell me in your own words what this question is asking?
  • How did you come up with your answer?
  • What does that word or phrase mean to you as it’s used in this question?
  • What makes you say that you are very satisfied with your most recent purchase?
  • How well does that question apply to you?
  • How easy or difficult was that to answer? If it was difficult, why?
  • Were the answer choices provided sufficient, or would you have liked to answer differently?
  • How sure are you of your answer?

You may start to notice the pitfalls in your questions before you even get to the testing phase. And even if you don’t, your testing is likely to catch them before you spend time and money on research that results in faulty data.

And as you invest in cognitive testing, you’ll begin to see a significant, tangible return on investment as you act with confidence on the insights you uncover.


Learn more in our handbook of survey question design