
How to Avoid False Insights in Your Research

It's easier than you’d think to glean false insights in your research. Here are some of the common mistakes researchers make—and how to prevent them.

Words by Nikki Anderson-Stanier, Visuals by Allison Corr

Getting solid insights from a research project is the dream. There is no better feeling than writing up compelling insights, presenting them to your team, and watching them go out into the world to improve an experience.

But sometimes as I’m writing my insights, doubt begins to creep in. How do I know that this is, in fact, a true insight? What if this is a false positive or a false negative?

Below, I’ve gathered strategies to quell these anxieties and fears and increase the likelihood that your research insights are on point.

User research is not the answer

Many of the teams I work with believe my insights will give them the absolute answer to the problem they are trying to solve. However, I'm constantly reminding people that we cannot rely on user research to give us a concrete answer. This is especially true when it comes to qualitative data.

If we believe our insights are the 100% truth about a population, we won't use them wisely. Always remember that we are dealing with humans. Humans don't fit into the boxes we want to put them in. There are many confounding variables for people, and we need to keep that in mind.

User research is a guiding tool to help us think creatively with people (customers, non-customers, users, etc.) in mind. It is a data point to help teams make better decisions moving forward, and ensure they heavily consider unmet needs and pain points during the decision-making process.

Remind your teams that you are here to bridge the gap between users and the organization and help make decisions, not give a final yes/no answer.

Always remember that we are dealing with humans. Humans don't fit into the boxes we want to put them in.

Nikki Anderson-Stanier
Founder, User Research Academy

Check your biases

Interviewer biases are the ones we bring to a session as user researchers. Although they operate unconsciously, they are easier to mitigate because they originate with us, and we can more directly control for them. However, we must accept that we all have these biases and encounter them frequently. Here are the most common biases that can skew your results.

Confirmation bias

Definition: This is one of the most common and fickle biases a user researcher will encounter. We like data, quotes, or insights that confirm our existing hypotheses/beliefs and tend to ignore whatever challenges them.

Biased user research example: "How often would you use the 'transcribe video to text' option in Netflix?"

Rewrite: "Tell me about the last time you used a 'transcribe video to text' service." -> "In what situation did you use it?" -> "What was that experience like?"

How to avoid this:

  • Avoid asking about preference
  • Don't ask participants to predict whether or not they will "love" or "use" something
  • Use open-ended ways of framing questions
  • Ask how frequently a participant used a feature or performed an action in the past
  • Watch what they DO instead of only listening to what they say
  • Always question your insights during the synthesis process

Sunk cost fallacy

Definition: The more we invest in something emotionally, the harder it is to abandon it. We are much more likely to continue an endeavor, continue consuming, or pursue an option if we've invested time, money, or energy.

Biased user research example: "Did you stay with your current video streaming service instead of switching to Netflix because of the price increase?"

Rewrite: "What are some concerns about switching to a different video streaming service?"

How to avoid this:

  • Write down assumptions and feelings before a research session
  • Utilize the Jobs to be Done methodology, which helps to understand why people aren't switching products
  • Analyze and synthesize research with others to avoid favoring specific insights

Clustering illusion/bias

Definition: We believe inevitable "streaks" or "clusters" in data are non-random due to our inability to predict the amount of variability likely to appear in random data samples. In essence, we see patterns where they don't exist.

Biased user research example: We set out to understand why people unsubscribe from Netflix, and we believe it is because of a recent price increase. After five interviews (out of the planned 15), four participants have mentioned the price, and we decide this is sufficient evidence to rework the pricing strategy.

Rewrite: Finish the interviews before making decisions on why people are unsubscribing. Then, look at all of the evidence. For example, although many people mentioned price, there may be other potential—and more important—reasons for unsubscribing.

How to avoid this:

  • Analyze and synthesize research results with others (first individually, then come together to discuss as a group)
  • Conduct research with a diverse set of users
  • Consider all evidence equally, not just the pieces that confirm your beliefs/assumptions
  • Test with enough users

Get the right participants (and enough of them)

If you want to get insights about how parents-to-be plan the arrival of their child, it would be best to speak to those who will be parents soon, right? Or, if you want to understand the pet adoption process, you want to talk to those who have recently adopted a pet.

It feels obvious, but there are times when we are in such a rush to talk to users that we don't think through the exact criteria we need to get the best insights. Or we take a shortcut and do internal research with employees, something we should only rely on in very specific cases.

However, if we do research with the wrong participants, we end up with incorrect information.

For example, I worked at a hospitality company and researched how housekeepers get tasks from our digital platform. The only participants I could get a hold of were the heads of housekeeping, but I figured this would be fine. Spoiler: It wasn't.

These participants didn't use our platform and rarely performed housekeeping tasks themselves. So I was forced to ask them to hypothesize about their colleagues, leading to wobbly, secondhand information.

When we don't talk to the right people, our insights are significantly less likely to be valid.

Conduct mixed methods or triangulate your data

Whenever we deal with qualitative research, we have small sample sizes. Sometimes, a small sample size can call into question whether or not an insight is "important enough." Here are two ways I mitigate those fears about small sample sizes:

Mixed methods research

Mixed methods research combines qualitative and quantitative data to get a holistic picture of your customers. Quantitative research helps us understand the "what," while qualitative research is about understanding the "why." Often, we look at one side or another, but honestly, we need both. There are three main ways you can combine qualitative and quantitative data:

  1. An explanatory sequential design emphasizes quantitative data collection and analysis first, followed by qualitative data collection. We use the qualitative data to explain and interpret the quantitative results. For example, you use a survey to collect quantitative data from a larger group. After that, you invite some respondents for interviews where they can explain and offer insights into their survey answers.


  2. An exploratory sequential design starts with the qualitative research and then uses insights gained to frame the design and analysis of the subsequent quantitative component. For example, you run 15 interviews to understand customers' pain points and unmet needs. Then you follow up with a survey to prioritize and generalize those findings to a larger sample.


  3. A convergent parallel design occurs when you collect qualitative and quantitative data simultaneously and independently—and qualitative and quantitative data carry the same weight. They are analyzed separately and then compared or combined to confirm or cross-validate findings.


Combining qualitative and quantitative research methods allows you to feel more confident that you are highlighting the most valuable and critical insights.

Triangulating data

Triangulating data means pulling data from different sources. One way to triangulate is the mixed methods approach described above, but you can also bring in other data you already have.

Pulling in other data allows you to see whether and how your insight has come up in different contexts, and it adds supporting evidence.

A note on one-off insights

There may be a time when you stumble upon what you feel is gold. However, only one person mentioned that golden nugget of information. Should you report it? Check out this article for more advice on determining the validity and importance of one-off insights.

Take the time to synthesize

In an ideal world, I always say you should spend double the length of the interview on synthesis. That means a one-hour interview gets two hours of synthesis.

We don't always have time for the ideal, but we do have to synthesize. Skipping this step can leave you with unreliable insights, and rushing through analysis leads to superficial, questionable findings.

If you don't have time for full-blown synthesis at the end of a project, consider a small debrief after each session.

Overall, there is no surefire way to know whether our insights are perfect. Instead, we can take the above strategies into each project and hugely increase the likelihood that we are reporting the most vital and valuable insights to our team.

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs. 


To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
