Evolve Your Usability Testing with These Innovative Approaches

With some creativity, your usability testing practice can be more efficient, innovative, and customer-centered. It might actually start to be more fun, too.

Words by Taylor Klassman, Visuals by Thumy Phan

As someone who studies researchers for a living, I’m convinced that usability testing isn’t just one thing. And that’s a relief, because usability testing has never really been the thrill of the job for me, if you catch my drift.

It is, however, an invaluable tool for any product/UX function. Its absence is responsible for many a frustrating interface (you know the ones).

At its core, usability testing is a form of research for evaluating the usefulness and usability of a product, service, or experience. Usefulness helps us understand how well a product aligns with users’ needs and expectations. Usability is understanding whether people can accomplish their goals using that product. And researchers, like creatives in any field, have expanded this method beyond its core.

For those of you who deploy usability testing in a very rigorous and purely quantitative or metrics-based way, I see you! There is a time and place for “pure usability testing,” and I am not proclaiming an “end to usability testing” but rather daring to hope for its expansion.

An industry in flux

The research industry is in a moment of change and (to many) insecurity: economic instability, democratization, and the influx of AI all insist that researchers become even more creative. Rather than going entirely on the defensive (because that’s no fun), I’d rather focus on how we can use our skills expansively to show exactly how useful, powerful, and dare I say, necessary, we are.

What I want to convey is this:

  • Usability testing doesn't have to be a task-based test in a lab (with little exploration or experimentation) in order to drive measurable business outcomes.
  • Allowing for flexibility in this method means you can have more influence as a researcher and keep the broad expertise you bring relevant.
  • Research tooling—like dscout—can help you do that!

In fact, by expanding our definition of usability testing and our applications of it, we stand only to increase our impact as researchers on products, teams, and organizations. It also allows us to leverage usability testing to gain even deeper insights about our products and our users.

Below I'll outline some basics, put forward some alternative usability approaches, and close with ways you can start weaving them into your practice.

Let's start with some definitions of what usability testing is and is not.

What usability testing is (and isn't)

I want to start by making sure we’re using the same language about where usability testing is situated within the broader research space. I tend to think of research as taking place across three phases of the product life cycle: foundational, generative, and evaluative.

Some companies (or teams/individual practitioners) live primarily in the evaluative space. It’s important work, but it can be reactive. It’s like a chisel: great for small changes, but sometimes you need the sledgehammer for deep thinking, driving change, and delivering business impact. Usability also has some widely accepted metrics that map against business objectives, which are worth bearing in mind.

I really see all three areas as key components of a well-rounded, data-driven research program and, ultimately, a more successful business.

✔ Foundational

Foundational research reveals strategic user insight. This helps point teams towards the right problems to solve before solutions are developed. Foundational research can reveal business opportunities and even areas for disruptive innovation.

✔ Generative (sometimes called formative)

Generative research involves testing low- and high-fidelity concepts along the way to de-risk decisions. This ensures that products being built have a higher chance of success in the marketplace.

✔ Evaluative

Evaluative research gets more granular, to remove friction and improve ease of use. When tied to OKRs or common product metrics, we can show how our work impacts the bottom line.

Now, where does usability fit in?

Usability testing is practically ubiquitous in the evaluative research phase. However, I see it spanning both the generative and evaluative spaces. Here’s how:

In generative research, researchers deploy usability testing to explore an idea or semi-functional prototype in terms of usability heuristics such as:

  • Match between the system and the real world
  • Recognition rather than recall

Asking things like…

  • What are people thinking about when they see our stimulus?
  • How are people reacting?
  • How does it fit into their mental models?
  • Do they understand it?
  • Can they imagine how to use it?

This allows a researcher and a design team to de-risk a product along the way. Now you may be asking yourself, “Isn’t that concept testing or some other evaluative method?” Maybe!

But in this scenario, I’m actually seeing researchers running pretty typical usability testing (think: task-based), but instead of focusing on metrics like ease of use, they are focusing on the meaty topics of perception and first impressions, and using that ever-troubling magic eight ball to predict future usage.

In evaluative research, I see researchers deploy usability testing on a functional prototype or live product to look at usability heuristics such as:

  • Error prevention
  • Flexibility and efficiency of use

Asking things like…

  • Are people able to navigate our product?
  • Can people achieve the goals and tasks they need to?
  • How are people using the product?
  • Is it as intuitive as we think?
  • Can users recover or avoid errors?

The key here is to reduce (or sometimes, deliberately deploy) friction in a product, optimize a user’s time and energy, and oftentimes, ensure that the product’s UX is shepherding users toward goals that hopefully align with business impact.

What I don’t consider usability testing

It’s important to distinguish this before I outline some of the different ways you and your team might approach this type of testing. Simply asking about current usage habits and patterns of a product during an in-depth interview is not usability testing. Following up on specific pain points uncovered during a jobs-to-be-done workshop isn’t usability testing, even though it illuminates usability issues.

Three types of innovation with usability testing

With our definition set, let's turn to what a more innovative and creative usability test might look like. Below are three different varieties. Each attempts to answer a different research question, involves different data types, and is best suited to specific phases.

First, let me outline the elements of each:

✔ Artifact/stimulus

What kind of "thing" are you testing? What format, phase, and completeness is it?

✔ Phase

In which part of the research lifecycle are you working?

✔ Method

What kind of research design or approach are you using? For example, is it an unmoderated task or a moderated session?

✔ Output

This is critical. What kind of data are you hoping to derive from the test? Are you looking for raw numbers (like the time it takes to complete a task) or something more conceptual (like a recommendation about how an app flow should be layered)?

✔ Driving question

Why are you conducting a usability test in the first place? Usually, this is the stakeholder or business need that you translate into a research design.

Those five variables form the basis of each usability type. Let's look at how, together, they combine into different types of usability testing.

Type 1: Exploration+

Moderated or unmoderated exploration and follow-ups

This type of usability testing could be argued to be a form of concept testing, so feel free to write me off on this. The distinction for me, and how I have applied this specifically as a usability test, is how you’re engaging participants with the stimulus at hand.

The questions you’re driving towards here are still task- or goal-oriented, focused on ensuring that users can accomplish a given task or job-to-be-done. Users complete those tasks or jobs in a given context, though, and the exploration piece of this type of usability test is understanding more about that context of use. This is where my ethnographic research background really gets to shine.

Good for:

  1. Low-fidelity concepts (storyboards, static stimuli, etc.), high-fidelity concepts, or existing experiences
  2. Generative phase of research
  3. Any combination of moderated user sessions and unmoderated tasks
  4. Insights-focused outputs
  5. Stakeholder questions like, “We want to explore participants’ reactions to and expectations of a concept, to see if what we’re doing aligns with their mental models before we build it.” Or, “We believe there’s friction across our experience. We expected more engagement and can see that people are dropping off or out of the experience, but we’re not sure why.”

What I want to highlight about this type of usability testing is the power of combining the depth of moderated interactions with the potential breadth of an unmoderated setting.

Can you apply the rule of five in a moderated setting to home in on big-hitting usability issues, and then see how widespread those issues might be via a larger-scale unmoderated study?

You could also do the opposite. Sometimes I find it helpful to survey a large sample of participants to uncover a top list of usability issues, and then dig in deeply with a smaller subset of users to understand where that pain fits into their workflow.
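
If it’s been a while since you’ve seen where the rule of five comes from, it falls out of the classic problem-discovery model. Below is a minimal sketch in Python, assuming Nielsen and Landauer’s often-cited average of 0.31 for the probability that any single participant surfaces a given issue; the numbers are illustrative, not a guarantee for your product.

    # Classic problem-discovery model: the share of usability problems
    # found by n participants, assuming each participant independently
    # surfaces any given problem with probability p (here, p = 0.31).
    def proportion_found(n: int, p: float = 0.31) -> float:
        return 1 - (1 - p) ** n

    for n in (1, 3, 5, 10):
        print(f"{n} participants: ~{proportion_found(n):.0%} of problems")
    # With p = 0.31, five participants surface roughly 85% of problems,
    # which is exactly where the rule of five comes from.

In practice, p varies by product, task, and participant pool, so treat the 85% figure as a planning heuristic rather than a promise.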

Type 2: In the wild, moment in time

Exploratory usability testing using a live site, naturally

My roots are in ethnographic research, so I have a particular fondness for over-the-shoulder, observational work. To me, this is the epitome of the overlap between generative and evaluative research. You can uncover real usability issues in a UI, understand how users expect to use a tool, and see how it fits into their day-to-day.

Good for:

  1. Existing experiences or high-fidelity concepts
  2. Evaluative or generative phases of research
  3. Unmoderated tasks
  4. Insights and numbers-focused outputs
  5. Stakeholder questions like, “We’re about to embark on a redesign: how usable is our current product? How are folks currently using the tool?” Or, “We don’t have an overall sense of how usable our product is, but we have a hunch there are some issues. What should we invest in fixing first?”

These observations are powerful in a moderated setting, but as you know, that’s quite time-consuming. Moderation also carries some risk in this type of study, where you’re attempting to capture natural, real-time usage.

This method can be deployed in a study (like a diary study) where you ask participants to do the task on their own and simply record or report back on that experience.

I rely on follow-up metrics via UMUX (or SUS, if that is still your preference) or SEQ after participants complete the task at hand.

Again, as an ethnographer at heart, I also rely on a video reflection or longer-form open-ended follow-up to help give some color to their Likert response(s).
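
If you’re computing those scores by hand, SUS in particular trips people up because alternating items are reverse-scored. Here’s a minimal sketch in Python of the standard SUS calculation (the example responses are made up):

    # Standard SUS scoring: ten 1-5 Likert items, where odd-numbered
    # items are positively worded and even-numbered items are negative.
    def sus_score(responses: list[int]) -> float:
        if len(responses) != 10:
            raise ValueError("SUS expects exactly 10 item responses")
        total = 0
        for i, r in enumerate(responses):
            # Odd items contribute (response - 1); even items (5 - response).
            total += (r - 1) if i % 2 == 0 else (5 - r)
        return total * 2.5  # scale the 0-40 sum to 0-100

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0

UMUX works analogously (four 7-point items, with the adjusted sum scaled to 0-100), and SEQ is simply the raw rating on its single ease item.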

Type 3: Benchmarking

Unmoderated quarterly usability benchmarking

I’m a qualitative researcher, so benchmarking is a bit foreign to me as a general tool in my usability testing toolkit. That said, it’s really powerful for comparing the usability of a tool over time, or even comparing competitive tools that serve similar purposes.

Good for:

  1. Existing experiences
  2. Evaluative phase of research
  3. Unmoderated sessions
  4. Number-focused outputs
  5. Stakeholder questions like, “We need to understand a baseline of how usable our product is as a KPI to use alongside our other behavioral metrics.”

Benchmarking can be a great way to represent ROI to your company or stakeholders. Proving ROI in research is complicated and can be incredibly taxing to track.

Researchers can build a program of benchmark metrics to show change over time or to track against releases. Benchmarking can also be a powerful way of uncovering underlying UX issues to dig into more deeply (perhaps in a “moment in time” type study).
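
As a sketch of what that program’s analysis might look like, here’s a hypothetical quarter-over-quarter comparison in Python. It assumes you’ve collected per-participant SUS scores each quarter; the scores below are invented for illustration.

    import statistics

    def mean_with_ci(scores):
        """Mean and a rough 95% confidence half-width (normal approximation)."""
        mean = statistics.mean(scores)
        sem = statistics.stdev(scores) / len(scores) ** 0.5
        return mean, 1.96 * sem

    # Hypothetical per-participant SUS scores from two quarterly benchmarks
    q1 = [62.5, 70.0, 57.5, 65.0, 72.5, 60.0]
    q2 = [75.0, 80.0, 67.5, 72.5, 85.0, 70.0]
    for label, scores in (("Q1", q1), ("Q2", q2)):
        mean, ci = mean_with_ci(scores)
        print(f"{label}: {mean:.1f} +/- {ci:.1f}")
    # Overlapping intervals suggest the change may be noise; in practice
    # you'd want a larger sample than this toy one before claiming a win.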

Go forth!

Researchers like me and my team are—like I'm sure you are—having to "prove" our ROI and business value more and more. Usability testing is a known quantity and a clear way we as experienced professionals can do that.

We as a discipline should take every opportunity we can to meet a stakeholder need and demonstrate our expertise as business-value drivers.

What I've outlined here are ways to push on those boundaries a bit, imbuing a staid task-based usability practice with a qualitative follow-up or a more iterative approach. I know I said it’s not my favorite method, but as I wrote this I realized how powerful usability testing is when applied creatively, and I’m inspired to experiment more in my own practice.

I hope this breakdown will inspire you and your team to get creative with usability, too!

Ready to try out these usability approaches?

With dscout you can...

  • Recruit from a pool of quality participants
  • Run foundational, generative, and evaluative research
  • Conduct remote research on a lean timeline
  • Ensure your research drives organizational impact

And that barely scratches the surface.

See how dscout can not only support your next usability test, but become your all-in-one research tool. Schedule a demo!

Taylor is the Principal UX Researcher at dscout. She is a researcher conducting research with researchers to improve the experience of a research platform (say that three times fast).
