Keep Your Website Usability Testing Focused and Results-Driven: Best Practices and Common Mistakes

We highlight best practices in website usability testing (from high-fidelity prototyping to selecting participants) and mistakes to avoid.

Rather than write another comprehensive post on all the steps it takes to conduct website usability testing, we wanted to share some advice on how to do it well — and what to avoid along the way.

Sometimes people conflate usability testing with user experience research as a whole, but usability testing is just one of many ways to promote good UX. Website usability testing examines one or two specific things on your site at a time, not the overall user experience.

To put it in a different context, you wouldn’t turn to usability testing to figure out if drivers love a new make and model of a car, but you might use it to determine if the process of increasing windshield wiper speed is intuitive.

Usability testing reveals the ease of use and level of intuitiveness of a specific function or step on a website. It can help uncover usability issues like confusing navigation, unclear language, and more. 

We’ve already written a comprehensive guide to qualitative usability testing, so we won’t rehash everything here. But we did want to explain some of the techniques we use to conduct website usability testing that results in helpful, actionable feedback. We’ll discuss them in the context of these five steps:

  1. Define the purpose of your test as narrowly as possible
  2. Create high-fidelity prototypes designed with red herrings
  3. Source your participants by user behavior, not by demographics
  4. Create an open and comfortable testing environment
  5. Analyze the results and implement changes

In this article, we’ll go over each step in more detail, highlighting the best practices we use (from high-fidelity prototyping to how we analyze user feedback) as well as the common mistakes you’ll want to avoid in your next usability test.

Note: Looking for a specific audience to participate in your website usability testing? User Interviews offers a complete platform for finding and managing participants. Tell us who you want, and we’ll get them on your calendar. Find your first three participants for free.

1. Make the Scope of Your Usability Test as Narrow as Possible

It may sound obvious, but it’s worth repeating: To conduct website usability testing, you must have something specific in mind you want to test. Website usability testing isn’t a process where you go in with a vague concept and let the method change with the test. 

Maybe you want to test sign-up button placement on your front page. Maybe you want to test how user-friendly it is for customers to add products to their cart and then continue to shop. Whatever it is, make sure you’re able to clearly state what you’re testing before you start designing prototypes and sourcing participants.

For example, at User Interviews, we wanted to give researchers the ability to edit screener surveys after they had launched their projects (something originally only available to our project coordinators). So, as part of the design and product development process, we conducted usability testing to evaluate how intuitive it was for researchers to edit their screeners. 

What it looks like when you view your screener in User Interviews.


We knew that if a researcher edited their screener, a new version would be created automatically after hitting Save. The specific questions we had were: 

  • Was this process intuitive?
  • Were researchers aware that a new version was created after hitting Save?
  • Did this create double the work?
  • Did the process support the researchers’ end goal?
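
That versioning behavior (a new version created automatically after hitting Save) is easiest to picture with a small sketch. Here’s a minimal, purely illustrative example in Python; it is not User Interviews’ actual implementation, and the class and field names are invented.

```python
# Hypothetical sketch of "saving an edited screener creates a new version."
# Illustrative only; not User Interviews' actual implementation.
from dataclasses import dataclass, field

@dataclass
class Screener:
    questions: list
    version: int = 1
    history: list = field(default_factory=list)

    def save_edits(self, new_questions: list) -> None:
        """Preserve the current questions in history, then bump the version."""
        self.history.append((self.version, list(self.questions)))
        self.questions = new_questions
        self.version += 1

screener = Screener(questions=["How often do you run research studies?"])
screener.save_edits(screener.questions + ["Have you written a screener in the last 3 months?"])
print(screener.version)  # 2: each participant's responses can be tied to the version they saw
```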

When we tested how researchers would respond to the screener’s edit feature, we knew what was and wasn’t in scope, so we knew exactly which screens we’d need when it came time to prototype.

  • What wasn’t a focus: the overall effectiveness of the screener, the screener survey builder, or the project creation flow. 
  • What was a focus: how researchers use the edit function and how they respond to the new version that’s created automatically when they edit a screener.

Use Collaboration to Keep the Test on Track

Who at your company will be affected by the changes you make to a certain page or feature? We recommend keeping them updated throughout any user research process, with regular team check-ins to facilitate collaboration. Failing to do so can result in otherwise avoidable mistakes.

In the past, one of our product design team members worked on usability and user testing for a real estate property management platform for independent landlords. The platform was testing whether changing the language around a specific part of the ledger balance made the rent payment process more effective.

One of our team members witnessed the importance of keeping all stakeholders in the loop during a previous position. A communication oversight led to duplicate rent payments after a usability update went into effect.


The testing itself went well, but the testers weren’t actually paying rent. When the team rolled out an updated taxonomy based on the test results, it created a new set of problems: real tenants began scheduling duplicate payments.

In retrospect, this unforeseen complication could have been averted by being more inclusive and collaborative throughout the testing process. Had the engineers and customer support team been shown the prototype, they might have realized that submitted rent payments wouldn’t appear in the UI right away, because the payment job didn’t run until eight hours later, creating a gap between what users believed was happening and what was actually happening.

Exploring a website’s usability problems can help lead to well-informed design decisions, but the results you get are directly tied to your usability testing methods and how well the test is designed to simulate a real-life experience as navigated by real people. Starting with a focused, well-defined goal and keeping the right people involved helps set your usability study up for success.    

2. Use High-Fidelity Prototypes for More Actionable Results

Whether you use high-fidelity or low-fidelity prototypes depends on what part of the process you’re in and what questions you’re trying to answer.

If you’re vetting a new idea or concept, low-fidelity prototypes can be enough to get you started. But for late-stage testing that results in viable user feedback, we almost always recommend high-fidelity prototypes.

Low-fidelity prototypes simply don’t offer as many opportunities for distraction, and without those built-in distractions, the test doesn’t accurately reflect how users will behave in real life. High-fidelity prototypes give users the freedom to interact with the intended design and to complete tasks (or fail to complete them) in ways you never expected, and that’s key.

If you’re using a click-through prototype in InVision, Figma, or Sketch, testers can see which parts of the screen are clickable. If the only clickable elements are the ones on your task path, it’s like lighting the way out of a maze for your participants, when they should be finding their way through without your help. If your test doesn’t allow them to navigate naturally, the results may not be as insightful as they could be.

We address this concern by making prototypes that include secondary and tertiary features that are not necessarily part of the test, such as a Preview feature next to a Save button. These red herrings can help you see where users may get distracted or confused.

A sample prototype for the screener editing process.

When making your prototypes, figure out what screens you need to facilitate the task as well as potential flows or breaking points in the process. Give the user more than one way to complete the task. Don’t simply map your task in steps in the prototype; what you want the user to do needs to be possible in the prototype, but it shouldn't be the only possible action.

For example, at User Interviews, we currently have one way for our customers to create a new project. They simply click the button in the top right corner of the navigation bar. If we wanted to test a new flow, we might test adding a button at the bottom of the page while keeping the one on the navigation bar the same.

With more than one way to complete the same task, you can see which process in the available information architecture is more intuitive.  

3. Source Your Participants by User Behavior, Not Demographics

The closer your usability testing scenario gets to real life, the better. High-fidelity prototypes with red herrings help create a life-like scenario. Similarly, when it comes to screening participants, it’s important to focus on user behavior above demographics. 

When we wanted to let researchers edit their screeners, we knew we had to test this function with actual researchers. Testing the new feature with our project coordinators (who already knew the feature) would be pointless, as would testing the edit button with people who were never going to recruit research participants.
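
To make the behavior-first idea concrete, here’s a minimal sketch of what behavior-based qualification logic might look like in Python. The field names and thresholds are invented for illustration; they aren’t taken from a real screener.

```python
# Hypothetical sketch: qualify participants by what they do, not who they are.
# Field names and thresholds are invented for illustration.

def qualifies(response: dict) -> bool:
    """Screen in people who actually run research and write screeners."""
    runs_research = response.get("conducts_user_research") == "yes"
    wrote_screener_recently = response.get("months_since_last_screener", 99) <= 3
    recruits_participants = response.get("recruits_own_participants") == "yes"
    # Nothing here filters on age, gender, or location:
    # qualification hinges entirely on behavior.
    return runs_research and wrote_screener_recently and recruits_participants

responses = [
    {"conducts_user_research": "yes", "months_since_last_screener": 1,
     "recruits_own_participants": "yes"},
    {"conducts_user_research": "no", "months_since_last_screener": 24,
     "recruits_own_participants": "no"},
]
qualified = [r for r in responses if qualifies(r)]
print(f"{len(qualified)} of {len(responses)} respondents qualify")
```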

In the participant list, researchers see who qualified for their study and which version of the screener they completed.


Luckily, we had easy access to our own existing customer base. We ran our usability test with five different researchers to understand their process and learn what issues were causing the researchers to edit their screeners in the first place. 

But it isn’t always so simple to source participants. Perhaps there’s red tape preventing you from contacting your own users, or maybe your existing user base isn’t the target audience for your new product. Whatever the case may be, there are plenty of best practices for finding good research participants.

Plus, User Interviews lets you either recruit participants whose behavior matches that of your real users, or centralize communication with your own user base (with built-in CRM and automation tools to help you keep track of who’s participated in which test and when).

Give it a try with your first three participants free.

4. Create an Open and Comfortable Testing Environment

You want your testers to behave as they normally would, but you also want insight into their thought process during the test. Whether the test is done in a usability lab or you’re conducting remote usability testing, the main focus is creating an environment where participants are comfortable describing their experience.

One of the challenges we’ve faced is getting our testers to talk aloud as they complete a task. Narrating every action doesn’t come naturally to many people, but if participants aren’t explaining their choices, you lose a lot of valuable information. Creating a natural environment where participants feel comfortable sharing their thought process in real time is a huge win for your usability test.

Just make sure you’re encouraging them to speak without asking leading questions that prompt them to say what you want to hear.

For example, say you want to know how someone will interact with a feature such as editing a screener survey. You might think to ask, “Do you have any issues with editing your screener?” This assumes, albeit subtly, that they either need to edit their screener survey or have issues doing so. In this scenario, you’re pushing the participant to say something you want to hear.

To avoid leading the participant, you could rephrase the question: “Tell me about your experiences with writing and using screener surveys in your research studies.” Now you’re asking the participant to reflect on their own process and experiences. They might share what you want to hear, or they might not, but at least you know it came from them rather than from an attempt to tell you what they think you want to hear.

5. Analyze the Results and Implement Changes

Some of the results from website usability testing will be pretty straightforward. Others will take dedicated analysis to pull out meaningful insights.

In our usability test to let researchers at User Interviews edit screeners after launching a project, we learned immediately that the process of editing screeners was intuitive. That said, we still found that there was a level of disconnect in the language we used to convey that the screener was saved. This gave us a specific area where we could improve usability. 

The Edit Screener Survey pop-up, with clarified language about saving.


When you’re reviewing your session data, it can be helpful to start by grouping insights by category. It’s a good jumping-off point to identify problems and start exploring solutions.
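
For example, here’s a minimal sketch of that grouping step in Python; the categories and notes are invented for illustration, not taken from a real study.

```python
# Minimal sketch: group session observations by category to see where problems cluster.
# Categories and notes are invented for illustration.
from collections import defaultdict

observations = [
    ("language",   "P2 wasn't sure what 'new version' meant after saving"),
    ("navigation", "P1 looked for the edit option under project settings"),
    ("language",   "P4 expected Save to overwrite the screener, not copy it"),
    ("navigation", "P5 clicked Preview while trying to save"),
    ("language",   "P3 reread the save confirmation twice before moving on"),
]

grouped = defaultdict(list)
for category, note in observations:
    grouped[category].append(note)

# Show the categories with the most observations first.
for category, notes in sorted(grouped.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{category} ({len(notes)} observations)")
    for note in notes:
        print(f"  - {note}")
```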

All five of our steps revolve around the simple premise of making your website usability testing focused and nuanced, so the results you get will mirror (as closely as possible) the results you’ll get when changes are implemented.

Note: Looking for a specific audience to participate in your website usability testing? User Interviews offers a complete platform for finding and managing participants. Tell us who you want, and we’ll get them on your calendar. Find your first three participants for free.