
11 Usability Testing Metrics to Enrich Your Reporting

Use these metrics the next time you need to support your qualitative insights with quantitative data.

Words by Nikki Anderson-Stanier, Visuals by Nicole Antonuccio

Many user researchers—especially those who focus on qualitative methods—are often asked about quantifying the user experience. Whether we like it or not, oftentimes our stakeholders want numerical data to supplement quotes, video, or audio clips.

Qual user researchers, myself included, frequently look toward surveys to help add that quantitative spice. However, there's much more to quantitative data than what we glean from surveys alone. In fact, surveys often aren't the ideal methodology to mix in.

I quickly learned that many of our most useful metrics come from adding a layer of usability testing.

If someone commented on the inability to draw conclusions from qualitative data, I could turn around and show them usability metrics. A whole new world opened to me in the marriage of qualitative and quantitative data. I could present statistics, benchmark over time, talk about confidence intervals, and compare our products and services to competitors. The world was my UX oyster.

As I've grown in my career, I've come to value these metrics highly, and I encourage UXRs to find ways to bring qualitative and quantitative data together for a holistic view of an experience.



What metrics should I be using?

Usability testing metrics generally fall under three different types:

✔ Effectiveness

Whether a user can accurately complete a task that allows them to achieve their goal. Can a user complete a task? Can they complete it without making errors?

✔ Efficiency

The amount of cognitive resources it takes for a user to complete tasks. How long does it take a user to complete a task? Do users have to expend a lot of mental energy when completing a task?

✔ Satisfaction

The comfort and acceptability of a given website, app, or product. Is the customer satisfied with the task?

The best way to determine which metrics to use is with the three categories above. I use these categories to decide what I would like to learn from the usability test.

Do I need to learn about how efficient a product is? If so, I want to prioritize measuring time on task or number of errors. I always ask myself, "What is it that I want to learn from the usability test that would enable my team to improve and enhance our product?"

7 task metrics for high-priority problem areas

Using a combination of these metrics can help you highlight high-priority problem areas. For example, if participants report high confidence that they successfully completed a task, yet the majority are failing it, there is a vast discrepancy between how participants think they are using the product and how they actually are, which can lead to problems.

1. Task success

This simple metric tells you whether a user was able to complete a given task (0 = fail, 1 = pass). You can get fancier with this one and assign additional numbers that denote the difficulty users had with the task, but you need to determine those levels with your team before the study.
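To make this concrete, here's a minimal Python sketch (the function name and data are my own, purely illustrative) that turns those 0/1 codes into a completion rate, plus an adjusted-Wald confidence interval, an approach often recommended for the small samples typical of usability tests:

```python
import math

def completion_rate(results, z=1.96):
    """Completion rate plus an adjusted-Wald (Agresti-Coull) 95% CI.

    `results` is a list of 0/1 task-success codes (0 = fail, 1 = pass).
    """
    n = len(results)
    successes = sum(results)
    rate = successes / n

    # Adjusted-Wald: add z^2/2 successes and z^2 trials before computing the CI.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)

    return rate, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Eight participants: six passed, two failed.
rate, low, high = completion_rate([1, 1, 0, 1, 1, 1, 0, 1])
print(f"{rate:.0%} complete (95% CI: {low:.0%}-{high:.0%})")
```

With small samples, the interval matters more than the point estimate: six of eight passing looks like 75%, but the plausible range runs from roughly 40% to 94%.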

2. Time on task

This metric measures how long it takes participants to complete or fail a given task. It gives you a few options to report on: average task completion time, average task failure time, or overall average task time (across both completed and failed tasks).
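The arithmetic is simple, though task times tend to be right-skewed, so the geometric mean is often a better summary of the "typical" time for small samples. A quick sketch (the timings are invented):

```python
import math

def summarize_times(times_completed, times_failed):
    """Summarize task times (in seconds) for completed and failed attempts."""
    all_times = times_completed + times_failed

    mean = sum(times_completed) / len(times_completed)
    # Task times are usually right-skewed, so the geometric mean is often a
    # better "typical time" for small samples than the arithmetic mean.
    geo_mean = math.exp(sum(math.log(t) for t in times_completed) / len(times_completed))

    return {
        "avg completion time": mean,
        "geometric mean completion time": geo_mean,
        "avg failure time": sum(times_failed) / len(times_failed),
        "overall avg time": sum(all_times) / len(all_times),
    }

print(summarize_times([42, 55, 38, 130], [95, 240]))
```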

3. The number of errors

This metric gives you the number of errors a user committed while trying to complete a task. You can also gain insight into common mistakes users run into while attempting to complete the task. If several users try to complete a task in a different way than intended, a common trend of errors may emerge.

4. Single Ease Question (SEQ)

The SEQ is one question (on a seven-point scale) that measures the participant's perceived ease of a task. Ask the SEQ after each completed (or failed) task.

5. Subjective Mental Effort Question (SMEQ)

The SMEQ asks users to rate, on a single scale, how much mental effort a task took to complete.

6. Single Usability Metric (SUM)

SUM enables you to take completion rates, ease, and time on task and combine them into a single metric that describes the usability of a task.
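The published SUM method standardizes each measure before averaging; as a rough, simplified illustration only (the rescaling choices below are my own, not the formal SUM procedure), you could combine the three inputs like this:

```python
def simple_sum(completion_rate, seq_mean, mean_time, target_time):
    """A simplified single usability metric: average of three scores,
    each rescaled to 0-1. (The published SUM method standardizes each
    measure more rigorously; this is only a rough illustration.)
    """
    ease_score = (seq_mean - 1) / 6                 # SEQ 1-7 -> 0-1
    time_score = min(target_time / mean_time, 1.0)  # faster than target caps at 1
    return (completion_rate + ease_score + time_score) / 3

score = simple_sum(completion_rate=0.75, seq_mean=5.5, mean_time=80, target_time=60)
print(f"SUM (simplified): {score:.0%}")
```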

7. Confidence

Confidence is a seven-point scale that asks users to rate how confident they were that they completed the task successfully.

5 metrics for questionnaires

Here are five metrics you can gather with questionnaires. For example, you can ask a user how satisfied they are with your product or brand on a scale of 1-5, with 1 being not satisfied and 5 being very satisfied.

1. System Usability Scale (SUS)

The SUS has become an industry standard and measures the perceived usability of a product. Because of its popularity, you can reference published statistics (for example, the average SUS score is 68).
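Scoring the SUS is mechanical once you have responses: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the sum is multiplied by 2.5 to land on a 0-100 scale. A small sketch with made-up responses:

```python
def sus_score(responses):
    """Standard SUS scoring for one participant.

    `responses` is a list of ten 1-5 ratings in questionnaire order.
    Odd-numbered items are positively worded, even-numbered negatively.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

scores = [sus_score(p) for p in [[4, 2, 5, 1, 4, 2, 4, 1, 5, 2],
                                 [3, 3, 4, 2, 4, 3, 3, 2, 4, 3]]]
print(sum(scores) / len(scores))  # compare against the ~68 average
```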

2. SUPR-Q

The Standardized User Experience Percentile Rank Questionnaire (SUPR-Q) is ideal for benchmarking a product's user experience. It allows participants to rate the overall quality of a product's user experience based on four factors: usability, trust and credibility, appearance, and loyalty.

3. SUPR-Qm

The SUPR-Qm adapts the SUPR-Q for the mobile app user experience. Researchers administer it dynamically using an adaptive algorithm.

4. Net Promoter Score (NPS)

The Net Promoter Score is an index ranging from -100 to 100 that measures the willingness of customers to recommend a company's products or services to others. It gauges the customer's overall satisfaction with a company's product or service, and the customer's loyalty to the brand.
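The calculation itself is simple: the percentage of promoters (ratings of 9-10) minus the percentage of detractors (0-6). A quick sketch with invented ratings:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count only
    toward the total. Result ranges from -100 to 100.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 4 promoters, 2 detractors -> 25.0
```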

5. Satisfaction

You can ask participants to rate their level of satisfaction with the performance of your product, or even your brand in general. You can also ask about more specific parts of your product to get a more focused read on satisfaction.

Compare your product to others

After you've completed your usability test and gathered the metrics above, there are some ways to compare your results to other companies'. There are general benchmarks companies should aim to meet or exceed.

Below are examples from MeasuringU:

• Single Ease Question average: ~5.1
• Completion rate average: 78%
• SUS average: 68
• SUPR-Q average: 50%

You can also find averages for your specific industry, either online or through your own benchmarking analysis. For example, NPS benchmarking sites can show you how your NPS compares to other similar companies.

Finally, you can compare yourself to a competitor and strive to meet or exceed the key metrics mentioned above. Some companies publish their scores online so that you can access them.

If not, you'll have to conduct a benchmarking study on your competitors. When comparing against a competitor, remember that you don't really know their customers' satisfaction with their product. Keep that in mind when drawing conclusions.

An example

There are a lot of different metrics above, and it's impossible to squeeze them all into one usability test, so I encourage picking a few that are most important and relevant to your product.

Below is an actual example I have used in the past. In this scenario, I know participants must find a suitable result as quickly as possible when booking travel, so I focus on efficiency and effectiveness.

Task: Imagine you were going on vacation from Berlin to Munich from March 14th to March 17th. Go to www.fromAtoB.com and search using those criteria.

• When the user begins: start the timer
• Count the number of errors, if any, and record them at the end of the task
• Continue the timer until either:
  • The user completes the given task, or
  • The user indicates they would give up (failed task)
• Record task success
• Ask the user the SEQ:
  • Overall, on a scale of 1-7, one being very difficult and seven being very easy, how did you feel about this task?
• Optional: ask the user if anything was missing or confusing, especially if the user failed
• Move on to the next task (a sketch of how you might log these measurements follows below)
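One way to capture those measurements as you go is a simple per-task record; this is just an illustration, and the field names are mine rather than anything prescribed:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """One participant's measurements for one task (names are illustrative)."""
    participant: str
    task: str
    success: int        # 0 = fail, 1 = pass
    seconds: float      # timer value at completion or give-up
    errors: int         # errors tallied at the end of the task
    seq: int            # Single Ease Question rating, 1-7

results = [
    TaskResult("P1", "book-berlin-munich", success=1, seconds=74.0, errors=1, seq=6),
    TaskResult("P2", "book-berlin-munich", success=0, seconds=142.5, errors=3, seq=2),
]

completed = [r for r in results if r.success]
print(f"Completion: {len(completed) / len(results):.0%}, "
      f"avg time (completed): {sum(r.seconds for r in completed) / len(completed):.0f}s")
```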

A/B testing: A very powerful experimentation tool

A/B testing is a method of comparing two versions of a web page or app against each other to determine which one performs better for a given conversion goal.

For example, you might run a live-site study in which you manipulate elements of the pages that users see.

A typical A/B test involves posting two alternative designs for a given page, or for features on a page. The results can show you whether users interact more with one version or the other. A/B tests are also a great way to continually test small changes in the UI or UX and see if they have an impact.

How to run an A/B test

• Pick one variable to test
• Identify your goal (hypothesis)
• Create a control (version A) and challenger (version B)
• Split sample groups equally and randomly
• Determine your sample size (at least 1,000 participants)
• Use an A/B testing tool
• Test both variations simultaneously
• Test for as long as it takes to obtain your sample size (or longer)
• Look at the results through an A/B test calculator (see the sketch below)
• Follow up with users
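Most A/B test calculators run a two-proportion z-test under the hood. If you'd rather see the math, here's a sketch (the conversion counts are invented) that checks whether version B's conversion rate differs significantly from version A's:

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test, the math behind most A/B calculators."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_test_p_value(conv_a=120, n_a=1000, conv_b=152, n_b=1000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```

With 1,000 participants per variant, a 12% vs. 15.2% conversion difference comes out significant at the conventional 0.05 level.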

Conclusion

Metrics vary depending on each product, and there is no one-size-fits-all UX analytics model. The best thing you can do is experiment with different metrics to see which ones best help your team make better decisions.

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.

To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
