Evaluative Research Design Examples, Methods, And Questions For Product Managers


Looking for excellent evaluative research design examples?

If so, you’re in the right place!

In this article, we explore various evaluative research methods and best data collection techniques for SaaS product leaders that will help you set up your own research projects.

Sound like it’s worth a read? Let’s get right to it then!

TL;DR

  • Evaluative research gauges how well the product meets its goals at all stages of the product development process.
  • The purpose of generative research is to gain a better understanding of user needs and define problems to solve, while evaluative research assesses how successful your current product or feature is.
  • Evaluation research helps teams validate ideas and estimate how good the product or feature will be at satisfying user needs, which greatly increases the chances of product success.
  • Formative evaluation research sets the baseline for other kinds of evaluative research and assesses user needs.
  • Summative evaluation research checks how successful the outputs of the process are against its targets.
  • Outcome evaluation research evaluates if the product has had the desired effect on users’ lives.
  • Quantitative research collects and analyzes numerical data like satisfaction scores or conversion rates to establish trends and interdependencies.
  • Qualitative methods use non-numerical data to understand reasons for trends and user behavior.
  • You can use feedback surveys to collect both quantitative and qualitative data from your target audience.
  • A/B testing is a quantitative research method for choosing the best versions of a product or feature.
  • Usability testing techniques like session replays or eye-tracking help PMs and designers determine how easy and intuitive the product is to use.
  • Beta-testing is a popular technique that enables teams to evaluate the product or feature with real users before its launch.
  • Fake door tests are a popular and cost-effective validation technique.
  • With Userpilot, you can run user feedback surveys and build user segments based on product usage data to recruit participants for interviews and beta testing. Want to see how? Book a demo!

What is evaluative research?

Evaluative research, aka program evaluation or evaluation research, is a set of research practices aimed at assessing how well the product meets its goals.

It takes place at all stages of the product development process, both in the launch lead-up and afterward.

This kind of research is not limited to your own product. You can use it to evaluate your rivals to find ways to get a competitive edge.

Evaluative research vs generative research

Generative and evaluation research have different objectives.

Generative research is used for product and customer discovery. Its purpose is to gain a more detailed understanding of user needs, define the problem to solve, and guide product ideation.

Evaluative research, on the other hand, tests how good your current product or feature is. It assesses customer satisfaction by looking at how well the solution addresses their problems and its usability.

Why is conducting evaluation research important for product managers?

Ongoing evaluation research is essential for product success.

It allows PMs to identify ways to improve the product and the overall user experience. It helps you validate your ideas and determine how likely your product is to satisfy the needs of the target consumers.

Types of evaluation research methods

There are a number of evaluation methods that you can leverage to assess your product. The type of research method you choose will depend on the stage in the development process and what exactly you’re trying to find out.

Formative evaluation research

Formative evaluation research happens at the beginning of the evaluation process and sets the baseline for subsequent studies.

In short, its objective is to assess the needs of target users and the market before you start working on any specific solutions.

Summative evaluation research

Summative evaluation research focuses on how successful the outcomes are.

This kind of research happens as soon as the project or program is over. It assesses the value of the deliverables against the forecast results and project objectives.

Outcome evaluation research

Outcome evaluation research measures the impact of the product on the customer. In other words, it assesses if the product brings a positive change to users’ lives.

Quantitative research

Quantitative research methods use numerical data and statistical analysis. They’re great for establishing cause-effect relationships and tracking trends, for example in customer satisfaction.

In SaaS, we normally use surveys and product usage data tracking for quantitative research purposes.

Qualitative research

Qualitative research uses non-numerical data and focuses on gaining a deeper understanding of user experience and their attitude toward the product.

In other words, qualitative research is about the ‘why?’ behind user satisfaction, or the lack of it. For example, it can shed light on what makes your detractors dissatisfied with the product.

What techniques can you use for qualitative research?

The most popular ones include interviews, case studies, and focus groups.

Best evaluative research data collection techniques

How is evaluation research conducted? SaaS PMs can use a range of techniques to collect quantitative and qualitative data to support the evaluation research process.

User feedback surveys

User feedback surveys are the cornerstone of the evaluation research methodology in SaaS.

There are plenty of tools that allow you to build and customize in-app and email surveys without any coding skills.

You can use them to target specific user segments at the time that’s most suitable for what you’re testing. For example, you can trigger them contextually as soon as users engage with the feature you’re evaluating.

Apart from quantitative data, like NPS or CSAT scores, it’s good practice to follow up with qualitative questions to get a deeper understanding of user sentiment towards the feature or product.
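To make the quantitative side concrete, here is a minimal sketch of how an NPS score is typically calculated from raw 0–10 survey responses. The function name and the sample responses are illustrative, not from any particular tool:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (passives count
    toward the total but neither bucket).
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch: 4 promoters, 3 passives, 3 detractors.
print(nps([10, 9, 9, 10, 8, 7, 8, 3, 6, 5]))  # -> 10
```

Tracking this number over successive survey waves is what turns a one-off score into the kind of trend line quantitative research is after.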

Evaluative Research Design Examples: in-app feedback survey.

A/B testing

A/B tests are some of the most common ways of evaluating features, UI elements, and onboarding flows in SaaS. That’s because they’re fairly simple to design and administer.

Let’s imagine you’re working on a new landing page layout to boost demo bookings.

First, you modify one UI element at a time, like the position of the CTA button. Next, you launch the new version and direct half of your user traffic to it, while the remaining 50% of users still use the old version.

As your users engage with both versions, you track the conversion rate. You repeat the process with the other versions to eventually choose the best one.
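Before declaring a winner, it’s worth checking that the difference in conversion rates isn’t just noise. A common way to do that is a two-proportion z-test; the sketch below uses only the standard library, and the traffic numbers are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates.

    conv_a/conv_b are conversion counts, n_a/n_b are visitor counts
    for the control (A) and variant (B) respectively.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical run: old page converts 120/2400, new page 168/2400.
z = two_proportion_z(120, 2400, 168, 2400)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

If the statistic clears the threshold, the variant’s lift is unlikely to be chance; if not, keep the test running or call it a tie.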

Evaluative Research Design Examples: A/B testing.

Usability testing

Usability testing helps you evaluate how easy it is for users to complete their tasks in the product.

There is a range of techniques that you can leverage for usability testing:

  • Guerrilla testing is the easiest to set up. Just head over to a public place where your target users hang out, like a coffee shop or a mall, take your prototype with you, and ask people for their feedback.
  • In the 5-second test, you show the user a design or feature for 5 seconds and then interview them about their first impressions.
  • First-click testing helps you assess how intuitive the product is and how easy it is for the user to find and follow the happy path.
  • In session replays, you record and analyze what users do in the app or on the website.
  • Eye-tracking uses webcams to record where users look on a webpage or dashboard and presents it in a heatmap for ease of analysis.

As with all the qualitative and quantitative methods, it’s essential to select a representative user sample for your usability testing. Relying exclusively on the early adopters or power users can skew the outcomes.

Beta testing

Beta testing is another popular evaluation research technique. And there’s a good reason for that.

By testing the product or feature prior to the launch with real users, you can gather user feedback and validate your product-market fit.

Most importantly, you can identify and fix bugs that could otherwise damage your reputation and the trust of the wider user population. And if you get it right, your beta testers can spread the word about your product and build up the hype around the launch.

How do you recruit beta testers?

If you’re looking to expand into new markets, you may opt for users who have no experience with your product. You can find them on sites like Ubertesters, in beta-testing communities, or through paid advertising.

Otherwise, your active users are the best bet because they are familiar with the product and they are normally keen to help. You can reach out to them by email or in-app messages.

Evaluative Research Design Examples: Beta Testing.

Fake door testing

Fake door testing is a sneaky way of evaluating your ideas.

Why sneaky? Well, because it kind of involves cheating.

If you want to test if there’s demand for a feature or product, you can add it to your UI or create a landing page before you even start working on it.

Next, you use paid adverts or in-app messages, like the tooltip below, to drive traffic and engagement.

Evaluative Research Design Examples: Fake Door Test.

By tracking engagement with the feature, it’s easy to determine if there’s enough interest in the functionality to justify the resources you would need to spend on its development.
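The go/no-go decision usually boils down to comparing the click-through rate on the fake door against some cutoff. A minimal sketch, where the function name, the 5% threshold, and the traffic numbers are all illustrative assumptions rather than an industry standard:

```python
def fake_door_interest(impressions, clicks, min_ctr=0.05):
    """Return the click-through rate and whether it clears a demand cutoff.

    min_ctr is an illustrative threshold; in practice you'd pick one
    that reflects the development cost you need to justify.
    """
    ctr = clicks / impressions if impressions else 0.0
    return ctr, ctr >= min_ctr

# Hypothetical campaign: 1,800 users saw the fake entry point, 126 clicked.
ctr, build_it = fake_door_interest(impressions=1800, clicks=126)
print(f"CTR: {ctr:.1%}, build: {build_it}")  # CTR: 7.0%, build: True
```

The threshold does the real work here: set it too low and every half-baked idea passes, too high and you’ll never greenlight anything.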

Of course, that’s not the end. If you don’t want to face customer rage and fury, you must always explain why you’ve stooped to such a mischievous deed.

A modal will do the job nicely. Tell them the feature isn’t ready yet but you’re working on it. Try to placate your users by offering them early access to the feature before everybody else.

In this way, you kill two birds with one stone. You evaluate the interest and build a list of possible beta testers.

Evaluative Research Design Examples: Fake Door Test.

Evaluation research questions

The success of your evaluation research very much depends on asking the right questions.

Usability evaluation questions

  • How was your experience completing this task?
  • What technical difficulties did you experience while completing the task?
  • How intuitive was the navigation?
  • How would you prefer to do this action instead?
  • Were there any unnecessary features?
  • How easy was the task to complete?
  • Were there any features missing?

Product survey research questions

  • Would you recommend the product to your colleagues/friends?
  • How disappointed would you be if you could no longer use the feature/product?
  • How satisfied are you with the product/feature?
  • What is the one thing you wish the product/feature could do that it doesn’t already?
  • What would make you cancel your subscription?

How Userpilot can help product managers conduct evaluation research

Userpilot is a digital adoption platform. It consists of three main components: engagement, product analytics, and user sentiment layers. While all of them can help you evaluate your product performance, it’s the latter two that are particularly relevant.

Let’s start with user sentiment. With Userpilot, you can create customized in-app surveys that blend seamlessly into your product UI.

Easy survey customization in Userpilot.

You can trigger these for all your users or target particular segments.

Where do the segments come from? You can create them based on a wide range of criteria. Apart from demographics or JTBDs, you can use product usage data or survey results. In addition to the quantitative scores, you can also use qualitative NPS responses for this.

Segmentation is also great for finding your beta testers and interview participants. If your users engage with your product regularly and give you high scores in customer satisfaction surveys, they may be happy to spare some of their time to help you.
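That segmentation logic is easy to picture as a simple filter over user records. The sketch below is a hypothetical, hand-rolled illustration of the idea, not Userpilot’s API; the field names and thresholds are invented:

```python
# Hypothetical user records; in practice these would come from your
# analytics and survey tooling rather than a hard-coded list.
users = [
    {"id": "u1", "weekly_sessions": 12, "nps": 9},
    {"id": "u2", "weekly_sessions": 2,  "nps": 10},
    {"id": "u3", "weekly_sessions": 15, "nps": 6},
    {"id": "u4", "weekly_sessions": 9,  "nps": 10},
]

def beta_candidates(users, min_sessions=5, min_nps=9):
    """Engaged, satisfied users are the likeliest beta volunteers."""
    return [u["id"] for u in users
            if u["weekly_sessions"] >= min_sessions and u["nps"] >= min_nps]

print(beta_candidates(users))  # -> ['u1', 'u4']
```

Combining a usage criterion with a sentiment criterion, as above, filters out both disengaged promoters and heavy users who are unhappy with the product.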

Use Userpilot segmentation to find beta testers.

Conclusion

Evaluative research enables product managers to assess how well the product meets user and organizational needs, and how easy it is to use. When carried out regularly during the product development process, it allows them to validate ideas and iterate on them in an informed way.

If you’d like to see how Userpilot can help your business collect evaluative research data, book a demo!
