Product Management 101: A/B Testing

A data-driven product manager wants to ensure that every part of their product is working as effectively as possible. That means regularly tweaking aspects of the product to improve its overall performance. One way to do this is through A/B testing.

What is an A/B test?

An A/B test (also referred to as a “split test”) compares two different versions (A and B) of something to see which one drives the better outcome. The “something” could be a landing page, a logo, a color choice, an in-app guide design, a piece of product messaging, etc. It could even be an entire user interface or feature update. The desired outcome might be more user engagement, a higher conversion rate, more button clicks, and so on; it all depends on the purpose of the variations being tested. An A/B test presents a subset of users with the “alternative” version of the item being tested, then uses statistical analysis to determine which version performs more effectively.

What are some use cases for A/B testing?

PMs can use A/B testing to compare the performance of just about any aspect of their application. Common examples include:

  • Landing pages
  • Site layout
  • User interfaces
  • Call-to-action text
  • CTA button color and/or placement
  • In-app guide messaging
  • Product messaging

Within each of these categories, there are a number of variables a PM might want to test or at least keep in mind as they analyze the results.

How do PMs run an A/B test?

The A/B testing process broadly follows the scientific method. First, the PM comes up with a hypothesis they’d like to test. For example, they might want to know which version of a specific in-app guide receives more clicks. When developing the hypothesis, the PM should specify the data they’ll collect during the analysis (clicks, engagement, bounce rate, etc.).
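
To make this first step concrete, the hypothesis and its metrics can be written down before any tooling is involved. The sketch below is a generic illustration in Python; the record and its field names are hypothetical, not part of any particular testing tool.

    from dataclasses import dataclass, field

    @dataclass
    class ExperimentPlan:
        """Hypothetical record capturing an A/B test hypothesis up front."""
        name: str
        hypothesis: str      # what we expect to happen, and why
        primary_metric: str  # the single metric that decides the test
        guardrail_metrics: list[str] = field(default_factory=list)  # must not regress

    plan = ExperimentPlan(
        name="in-app-guide-copy",
        hypothesis="Variant B's shorter guide copy will receive more clicks than variant A's.",
        primary_metric="guide_click_rate",
        guardrail_metrics=["bounce_rate", "guide_dismiss_rate"],
    )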

Next, they set up the test itself. To do this, they could use any number of A/B testing tools, such as Optimizely, AB Tasty, Inspectlet, and LaunchDarkly. These tools are particularly useful in that they can help the PM define key metrics, decide how long to run the test, segment which users will see which variant, and determine when statistical significance is reached.
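
Under the hood, such tools typically assign each user to a variant deterministically, so the same person sees the same version on every visit. Below is a minimal sketch of hash-based bucketing in Python, assuming a string user ID; this is a generic technique, not any vendor’s API.

    import hashlib

    def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
        """Deterministically bucket a user: same user + experiment -> same variant."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same user always lands in the same bucket across sessions.
    assert assign_variant("user-42", "cta-color-test") == assign_variant("user-42", "cta-color-test")

Hashing on the experiment name as well as the user ID keeps a user’s buckets independent across different tests.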

The test then runs for the specified amount of time or until a statistically significant result is obtained. Once the test is over, it’s time to analyze the data. Sometimes, the “winner” of the test is very clear. But more frequently, the end result is a bit fuzzy. That’s not necessarily a bad thing. After all, an insignificant/inconclusive test result still offers valuable information about how users are or aren’t interacting with the variants being tested.
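
As an illustration of the statistics involved, the simplest way to compare two conversion rates is a two-proportion z-test. The sketch below is a textbook implementation in Python with made-up numbers; dedicated testing tools handle this math (and subtler issues like peeking at results early) for you.

    from math import erf, sqrt

    def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value for a difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
        z = (p_b - p_a) / se
        # Normal CDF via erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    # Hypothetical results: A converts 120 of 2,400 users; B converts 156 of 2,400.
    print(two_proportion_p_value(120, 2400, 156, 2400))  # ~0.026, below the usual 0.05 bar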

Are there different types of A/B tests?

Yes, PMs have a few different types of A/B test methodologies at their disposal. The traditional A/B test compares two different versions of the item being tested. However, another option is the A/B … N test, which compares more than two variants. Multivariate testing is similar but involves multiple combinations of the different variables being tested. And a PM can test variations of a single element across multiple pages by using multipage testing.
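
To see why multivariate tests grow quickly, note that every combination of every variable becomes its own variant. A quick illustrative sketch in Python, with made-up variables:

    from itertools import product

    # Hypothetical variables for a multivariate test of a CTA button.
    cta_text = ["Start free trial", "Get started"]
    cta_color = ["green", "blue", "orange"]

    # Every combination is a separate variant: 2 x 3 = 6 in total.
    for text, color in product(cta_text, cta_color):
        print(f"{color} button reading '{text}'")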

What are some of the benefits of A/B testing?

One of the biggest advantages of A/B tests is that they are easy and inexpensive to run. There’s no shortage of A/B testing tools on the market, many of which offer low-cost versions and free trials. An A/B test also measures user behavior quantitatively, so its results are far more data-driven than those of a customer survey or user focus group.

Recommended reading

“The Four Keys to Data-Informed Product Management” by Rekha Venkatakrishnan

Is your team building a data-informed product? Learn four key ways to use product data, including A/B testing results, at various stages of the product development lifecycle.

“9 Mistakes to Avoid While A/B Testing” 

This section of VWO’s extensive overview of A/B testing focuses on the many potential pitfalls PMs can face when getting started with these tests. These include running the test for too long, tracking the wrong success metrics, and testing too many variables at once.

“What Every Product Manager Should Know About A/B Testing” by James Copeman

This one is a definite must-read for all PMs, no matter their prior experience with A/B testing. It goes over the basics, shares common use cases, and offers advice for running successful tests.