Experiments that matter

Michael Furst · Published in Product Coalition · Jan 7, 2019


Slide from Andy Johns’ presentation, hosted on YouTube by First Round Review: https://youtu.be/OxNrMeRme0E

Large companies love mini-optimizations: many product teams run tiny experiments whose small wins compound significantly over time, and by the end of the year those optimizations may have tripled the user base. However, tests like these are time-consuming, resource-intensive, and require many users to confirm statistical significance on optimizations this small. Smaller startups, and external product partners like my team, constrained by runway and competitive pressure, usually don't have the time, the resources, or enough users, yet need large growth improvements from each test.

Companies in these circumstances have limited opportunities to grow, so each experiment they run needs to be incredibly thoughtful. Because my team and I stake our value on the outcomes of the projects we engage with, we make sure every hour we spend generates the most return on investment for both our partners and ourselves. Across our engagements we've noticed two key opportunities for high-impact, rapid experimentation with the majority of our clients:

a) Radical Changes

b) High Base Conversion Rate Pages

Radical Changes

Small optimizations require large samples of users before a result can be deemed statistically significant; radical changes, by contrast, produce effects large enough to reach significance with a fraction of the users in a fraction of the time. Once built, if the test works, the product reaps the rewards instantly and the change can be fully rolled out to the rest of your users until your next big test. If it radically fails, you can cut your losses just as quickly, reverting to the original having wasted only a few short weeks. Redesigning an entire page or an entire onboarding flow will yield results almost instantly, but it can be a risky use of development time, and can even cost you users, if the test has a significant impact in the unintended direction. Before conducting radical experiments, my team and I help our clients identify, prioritize, and validate the most impactful tests to run with users in under two weeks, ensuring we put our engineering resources behind validated tests with a high probability of positive impact.
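The relationship between effect size and required sample size can be made concrete with the standard two-proportion sample-size formula. This is a rough sketch using the normal approximation; the 3% base rate, the lift sizes, and the α = 0.05 / 80% power defaults are illustrative assumptions, not figures from any particular client:

```python
from math import sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

def sample_size_per_arm(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate users needed per variant to detect a relative lift
    in conversion rate with a two-proportion z-test (normal approximation)."""
    p1, p2 = base_rate, base_rate * (1 + relative_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return int(n) + 1

# A small optimization (+10% relative) vs. a radical change (+50% relative),
# both on a 3% base conversion rate:
print(sample_size_per_arm(0.03, 0.10))  # tens of thousands of users per arm
print(sample_size_per_arm(0.03, 0.50))  # roughly 20x fewer
```

The radical change reaches significance with about a twentieth of the traffic, which is exactly why it fits a short runway.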

High Base Conversion Rate Pages

Another way to make a large impact quickly, with slightly less design effort, is to focus smaller optimizations on a place in the product with a higher base KPI. Base KPIs are usually a proxy for how deep in the funnel, and how engaged, your users are; homepages generally have much lower base conversion rates than the checkout window because users are more committed once they've added items to a cart.

In numbers, here's what this might look like. Say your homepage gets 10k users per day and 300 of them (3%) get to the next step in the conversion funnel. It's going to take time to detect a statistically significant change of over 10% (relative) on that 3% base conversion rate; the page will need many users before you can be sure your change is moving the needle to 3.3% conversion, rather than some external factor causing the small bump. But if, of the 300 users who reach the next step in the funnel, 60 (20%) convert, it takes far fewer users to detect the same 10% relative change to a new 22% conversion rate. Of course, you can try a radical change here as well, but radical changes are a heavier lift. By focusing on the most critical point of the funnel, where users are most committed to using or purchasing your product, your changes will quickly have a more significant impact on the bottom line.
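Running the numbers above through the standard two-proportion sample-size formula shows the gap. This is a sketch using the normal approximation, with α = 0.05 and 80% power assumed as defaults; only the 3%/20% base rates and the 10% relative lift come from the scenario above:

```python
from math import sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

def sample_size_per_arm(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate users needed per variant to detect a relative lift
    in conversion rate with a two-proportion z-test (normal approximation)."""
    p1, p2 = base_rate, base_rate * (1 + relative_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return int(n) + 1

# Same 10% relative lift, two different points in the funnel:
print(sample_size_per_arm(0.03, 0.10))  # homepage: 3.0% -> 3.3%
print(sample_size_per_arm(0.20, 0.10))  # checkout: 20% -> 22%, ~8x fewer users
```

The deeper-funnel test needs roughly an eighth of the users, which is why a modest optimization on a high-base-rate page can pay off so quickly.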

In some projects, we can implement these smaller changes with our front-end team while we design and validate our radical changes for some quick wins.

The bottom line

If you're a Product Manager at a large company with tons of users, time, and resources to test with, a safe 1% week-over-week growth rate might be just what you need; I think most people would write a check for that. And if your company has a growth team, improving that 1% week-over-week growth rate by 5% every four weeks is even better, compounding greatly over an entire year:

[If a company has 10 million users, and grows 1% week over week and grows that 1% week over week by 5% every 4 weeks, at the end of the year they would be at 30.5 million users.]
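How exactly the 5% improvement compounds is open to interpretation; as one illustrative reading (assuming the weekly growth rate itself is multiplied by 1.05 at the start of every four-week block), the compounding can be sketched as:

```python
# Assumptions: 10M starting users; weekly growth starts at 1% and the
# growth team improves the rate by 5% (relative) every four weeks.
users = 10_000_000
weekly_rate = 0.01
for week in range(52):
    if week > 0 and week % 4 == 0:
        weekly_rate *= 1.05        # growth team's 5% improvement to the rate
    users *= 1 + weekly_rate       # one week of compounding growth
print(f"{users / 1e6:.1f}M users after one year")
```

Other readings of the model (for example, applying the 5% lift directly to the user count) land at different end-of-year figures, but every reading beats a flat 1% week over week.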

But we don't have a year, and we don't have 10 million users to put a 1% optimization test in front of. We need to be incredibly deliberate about our tests. My team often works with startups or large companies where we have only a few weeks or months to prove our value, so we focus on where it matters. A big company can run a bunch of half-baked ideas and some of them might just work, but small companies and outsourced partners like us don't have the resources to run experiments that don't matter.

If any of you have seen similar patterns to this or have had different experiences, I’d love to hear about it in the comments.
