MVPs aren’t as risk-free as you think. Here’s why most still fail

Rob Calvert
Published in Product Coalition
5 min read · Apr 1, 2020

Most of us in the startup community have embraced the principles of Lean product development and MVPs.

Despite this, most startups still flatline or fail after their MVP has been released.

Which raises the question: why?

Startups are complex, so there are many factors at play. Timing certainly comes into it. The quality of the team matters too, as does striking the right balance between time spent on product development and time spent building your distribution.

But whatever the conditions, the first release of a product should get at least some traction.

Still, many don’t.

So why does it happen?

Taking the First Leap

Most teams use industry knowledge, entrepreneurial instincts and a dash of research to spot and verify a market opportunity.

And if a team is familiar with today’s best-practice frameworks, they’ll begin to ask questions such as:

  • What is the customer ultimately looking to achieve? (i.e. What jobs are they looking to get done?)
  • How else can customers achieve the same goals, and what are the pain points?
  • Who feels this pain the most? (i.e. Who are our early adopters likely to be?)
  • How might our customers expect the value we provide to be delivered and experienced?

Now what better way to answer these questions than to define a ‘simple’ version of the product and bring it to market quickly?

This is what many teams do. Consciously or not, they take an educated guess at a series of key questions, define an initial product and go-to-market strategy, and spend thousands of pounds (and months of effort) executing.

They’ve made assumptions, but they’re not blind to this. They know they could be wrong. But it’s fine, they say. “We’ll see what happens, learn, and iterate”.

Sounds reasonable, right?

At first, yes. But this approach brings major risks.

All Bundled Up

Until a product begins to get traction, most key decisions are an assumption. You have assumed that X is the right way to achieve Y, or that A is the right answer to B.

However good a team is, many (if not most) of these assumptions will be wrong. If they were right, more launches would be a success.

And crucially, because all assumptions are tested together, it is nearly impossible to understand which are wrong and which are right.

That’s because the first version of a product isn’t as simple as you think. It contains layers and layers of assumptions; from the core reason it exists, through to the usability of the product or wording of the first sales calls.

Unpicking this is — at best — costly and time-consuming. That’s because anything in this ‘assumption stack’ could be wrong.

“What do we need to change?”, the team asks. One person might have 2 or 3 ideas. A team of three might have 7 or 8 ideas between them. More? Maybe 10 or 15.

Finding the real answers is difficult enough. Add the emotion, sunk cost and hope that come with a launch, and it becomes exponentially more difficult.

This is precisely why growth curves for most startups are flat forever. They run out of money before they can unpick this core problem.

Unbundling

This is also why best-practice product methodology, pre-launch, runs as follows:

  • Set out your plan on paper (using something like a Lean Canvas)
  • Identify all key assumptions
  • Put them in priority order, from riskiest to least risky (leap-of-faith assumptions that are core to the business are the riskiest)
  • Do something (an ‘experiment’) to get a signal on whether this assumption is right or wrong

A simple principle, but there are a couple of key elements worth emphasising:

  1. If you can, test assumptions in isolation. For example, a landing page does not just test your value proposition. You’re getting people to it somehow, so you’re testing your channels. You’ve detailed your solution or features too, so you’re testing that. And you’re probably targeting a particular audience, so you’ve made an assumption about who your early adopters are.
  2. You’re looking for signals, not definitive answers. Early signs that your assumption is wrong or right.

Whatever you do, don’t ask people what they want. Instead, get real people to take real actions. Getting your target customers to ‘commit’ (time, money, social capital, etc) is a better test of value.

(Creating great experiments is an article in its own right. Here are a couple of articles to get you started.)

Using this approach, teams can begin to answer key questions with increased confidence, and increase the chances they’ll bring something to market that customers actually want.

Conversely, this approach can save thousands of pounds and months of effort by reducing the chance of bringing something to market that customers don’t want.

Which method do you prefer?

Practical Steps to Get Going

Processes and principles are great, but, as we all know, getting moving is what matters.

Here’s something simple you can do — whether you’re pre-launch, or you’ve launched but you’re struggling:

  1. Get your team together, and fill out a Lean Canvas.
  2. Individually, spend 10 minutes reviewing it, and write down all of your assumptions (i.e. anything you’ve written without a significant amount of data to back it up). Write them as statements. (e.g. “Customer segment X feels the pain of Y the most”).
  3. As a team, group them into themes, then individually (and privately) vote for the three riskiest assumptions.
  4. Come together again, aggregate the scores, and agree on the assumption you want to test.
  5. Spend 10 minutes individually sketching ideas for experiments to prove/disprove these assumptions. (The Real Startup Book is a good resource for the types of experiments you can run. I’d suggest someone creates a shortlist for inspiration and shares it at the start of this exercise.)
  6. Pick a winning experiment, agree success/fail criteria, and set it in motion.
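If it helps to make steps 3–4 concrete, the voting and aggregation can be sketched in a few lines of code. This is just an illustrative sketch: the function name and the sample assumptions are hypothetical, and in practice a whiteboard and sticky notes work just as well.

```python
from collections import Counter

def riskiest_assumptions(votes, top_n=1):
    """Aggregate each team member's picks for the riskiest assumptions.

    votes: one list per person, each naming up to three assumptions.
    Returns the top_n assumptions by total vote count.
    """
    tally = Counter(a for person in votes for a in person)
    return [assumption for assumption, _ in tally.most_common(top_n)]

# Hypothetical example: three teammates each privately vote for three.
votes = [
    ["early adopters are freelancers", "channel: cold email", "price: £20/mo"],
    ["early adopters are freelancers", "price: £20/mo", "pain is acute enough"],
    ["early adopters are freelancers", "channel: cold email", "pain is acute enough"],
]

print(riskiest_assumptions(votes))  # the assumption the team tests first
```

Whichever assumption tops the tally becomes the subject of your first experiment; ties are a prompt for a quick team discussion, not more process.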

You can treat this process exactly like you would an Agile sprint, with planning, a show and tell and retrospectives.

And if you’re getting really into it, you could set up an experiment board.

But for now, if this struck a chord, just aim to complete steps 1–6 above. Start testing the key assumptions of your startup before you bundle them all together and get hit by the classic post-launch pain.
