Finding the Truth Behind MVPs

A Successful Start

I learned about Minimum Viable Products like 99% of other Product Managers - through The Lean Startup by Eric Ries. When I happened upon the book and Eric’s method, I thought, “YES! This is what I’ve been searching for. This makes so much sense.” Testing products before you build them? What a novel idea! I was excited. I was energized! I was now going to build things that mattered to my customers.

Frankly, this came at the perfect time for me. I was tired of building products that no one used. Watching and waiting for the numbers to go up in Google Analytics, only to be let down again. It was getting old. My team and I spent months building products we thought would be successful, only to be disappointed. When I had the chance to try the MVP approach on a new product, I jumped on it.

The CEO of our ecommerce company approached me with a new product idea that was going to increase engagement and sell more items. He wanted to implement a Twitter-like feed so that the celebrities who sold products on our site could also post about their lives. This idea was prime for testing. It was ripe with assumptions: “Do our customers want to hear about what our celebrities are up to directly in our platform? Will this sell more items or increase retention?”

I went to the engineers and asked them how long it would take to fully implement this idea from scratch, the way our CEO was proposing. With rough estimates in hand, I went back to our fearless leader and told him, “This is going to cost us $75,000 to fully build, and we’re not even sure our customers want it. I can prove in a week, with $2,000, whether this is going to move the needle.” Just like that, I had buy-in.

Within one week, we proved this was a terrible idea. A week after that, we found a different solution that tripled the clickthrough rate on our products and doubled the conversion rate. The whole company was hooked, and we were allowed to keep experimenting. “Great,” I thought, “everyone can see the value of this! It’s a no-brainer.”

Eventually, I moved on to another job. I was excited about bringing the concept of MVPs to this team as well. Honestly, I was kind of shocked that everyone in that company wasn’t using them already. Did they just not know about this wonderful witchcraft? I was so confident that everything would go exactly as it had in my previous company.

“We Don’t Do That Here”

When the words “Minimum Viable Product” left my mouth for the first time, the reaction in the room was quite different from what I expected. You would have thought I had recited every curse word in the English language. Like I had just busted out a Biggie Smalls song in the middle of the meeting, not bleeping out any of the colorful lines. They responded as if I had offended their ancestors. Finally the CMO broke the silence with, “We don’t do that here. We don’t ship terrible products.”

Over the next few years I experienced this same reaction countless times.

I’ve learned that Minimum Viable Products are widely misunderstood. Some people are afraid to try building MVPs because of preconceived notions. Others use the word so much that it's lost all meaning. “We should MVP that!” has become a battle cry in product development that just means make it minimal, make it cheap, and make it fast.

How do people end up here? The story is almost always the same. Someone picked up The Lean Startup, had their mind blown, and said, “We should do that here!” They saw a quick and cheap way to execute on a product without fully understanding what the purpose of an MVP was or how to build one well. At my particularly jaded company, a developer had created a hack to test a new feature, and it broke every time someone tried to use it. Customers were pissed. The company blamed MVPs as a whole, rather than sloppy development.

It’s not the MVP’s fault. The problems stem from misinterpretation of what an MVP is and from miscommunications along the way.

What is an MVP?

My definition of a Minimum Viable Product is the smallest amount of effort you can expend to learn. When I teach this in workshops, I’m usually met with disagreement: “MVPs are the smallest feature set you can build and sell! Not just an experiment.”

So what is the truth? Is an MVP a product, a subset of a product, or just an experiment?

The term “Minimum Viable Product” was coined by Steve Blank and then made popular by Eric Ries in The Lean Startup. I went back to research how these two experts, and a few others, defined it.

“The minimum viable product is that product which has just those features and no more that allows you to ship a product that early adopters see and, at least some of whom resonate with, pay you money for, and start to give you feedback on.” - Eric Ries, 2009

“Minimum feature set (“minimum viable product”) is a Customer Development tactic to reduce engineering waste and to get product in the hands of Earlyvangelists soonest.” - Steve Blank, 2010

“A minimum viable product (MVP) is not always a smaller/cheaper version of your final product.” - Steve Blank, 2013

“An MVP is not just a product with half of the features chopped out, or a way to get the product out the door a little earlier. In fact, the MVP doesn’t have to be a product at all. And it’s not something you build only once, and then consider the job done.” - Yevgeniy Brikman, Y Combinator, 2016

Confusing, yes? The one thing that was clear to me through this research is that the definition of an MVP has evolved. In the beginning, we talked about this concept as something to validate startup ideas. All those products were searching for product-market fit. I learned about the Concierge Experiment and Wizard of Oz in those days, which helped shape my definition and understanding. As I continued to use these methods as a Product Manager in enterprises and other more mature companies, I had to customize both my definition and the practice of building Minimum Viable Products. What I’ve learned is that you need both - experimenting and building a minimum feature set - to be successful.

While there’s plenty of dissent over the definition of an MVP, everyone pretty much agrees on the goal. The goal of a Minimum Viable Product is to learn what your customers want as rapidly as possible, so you can focus on building the right thing. So let’s get rid of the buzzword and focus on that premise. Let’s stop arguing about what an MVP is and start talking about what we need to learn as a company.

How and When to Learn

When we start off building a new feature or product, there are a million questions to answer. “Is this solving the customer’s problem? Does this problem really exist? What does the user expect to gain with the end result?” We have to find the answers to these questions before committing ourselves to building a solution.

This is why starting with a minimum feature set is dangerous. When you jump into building version one of a new product or feature, you forget to learn. Experimenting helps you discover your customers’ problems and the appropriate solutions for them by answering these questions. And it doesn’t end with just one experiment. You should have multiple follow-ups that keep answering questions. The more questions you answer before committing to a final solution, the less uncertainty there is around whether users will want or use it.

Once you have proven that a user wants your product, it’s time to investigate a minimum feature set. Now we can start to find a product that is marketable and sellable, but also addresses the user’s needs that were uncovered through experimentation. Delivering this product to market as fast as possible is the ultimate goal, so you can get feedback from customers and iterate. But, you have to be careful to deliver a quality product, even if it’s tiny. Broken products do not produce value for your customers, only headaches. Any version of a product that does not deliver value is useless.

How does this look in practice? At one SaaS company I worked at, we had to create a new feature that would help our customers forecast their goals. Our sales team passed along input from their conversations with the customer. After reviewing the information, we knew we had to learn more.

We met with the customer to understand what they were looking for in this forecaster. Once we thought we had a good grasp of the needs, we built them a spreadsheet and dumped their current data into it. This took us less than a week. We presented the spreadsheet to the customer and let them use it for a week before getting their feedback. We didn’t get it right the first time, or the second, or the third. But, on the fourth iteration, we were able to deliver exactly the results the customer was looking for. We did the same process with a few other customers to make sure this scaled.

While the spreadsheet was providing immediate value to some of our customers, we didn’t have the resources to do it for all of them. So, we had to build a software solution. We started exploring the Minimum Feature Set, using the feedback we received on the spreadsheet. There were plenty of other bells and whistles the customers wanted, but we pared it back to the essentials in the first version. We spent a few weeks getting the first version working with the most important pieces included. Then we turned it on for the clients already using the spreadsheet to get their feedback. After iterating a few times, we began selling it to others.

This process will help your company find problem-solution fit. If you are creating a new feature or a new business line that solves a different problem for your user, this method can help ensure you’re building the right thing for your customer. But what if you have a mature product and are not starting from scratch?

Experimenting in Enterprises

Many enterprises today are introduced to the Minimum Viable Product by consultancies who propose creating an entirely new product from scratch. This may not be the best idea. When your company already has product-market fit, you have already built a product that customers are using. You do not need to rebuild your product; you need to improve it. The methods need to be adapted for this case.

Something that sets these two cases apart is the goal. When searching for product-market fit, you want the user to adopt and probably pay for your product. This is not always the goal when improving existing products. It could be to improve retention or increase engagement with certain parts of your product. Whatever your goal is, it should be clear to the team and should inform all their decisions.

Once the goal is clear, you again focus on learning. What do you need to learn before committing to a solution? Write out hypotheses about what you think will move the needle, and then design a Product Experiment to test each one. You don’t have to create an entirely new product. Maybe you will find out a new product is necessary while experimenting, but that should not be the end goal.

A company I coached wanted to increase their conversion rate across the site. They already had a popular ecommerce subscription product with thousands of users. Traffic was coming in, but users were not converting as much as projected, and nothing had moved the needle when it came to offer testing. The team dug into where their best customers were coming from and found that many came through referrals. But only a few people were actually sending referrals. Through user research they discovered two main reasons why: users didn’t know they could send referrals, and they were not sure what the referral gave their friends.

The team decided to tackle the first problem with the hypothesis, “If we clearly let users know that they have referrals available, they will send them.” The first experiment involved showing a pop-up that encouraged users to send referrals the next time they logged in to the site. Referral sends went up 30%, leading to an increase in conversion rate! They did not have to implement a whole new program; they just had to make the existing one visible.
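
To make this concrete: the gating and measurement for an experiment like this can be tiny. Below is a minimal sketch in TypeScript, assuming an invented bucketing helper, login and referral event hooks, and a hypothetical showReferralPopup UI call; it is an illustration, not this team’s actual code.

    // Sketch: gate a pop-up experiment to a random cohort and count the
    // metric that matters (referral sends). All names are illustrative.

    type Variant = "control" | "popup";

    // Bucket a user deterministically so they see the same variant every visit.
    function assignVariant(userId: string, rolloutPercent: number): Variant {
      let hash = 0;
      for (const ch of userId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      }
      return hash % 100 < rolloutPercent ? "popup" : "control";
    }

    // In-memory tallies; a real experiment would send events to analytics.
    const tally = {
      control: { logins: 0, sends: 0 },
      popup: { logins: 0, sends: 0 },
    };

    function onLogin(userId: string): void {
      const variant = assignVariant(userId, 50);
      tally[variant].logins += 1;
      if (variant === "popup") {
        showReferralPopup(); // hypothetical UI call
      }
    }

    function onReferralSent(userId: string): void {
      tally[assignVariant(userId, 50)].sends += 1;
    }

    function showReferralPopup(): void {
      // Render the reminder that referrals are available.
    }

Comparing sends per login between the two buckets tells you whether the pop-up moved the needle, before you invest in anything bigger.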

This team continued to dive into problems around conversion. They learned that the top three problems on the direct-to-site experience were:

  1. Customers were not sure how the service worked.
  2. Customers wanted to know what specific items came in the subscription.
  3. Customers were not sure why the product was priced higher than competitors.

The next step was to see if they could deliver value and learn at the same time. They created the hypothesis, “If we give users the information they are searching for in the sign-up flow, they will convert more.” They also wanted to learn which questions people clicked on most, to see which problem mattered most. They planned a simple way to get people the information they needed while signing up: adding a few links into the sign-up flow that echoed the questions back to users. When a link was clicked, a pop-up appeared explaining the answer to the question. At the end of a week of building and testing, they could see the experiment had moved conversion rate closer to their original goal. They also learned that showing exactly what came in the subscription was the most important piece of information. The team continued to learn what was preventing prospects from signing up, and systematically answered those questions through experimentation.
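
Instrumentation for this kind of experiment can be equally light. Here is a sketch, with an invented question list and a placeholder analytics call, of one way to count which question link prospects click most:

    // Sketch: count clicks on each sign-up-flow question link so the team
    // can see which unanswered question blocks the most prospects.
    // The question texts and the analytics transport are assumptions.

    const questions = [
      "How does the service work?",
      "What items come in the subscription?",
      "Why does it cost more than competitors?",
    ] as const;

    const clicks = new Map<string, number>(
      questions.map((q) => [q, 0] as [string, number])
    );

    function onQuestionClick(question: string): void {
      clicks.set(question, (clicks.get(question) ?? 0) + 1);
      // In production this would also fire an analytics event, e.g.
      // analytics.track("signup_question_clicked", { question });
    }

    // After a week of traffic, rank the questions by click volume.
    function rankQuestions(): [string, number][] {
      return [...clicks.entries()].sort((a, b) => b[1] - a[1]);
    }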

Caution Ahead

One mistake companies make with Product Experiments is keeping them in play after the learning is done. These experimental features eventually break and cause problems for your users. You are designing to learn and move on, not to implement something that will last forever.

The team above learned that the information they provided was helping prospects answer their questions, but that not enough people were seeing the solution. After experimenting more, they realized a more robust solution would be needed.

It was time to start planning a sustainable solution that incorporated the learning from the experiments. Moving away from Product Experiments to the next phase is not an excuse to stop measuring. This team was still releasing components in small batches, but those batches were complete with beautiful design and a more holistic vision. After every release, which happened biweekly, they would measure the effect it had on conversion and test it in front of customers. The feedback would help them iterate towards the product that would reach the conversion rate goal.

Chris Matts has eloquently named this the Minimum Viable Investment. He has also pointed out that you should be looking not only at improving your user-facing products, but also at the infrastructure that helps you create those products quickly. The team above was improving their site architecture so they could experiment faster, working on it while waiting for test results. I introduced the Product Kata to teams improving their products to help them find structure through Product Experiments and Minimum Viable Investments.

Learning is the Goal

One of the scariest parts of this process for companies is releasing things that are not perfect. It’s important to balance good design with fast design, and good development with fast development. The best way to do this is to have UI designers and developers pair. After defining the goal for the iteration or experiment, sit down together and talk through ideas on how to execute. If we design in a slightly different way, is it just as useful to the user, but easier to build? Prototype together. Sketch together. Work side by side and talk about trade-offs the whole time. This is how good teams move quickly and avoid rework.

By learning early what users wanted, I’ve avoided countless hours of rework and many thrown-away features. This is why it’s so important for teams to experiment, whether they are B2B or B2C. Give the product teams access to users. I’ve seen companies fear that their employees will say or do things that upset users. If you teach your product teams the right way to communicate and experiment, this will not happen. Train the teams in user research. Don’t release experiments to everyone. Create a Feedback Group with a subset of users. Build infrastructure so you can turn experiments and features on just for that smaller group. These users will guide you to create features that fit their needs.
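
As an illustration of that kind of infrastructure, here is a minimal sketch of a gate that shows an experiment only to a Feedback Group. The flag set and the inFeedbackGroup field are assumptions, not any specific product’s API:

    // Sketch: turn an experiment on only for a Feedback Group, so
    // unfinished work never reaches the full user base.

    interface User {
      id: string;
      inFeedbackGroup: boolean;
    }

    // Flags currently enabled; in practice this would live in a config
    // service so experiments can be switched off without a deploy.
    const enabledExperiments = new Set<string>(["goal-forecaster"]);

    function canSeeExperiment(user: User, experiment: string): boolean {
      return user.inFeedbackGroup && enabledExperiments.has(experiment);
    }

    // Usage: gate the experimental UI behind the check.
    const user: User = { id: "u-123", inFeedbackGroup: true };
    if (canSeeExperiment(user, "goal-forecaster")) {
      // Render the experimental feature for this user only.
    }

    // Retiring a finished experiment is one line:
    // enabledExperiments.delete("goal-forecaster");

The same switch lets you retire an experiment once it has taught you what you need, before it breaks in front of customers.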

I dream of a day when I can walk into a company and mention “MVP” and not hear, “We don’t do that here.” While the definition of Minimum Viable Product may work us into a tizzy, the goal behind it is extremely valuable for product companies. If you’re having trouble implementing these practices inside a company, try leaving out the buzzwords. Use terms like experimenting and focus on the premise. Learning what your users want before you build it is good product development. Make sure when you do invest in a feature or solution, it’s the right one.

This post was originally published on InfoQ on April 18, 2016.

Melissa Perri

I am a strategic advisor, author, and board member who works with leaders at Fortune 500 companies and SaaS scale-ups to enable growth through building impactful product strategies and organizations. I’ve written two books on Product Management: Escaping the Build Trap and Product Operations. Currently, I am the CEO and founder of Produx Labs, which offers e-learning for product people through Product Institute and CPO Accelerator. I am a board member of Meister, a board advisor to Labster and Dragonboat, and a former board member of Forsta (acquired by Press Ganey in 2022). Previously, I taught Product Management in the MBA program at Harvard Business School. I’ve consulted with dozens of companies to transform their product organizations, including Insight Partners, Capital One, Vanguard, and Walmart/Sam’s Club. I am an international keynote speaker and the host of the Product Thinking Podcast.

https://linkedin.com/in/melissajeanperri