Of Chickens, Eggs and PMs

Sefi Keller
Published in Product Coalition
Nov 19, 2019 · 4 min read


Here’s a scenario you might be familiar with. Say you’re starting a new role as a PM at a startup (congrats!). You find that the startup has built a robust product with many different features and capabilities. Trying to gain some clarity and focus, you ask: “Which of these features are most and least used?” Turns out, nobody knows. So you implement basic analytics mechanisms, wait a week or two, and then come back to the team equipped with a bunch of fancy graphs.
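For the engineering-minded, here’s a minimal sketch of what those “basic analytics mechanisms” could look like: log one event per feature interaction, then count usage per feature. The names here (log_feature_event, events.jsonl) are made up for illustration, not a reference to any particular analytics tool.

```python
import json
import time
from collections import Counter

EVENT_LOG = "events.jsonl"  # illustrative local log; a real setup would use an analytics service

def log_feature_event(user_id, feature):
    """Append one feature-usage event to a local log file."""
    event = {"ts": time.time(), "user_id": user_id, "feature": feature}
    with open(EVENT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def usage_by_feature(path=EVENT_LOG):
    """Count how many times each feature was used."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts[json.loads(line)["feature"]] += 1
    return counts

# Example: log a few events, then see which features are most/least used.
log_feature_event("user-1", "suggestions")
log_feature_event("user-2", "wishlist")
log_feature_event("user-1", "wishlist")
print(usage_by_feature().most_common())  # e.g. [('wishlist', 2), ('suggestions', 1)]
```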

“I’m sorry to say it, but it seems like many of our features are almost never used,” you say as the team looks at you with confusion and disappointment. You suggest pulling the plug on the rarely used features and focusing on the ones users seem to like. “Wait,” says one team member, “isn’t that a chicken-and-egg kind of situation? I mean, the features won’t be used if we don’t work to improve them, and we won’t work to improve them if they’re not used.” Interesting question, right? So, how did we get here, and how do we get out?

As we all know, the agile approach was meant to improve on the old waterfall approach and make us more efficient. According to agile, we’re supposed to (1) take the big waterfall-style batches and break them into many smaller ones; and (2) introduce “inspect and adapt” mechanisms between batches. The first piece of advice was widely adopted. The second, however, never became common practice. Chances are, the startup in the scenario above worked in two-week sprints but never stopped once to “inspect and adapt.” I’ll try to describe how inspecting and adapting, when done properly, makes the chicken-and-egg question redundant.

So what exactly should we inspect after each sprint? Feature adoption, usage and retention are the obvious answers, but looking only at those figures will lead us straight into the jaws of the chicken-and-egg problem. To inspect and adapt effectively, we should examine feature performance in light of two additional aspects: the business objective and our belief system.

Every feature should aim to serve one or more business objectives. When Amazon suggests items you might be interested in, it aims to increase average order value. Once you decide which objective you’re trying to achieve, you can set a specific goal for it (e.g. increase average order value by 20%). Once you’re set on an objective and a specific goal, you can estimate the size of the opportunity (e.g. $10M annually). At the end of each sprint you should be able to ask yourself: Did the business objective move at all? How close are we to our goal? Is the opportunity big enough to justify another iteration, or should we move on?
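To make that opportunity-sizing arithmetic concrete, here’s a rough back-of-the-envelope sketch. The order volume and order value are hypothetical numbers, chosen only so the math lands on the $10M figure above.

```python
def opportunity_size(annual_orders, avg_order_value, target_lift):
    """Rough annual revenue opportunity from lifting average order value."""
    return annual_orders * avg_order_value * target_lift

# Hypothetical inputs: 1M orders/year at a $50 average order value,
# with a goal of lifting average order value by 20%.
print(opportunity_size(1_000_000, 50.0, 0.20))  # -> 10000000.0, i.e. ~$10M annually
```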

Whenever we decide to build something, we base it on our belief system, whether we’re aware of it or not. For example, when Amazon decided to build the “you might also like” feature, they probably believed that (1) they can predict what items users would want to buy based on their data; and (2) users are spontaneous enough to add unplanned items to their cart. These beliefs provide the framework needed to evaluate usage data and decide on next steps. Revisiting your beliefs will often lead you to the conclusion that the best way to hit your goal is not to improve the feature you’ve built, but to take a completely different approach. That’s what adapting is all about.

Now, let’s tie it all together. Say Amazon launched the “items you might be interested in” feature. A month later, the relevant PM at Amazon looks at the numbers and sees that very few people actually interact with the suggestions made to them. This is where the chicken-and-egg problem should arise, right? Users won’t interact with suggestions unless we improve them, and we won’t improve them unless users interact with them, right? Not today. Amazon’s talented PM asks herself: is what we’re trying to achieve here worth another effort? If the answer is “no,” we’ve already solved the chicken-and-egg problem, but let’s say the business objective is worth another go. The PM now looks at the data and revisits her belief system. It turns out users do end up buying items that were suggested to them, just not at the time the suggestion was made. That data supports belief #1 and weakens belief #2. The PM might deduce that the team shouldn’t spend its time improving the content of the suggestions, but rather the way they are presented to users.
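What could “revisiting beliefs with data” look like in practice? Here’s one hedged sketch, assuming you already log suggestion events and purchase events (the field names are illustrative, not a real schema): count how often a suggested item gets bought later, even though the suggestion itself was never clicked.

```python
from datetime import timedelta

def delayed_conversions(suggestions, purchases, window=timedelta(days=7)):
    """Count suggestions that were never clicked, but whose item the same
    user ended up buying within `window` of seeing the suggestion."""
    # Index purchases by (user, item) for quick lookup.
    purchased = {}
    for p in purchases:
        purchased.setdefault((p["user_id"], p["item_id"]), []).append(p["ts"])

    delayed = 0
    for s in suggestions:
        if s["clicked"]:
            continue  # the user engaged with the suggestion itself, so not "delayed"
        for ts in purchased.get((s["user_id"], s["item_id"]), []):
            if s["ts"] < ts <= s["ts"] + window:
                delayed += 1
                break
    return delayed
```

A high delayed-conversion count would support belief #1 (the predictions are good) while weakening belief #2 (users act on suggestions in the moment), which is exactly the reasoning the PM follows above.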

I hope that clarifies how introducing business objectives and a belief system into your inspect-and-adapt phase solves the chicken-and-egg problem. Whenever someone says “maybe users will use the feature more if we improve it?”, you can ask back, “they might, but is the business objective we’re trying to achieve here worth the effort?” If the answer is yes, you can then ask, “what assumptions did we have when we built this feature, and how do those assumptions fare in light of our current usage data?” The answers to those two questions will lead you out of the dark, endless tunnel of “maybe it just needs another sprint of work?”.
