Comparing The Top Five Prioritization Methods

Flow Bohl · Published in Product Coalition · Feb 5, 2022 · 3 min read

Many product managers, including me, don’t always know if their work will lead to the desired outcome. In these moments it’s helpful to gauge the sentiment of internal stakeholders and customers to prioritize a messy backlog. The following are five popular prioritization methods and my view of their usefulness in varying situations.

Chart: comparing the five prioritization methods

1. RICE

Brief summary

The model uses four simple metrics based on personal estimates: Reach (number of users affected), Impact (3, 2, 1 or 0.5), Confidence (0% to 100%) and Effort (total person-weeks invested). The final score is calculated by multiplying Reach × Impact × Confidence and dividing the result by Effort.
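
As a rough illustration, here's a minimal Python sketch of that calculation. The function name and example values are my own, not from any specific tool:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort.

    reach: users affected per period; impact: 3, 2, 1 or 0.5;
    confidence: 0.0 to 1.0; effort: person-weeks (must be > 0).
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Example: 500 users reached, high impact, 80% confidence, 4 person-weeks
print(rice_score(500, 2, 0.8, 4))  # 200.0
```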

My view

My favorite model, because it produces a single number that can be calculated for each feature individually. With other prioritization methods you need the benchmark of other features to assess relative value. That said, it's sometimes difficult to agree on the definition of each metric, so scores may vary over time.

2. Value vs. Complexity

Brief summary

This easy-to-use model scores business value and technical complexity from high to low, based on the personal assessment of the participants. Value is divided by complexity to get the relative priority.
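
A minimal sketch of that ranking, assuming participants score both dimensions on a 1 to 10 scale (the scale and the backlog items are illustrative assumptions):

```python
# Each entry holds the participants' consensus scores (1-10, illustrative).
backlog = [
    {"item": "Dark mode", "value": 4, "complexity": 2},
    {"item": "SSO login", "value": 9, "complexity": 6},
    {"item": "CSV export", "value": 6, "complexity": 3},
]

# Relative priority: value per unit of complexity, highest first.
for entry in sorted(backlog, key=lambda e: e["value"] / e["complexity"], reverse=True):
    print(f"{entry['item']}: {entry['value'] / entry['complexity']:.2f}")
```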

My view

Although it's not my favorite, I've used value vs. complexity the most because of its simplicity. Since each item is scored relative to the other items on the list, two items rarely receive the same score. That's great for ranking, but not so useful when you just need to score an individual item to add to a long list.

3. Kano Model

Brief summary

Each item to be prioritized is rated on two dimensions: customer satisfaction and functionality. On satisfaction, the score goes from worst or “frustrated”, through “indifferent”, to best or “delighted”. On functionality, the score goes from basic or “expected”, through “normal”, to attractive or “exciting”.
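
One way to make the two dimensions concrete is a small lookup that buckets items into Kano-style categories. The mapping below is my own simplification for illustration, not a canonical Kano implementation:

```python
def kano_bucket(satisfaction: str, functionality: str) -> str:
    """satisfaction: 'frustrated' | 'indifferent' | 'delighted'
    functionality: 'expected' | 'normal' | 'exciting'"""
    if functionality == "expected":
        # Users take these for granted: absence frustrates, presence barely delights.
        return "basic (must-be)"
    if satisfaction == "delighted" and functionality == "exciting":
        return "delighter"
    if satisfaction == "indifferent":
        return "indifferent"
    # Otherwise satisfaction scales with how well the feature performs.
    return "performance"

print(kano_bucket("delighted", "exciting"))  # delighter
print(kano_bucket("delighted", "normal"))    # performance
```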

My view

This model uses real customer insight to arrive at each score, which is best practice for data-driven product decision making. The aim is to remove internal company bias about what is “useful” to users. However, the score is still somewhat subjective, since it depends on each user's interpretation of the feature in question. Theoretically, a large sample size is needed to limit that subjectivity. This can be quite time consuming and isn't feasible in the everyday life of a product manager.

4. MoSCoW

Brief summary

This model creates four simple lists, where each item is placed into one of: must have, should have, could have or won't have. Won't-have items are clearly out of scope.
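
A minimal sketch of that bucketing; the backlog items and labels below are invented examples:

```python
from collections import defaultdict

# (item, MoSCoW label) pairs: invented examples for illustration.
backlog = [
    ("User login", "must have"),
    ("Password reset", "must have"),
    ("Email digests", "should have"),
    ("Profile avatars", "could have"),
    ("Gamification badges", "won't have"),
]

buckets = defaultdict(list)
for item, label in backlog:
    buckets[label].append(item)

for label in ("must have", "should have", "could have", "won't have"):
    print(f"{label}: {buckets[label]}")
```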

My view

This is a good model to show participants which items are out of scope, and to document where each item stands during a discovery phase. Over time, the perception of what was once considered a “could have” might change. The model doesn't help to quantify an item's position within each category, but it's good enough for me to get an overview of what an MVP could look like.

5. ICE Scoring Model

Brief summary

Similar to RICE, but much easier to use, this model scores Impact, Confidence and Ease on a scale from 1 to 10; higher is better.
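
A minimal sketch, assuming the common convention of multiplying the three scores together (some teams average them instead):

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Each input is 1-10, higher is better. Multiplying the three
    scores is a common convention; some teams average them instead."""
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10")
    return impact * confidence * ease

print(ice_score(8, 6, 7))  # 336
```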

My view

ICE is certainly the model with the best name, and it's great for a quick prioritization session, but it lacks a bit of objectivity compared to the other models.

Prioritizing doesn't always require rigid methods or frameworks, but I've found that collaborating on prioritization across stakeholders and departments increases transparency about what happens when. It also helps non-product team members understand why some features are released later, or even never.
