Louise Cantrill (Head of Consultancy at Tarigo and one of the sponsors of the Cambridge Product Management Network) writes an excellent article on the various Methods to Prioritise User Stories.
All prioritization techniques attempt to rate all the possible projects. Doing the high value / low cost projects is a no-brainer, of course, and you don't need a fancy model to tell these big wins apart from the rest. It's what happens after you've done all the obvious ones that gets hard.
These techniques are supposed to compare big scary projects that are high risk and tie up lots of resources, but create exciting new opportunities, against lots of small improvements that are (relatively) quick to realise, benefit a sub-set of customers or make small, incremental improvements in consistency (for example).
All of these techniques have their advantages, and each will give you a ranked order of projects, coloured by the biases that the technique injects. I have developed my own prioritization technique - see Feature Prioritization and Roadmapping.
Whatever the results, you can treat them as militant instructions or as mildly useful recommendations. Nothing trumps a CEO who has just had a berating from a key customer, or a dreamy conversation with a visionary at a conference!
1. RICE Scoring
- Reach: the percentage of users or customers who will be affected by the feature or initiative.
- Impact: an assessment of the potential value or benefit that the project or feature will bring to the users or the organization.
- Confidence: the team's level of certainty in the Reach and Impact estimates.
- Effort: the resources, time, and complexity required to implement the project or feature. It considers factors like development time, design work, testing, and any other resources needed. Effort is typically scored numerically, such as a time estimate in hours or days.
RICE score for a particular concept = (Reach x Impact x Confidence) / Effort
I like this approach because: it appreciates that some features only benefit some customers. Without this, product development has a bias towards only doing features for the squeakiest wheel, which is often the biggest customer.
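To make the formula concrete, here is a minimal Python sketch of RICE scoring and ranking. The feature names and scores are purely hypothetical, and the scales used (reach as a percentage, impact 0.25-3, confidence 0-1, effort in person-weeks) are common conventions rather than fixed rules:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # % of customers affected (0-100)
    impact: float      # e.g. 0.25 = minimal ... 3 = massive
    confidence: float  # 0-1: certainty in the Reach/Impact estimates
    effort: float      # person-weeks (any consistent unit works)

    def rice(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

backlog = [
    Feature("SSO login", reach=80, impact=1.0, confidence=0.8, effort=4),
    Feature("Dark mode", reach=30, impact=0.5, confidence=1.0, effort=1),
]
for f in sorted(backlog, key=lambda f: f.rice(), reverse=True):
    print(f"{f.name}: {f.rice():.1f}")
```

Note how dividing by Effort lets a cheap, narrow feature compete with an expensive, broad one - exactly the squeaky-wheel correction described above.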
2. MoSCoW Method
- Must have: user stories that are essential for the project's success.
- Should have: important stories, but not critical for the project's immediate success.
- Could have: desirable stories, but not necessary.
- Won't have: stories that will not be included in the current iteration.
This method helps teams categorize and focus on critical user stories while allowing flexibility for less vital ones.
I like this approach because: it is great for a first pass at prioritization. At any point in time, there are some user stories which simply NEED to get done: these could be hygiene features, or features that your biggest customer has demanded asap prior to renewal.
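Because MoSCoW is just categorisation, a first pass can be as simple as dropping stories into four buckets. A minimal sketch with entirely hypothetical stories:

```python
from collections import defaultdict

MOSCOW = ["Must", "Should", "Could", "Won't"]

# Hypothetical backlog: (story, category) pairs from a triage session
stories = [
    ("GDPR data export", "Must"),
    ("Bulk CSV import", "Should"),
    ("Custom themes", "Could"),
    ("Legacy API v1 support", "Won't"),
    ("SSO demanded by biggest customer", "Must"),
]

buckets = defaultdict(list)
for story, category in stories:
    buckets[category].append(story)

# Work the buckets in priority order; 'Won't' is parked for this iteration.
for category in MOSCOW:
    print(f"{category} have: {', '.join(buckets[category]) or '-'}")
```

The 'Must have' bucket still needs ordering by another technique; MoSCoW only flushes it out of the noise.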
3. Weighted Shortest Job First (WSJF)
This prioritization technique is used in Agile and Lean software development, particularly within the context of the Scaled Agile Framework (SAFe).
WSJF is calculated using the following formula:
WSJF = (CoD + CoR) / Job Size
- CoD (Cost of Delay): the business value or impact that a particular user story or feature will deliver. It measures how much financial loss or opportunity gain would be associated with delaying the implementation of that item.
- CoR (Cost of the job in Progress): the ongoing cost incurred while the job is in progress. It considers aspects such as the operational costs or maintenance expenses that the organization bears until the task is completed.
- Job Size: the size or effort required to complete the user story or feature. It can be measured in various ways, such as story points in Agile development.
WSJF scoring focuses on end value, the pain (which increases over time) of waiting for the product feature, and the effort needed.
I like it because: it recognises that there is a cost of NOT doing a feature: inefficiency or lost revenue (or an ear-bashing from a customer) whilst the feature isn't in the product.
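A quick sketch of the formula above in Python, using the article's (CoD + CoR) / Job Size form. The jobs and their scores are hypothetical, scored on a Fibonacci-style relative scale:

```python
def wsjf(cod: float, cor: float, job_size: float) -> float:
    # WSJF = (CoD + CoR) / Job Size
    return (cod + cor) / job_size

# Hypothetical backlog items with relative (Fibonacci-style) scores
jobs = {
    "Billing engine rewrite": wsjf(cod=8, cor=5, job_size=13),
    "Checkout copy tweak": wsjf(cod=3, cor=1, job_size=2),
}
ranked = sorted(jobs, key=jobs.get, reverse=True)
print(ranked)
```

Note the characteristic WSJF behaviour: the small tweak outranks the big rewrite, because dividing by Job Size favours short jobs whose delay still hurts.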
4. Kano Model
Features are categorised based on whether customers positively or negatively value the enhancement.
- Basic Needs (Must-Have): the essential features that customers expect as a minimum requirement, i.e. Hygiene Features. Without these features, customer dissatisfaction is almost guaranteed. However, fulfilling these requirements typically does not lead to increased customer satisfaction.
- Performance Needs (Linear): features that correlate directly with customer satisfaction. As the performance of these features improves, customer satisfaction increases. These features are often explicitly stated by customers.
- Excitement Needs (Delighters): unexpected or surprising features that have the potential to delight customers. When these features are present, they can significantly boost customer satisfaction, but their absence doesn't necessarily lead to dissatisfaction. Customers often do not express these desires because they haven't experienced them before. These are your Differentiating Features.
- Indifferent Needs (Neutral): the presence or absence of these features doesn't significantly affect overall satisfaction.
- Reverse Needs (Unwanted): features that, when present, can lead to customer dissatisfaction.
I like it because: sometimes you need to build features that have sizzle, even if they have limited utility for your user base. As I write this in Autumn 2023, every company has to have an AI strategy, regardless of whether their market has a use for it.
Conversely, many new product managers think that adding more functionality inevitably adds value to a product. NO!! When the quality of a new feature is lower than the rest of the product, it detracts. When a feature is simply odd when the product is considered as a portfolio of features, it detracts because it confuses the user.
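The article doesn't cover how features get assigned to Kano categories; in practice this is usually done with the classic two-question Kano survey (how would you feel if the feature WAS present / was NOT present?). A minimal sketch of the standard evaluation table, mapped onto the category names above:

```python
# Possible answers to the functional question ("feature present?")
# and the dysfunctional question ("feature absent?").
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify one survey answer pair via the standard Kano evaluation table."""
    f, d = ANSWERS.index(functional), ANSWERS.index(dysfunctional)
    if f == d == 0 or f == d == 4:
        return "Questionable"   # contradictory answers, discard
    if f == 0:
        return "Performance" if d == 4 else "Excitement"
    if f == 4 or d == 0:
        return "Reverse"        # its presence actively annoys customers
    if d == 4:
        return "Basic"          # its absence guarantees dissatisfaction
    return "Indifferent"

print(kano_category("like", "dislike"))     # Performance
print(kano_category("neutral", "dislike"))  # Basic
print(kano_category("like", "neutral"))     # Excitement
```

In a real survey you would tally these categories across many respondents and take the dominant one per feature.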
5. Value vs Effort Matrix
High Value + Low Effort are immediate candidates. Low Value + High Effort will never be done.
What's left are all the features that require more head scratching. This technique can help: it will absolutely NOT provide a prioritised list, but it will definitely uncover some assumptions and biases - see my article for more commentary.
I like it because: well, when serious prioritisation is required, this is the tool that I pull out of the bag.
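The quadrant logic above is easy to sketch. The threshold, scales and feature names here are hypothetical choices for illustration, not part of the technique itself:

```python
def quadrant(value: float, effort: float, threshold: float = 5) -> str:
    """Place a feature on the 2x2 matrix; scores assumed on a 1-10 scale."""
    if value >= threshold and effort < threshold:
        return "quick win"   # immediate candidate
    if value < threshold and effort >= threshold:
        return "money pit"   # will never be done
    if value >= threshold and effort >= threshold:
        return "big bet"     # needs more head scratching
    return "filler"          # low value, low effort

# Hypothetical features as (value, effort) scores
features = {"SSO login": (8, 3), "Full rewrite": (4, 9), "New platform": (9, 9)}
for name, (v, e) in features.items():
    print(f"{name}: {quadrant(v, e)}")
```

The two interesting quadrants are precisely the ones the function cannot decide for you: 'big bet' and 'filler' are where the assumptions and biases surface.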
Conclusion - what do I really use??
Well, it's a blend of many of the above:
- MoSCoW flushes out the hygiene features.
- The time-cost of NOT having a feature is the next most important consideration.
- Ongoing investment in the Linear Features (from the Kano Model) with due regard to continuously topping up investment in Differentiating Features.
- Use Ring Fence Development (on Page 3 of my Feature Prioritisation article) to make sure you have a reasonable balance between bug fixes, hygiene features, linear development and big innovation bets.
- Once a year, do a product portfolio analysis (using Feature Prioritization and Roadmapping technique) to benchmark all significant future projects against each other. For product enhancements that are requested after the annual review, measure them against a couple of existing projects that have already been benchmarked.
RICE, WSJF and Value vs Effort all require heavy-duty future predictions and detailed analysis using difficult-to-find information from other parts of the business.
They are only really worthwhile for large product decisions whose features / technology dependencies are relatively independent of each other.
MoSCoW and Kano use simple categorisation to drop feature requests into buckets. Ring Fence Development (on Page 3 of my Feature Prioritisation article) permits the product to move forward on a number of fronts, each effectively having their own backlog.