# Feature Prioritization Guide
## When to Use Which Framework
| Framework | Best For | Strengths | Weaknesses |
|---|---|---|---|
| RICE | Data-driven teams, growth features | Quantitative, forces rigor | Requires reach data |
| ICE | Rapid prioritization, early-stage | Simple, fast | More subjective |
| MoSCoW | Release planning, stakeholder alignment | Clear communication | Doesn't rank within tiers |
| Kano | Understanding user delight | Captures non-obvious value | Requires user research |
| Value vs Effort | Quick decisions, workshops | Visual, intuitive | Oversimplifies |
## RICE Framework
RICE = (Reach × Impact × Confidence) / Effort
### Reach
The number of users affected per time period (usually per quarter).
Sources:
- Product analytics (current feature usage)
- Funnel data (users at that stage)
- Market research (potential users)
If uncertain, use a conservative estimate and note the low confidence.
### Impact
| Score | Meaning | Example |
|---|---|---|
| 3 | Massive | 10x improvement, new capability |
| 2 | High | Significant pain removed |
| 1 | Medium | Noticeable improvement |
| 0.5 | Low | Nice to have |
| 0.25 | Minimal | Polish |
### Confidence
| Score | Meaning |
|---|---|
| 100% | Data-backed, validated |
| 80% | Strong evidence |
| 50% | Reasonable guess |
Default to 50% if you're not sure. This naturally penalizes features with high uncertainty.
### Effort
Person-months of work (all disciplines).
Common mistake: Underestimating. Include:
- Engineering
- Design
- QA
- Documentation
- Marketing/launch support
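Putting the four inputs together, here is a minimal RICE calculator in Python. The feature names and numbers are hypothetical, chosen to show how a 50% confidence score drags down an otherwise strong candidate:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach:      users affected per quarter
    impact:     3, 2, 1, 0.5, or 0.25 (see the impact table above)
    confidence: 1.0, 0.8, or 0.5
    effort:     person-months across all disciplines
    """
    return (reach * impact * confidence) / effort

# Hypothetical features, for illustration only.
features = {
    "dark mode":    rice_score(reach=6000, impact=0.5, confidence=1.0, effort=1),
    "ai assistant": rice_score(reach=9000, impact=2,   confidence=0.5, effort=8),
    "bulk export":  rice_score(reach=4000, impact=1,   confidence=0.8, effort=2),
}
for name, score in sorted(features.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
# dark mode: 3000, bulk export: 1600, ai assistant: 1125
```

Note how the AI assistant has the largest reach and impact, yet 50% confidence and eight person-months of effort drop it to last place.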
## ICE Framework
ICE = (Impact + Confidence + Ease) / 3
Simpler than RICE; good for:
- Fast prioritization
- When reach is hard to estimate
- Early-stage products
Each dimension scored 1-10.
### Impact (1-10)
How much will this move the metric you care about?
### Confidence (1-10)
How sure are you that the impact estimate is right?
### Ease (1-10)
How easy is this to implement? The inverse of effort.
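A minimal sketch of the same idea, following this guide's averaging variant (some teams multiply the three dimensions instead; the scores below are hypothetical):

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE = (Impact + Confidence + Ease) / 3, each dimension scored 1-10."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("each dimension must be scored 1-10")
    return (impact + confidence + ease) / 3

# Hypothetical: high impact, but uncertain and hard to build.
print(f"{ice_score(impact=9, confidence=4, ease=3):.1f}")  # 5.3
```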
## Value vs Effort Matrix
| | Low Value | High Value |
|---|---|---|
| **Low Effort** | Fill-ins / Avoid | Do Now |
| **High Effort** | Don't | Consider |
- Do Now: high value, low effort (quick wins)
- Consider: high value, high effort (strategic bets)
- Don't: low value, high effort (waste)
- Avoid / Fill-ins: low value, low effort (only if nothing else)
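The matrix is easy to automate once items have rough value and effort scores. A sketch, where the 1-10 scale and the midpoint threshold are illustrative assumptions:

```python
def quadrant(value: float, effort: float, threshold: float = 5.0) -> str:
    """Map a feature's 1-10 value/effort scores onto the 2x2 matrix."""
    high_value = value >= threshold
    high_effort = effort >= threshold
    if high_value and not high_effort:
        return "Do Now (quick win)"
    if high_value and high_effort:
        return "Consider (strategic bet)"
    if high_effort:
        return "Don't (waste)"
    return "Fill-in (only if nothing else)"

print(quadrant(value=8, effort=2))  # Do Now (quick win)
```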
## MoSCoW Method
- Must Have: Non-negotiable for release
- Should Have: Important but not critical
- Could Have: Nice to have
- Won't Have: Explicitly out of scope
Good for communicating with stakeholders, less useful for ranking within tiers.
## Common Prioritization Mistakes
### Overweighting Recent Feedback
One loud customer ≠ common need. Check the data.
### Ignoring Confidence
High-impact features with low confidence should be deprioritized or validated first.
### HiPPO (Highest Paid Person's Opinion)
Senior opinions should inform the framework's inputs, not override its output.
### Sunk Cost
"We already started" isn't a reason to finish something low-priority.
### Ignoring Opportunity Cost
Every feature you build means something else you're not building.
## Improving Estimates
### For Reach
- Look at funnel data
- Check similar feature usage
- Survey users on pain points
- Start with "what % of users would use this?"
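If all you have is a top-down guess, the last question above translates directly into code. A sketch with hypothetical numbers; prefer funnel data when you have it:

```python
def estimated_reach(total_users: int, adoption_rate: float) -> int:
    """Rough reach per quarter from 'what % of users would use this?'."""
    return round(total_users * adoption_rate)

# Hypothetical: 20,000 active users, ~15% expected adoption.
print(estimated_reach(total_users=20_000, adoption_rate=0.15))  # 3000
```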
### For Impact
- Find proxy metrics from other features
- Run fake door tests
- Check competitor solutions
- Talk to users
### For Effort
- Get engineering estimates from more than one engineer
- Add buffer for unknowns
- Check similar past projects
- Include all work, not just coding
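One way to combine those practices: average several engineers' estimates, then pad for unknowns. Both the averaging and the 30% buffer below are illustrative conventions, not fixed rules; calibrate the buffer against similar past projects:

```python
def effort_estimate(per_engineer_months: list[float], buffer: float = 1.3) -> float:
    """Average multiple estimates (in person-months) and add a buffer for unknowns."""
    if not per_engineer_months:
        raise ValueError("need at least one estimate")
    average = sum(per_engineer_months) / len(per_engineer_months)
    return average * buffer

# Three hypothetical engineering estimates, in person-months.
print(effort_estimate([2.0, 4.0, 3.0]))  # ~3.9
```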
### For Confidence
- Be honest about uncertainty
- Default to 50% if you're guessing
- Note what would increase confidence
- Identify cheap experiments to validate
## Using Prioritization Output
Prioritization frameworks inform decisions; they don't make them.
After scoring:
- Review surprising results: are the inputs wrong?
- Consider strategic factors not captured
- Look for quick wins (high score, low effort)
- Identify validation opportunities for low-confidence items
- Communicate rankings AND reasoning
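For example, rather than building straight down the ranked list, you can flag low-confidence items for validation first. A sketch reusing the hypothetical RICE scores from earlier:

```python
# Hypothetical scored backlog: (name, rice_score, confidence).
backlog = [
    ("bulk export",  1600, 0.8),
    ("ai assistant", 1125, 0.5),
    ("dark mode",    3000, 1.0),
]

# Rank by score, but surface low-confidence items for cheap validation
# (fake door tests, user interviews) before committing to build them.
for name, score, confidence in sorted(backlog, key=lambda f: f[1], reverse=True):
    flag = "  <- validate first (confidence <= 50%)" if confidence <= 0.5 else ""
    print(f"{score:>5}  {name}{flag}")
```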