# ICE Scoring

## Requirements

- List of features to prioritize with any available context
To run this task you must have the following required information:
> List of features to prioritize with any available context
If you don't have all of this information, exit here and respond asking for the missing items, along with instructions to run this task again with ALL required information.
---
You MUST use a todo list to complete these steps in order. Never move on to a step until you have completed the previous one. If you have multiple read steps in a row, read them all at once (in parallel).
Add all steps to your todo list now and begin executing.
## Steps
1. [Read Feature Prioritization Guide]: Read the documentation in: `./skills/sauna/[skill_id]/references/prioritization.framework.guide.md`
2. [Read Feature Context Template]: Read the documentation in: `./skills/sauna/[skill_id]/references/prioritization.feature.template.md` (Reference for what good feature context looks like)
3. Review the feature list. For each feature, check if you have minimum viable context:
- Who is affected?
- What's the impact level?
- What's the effort ballpark?
If any feature lacks this context, stop and ask for it. Point to specific gaps.
Reference `./skills/sauna/[skill_id]/references/prioritization.data.gathering.md` for methods to gather missing data.
4. For each feature, score ICE components (1-10 scale):
IMPACT (How much will this move the needle?)
- 10 = Transformative, 10x improvement
- 7-9 = Major positive impact
- 4-6 = Moderate impact
- 1-3 = Minor impact
CONFIDENCE (How sure are we this will work?)
- 10 = Data-backed certainty
- 7-9 = Strong hypothesis with supporting evidence
- 4-6 = Reasonable assumption
- 1-3 = Educated guess
EASE (How easy is this to implement?)
- 10 = Trivial, hours of work
- 7-9 = Straightforward, days of work
- 4-6 = Moderate complexity, weeks
- 1-3 = Complex, months
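To make the rubric concrete, here is a minimal sketch of one way scored features could be recorded before calculation. The feature names, the scores, and the `scored_features` structure itself are hypothetical, chosen only to illustrate the 1-10 bands above.

```python
# Hypothetical features scored against the 1-10 rubric above.
# Names and numbers are illustrative, not real data.
scored_features = [
    # Major impact (8), supporting evidence (7), days of work (7)
    {"feature": "Bulk CSV export", "impact": 8, "confidence": 7, "ease": 7},
    # Potentially transformative (9), but an educated guess (3) and months of complex work (2)
    {"feature": "AI-powered assistant", "impact": 9, "confidence": 3, "ease": 2},
    # Moderate impact (5), reasonable assumption (6), days of work (7)
    {"feature": "Dark mode", "impact": 5, "confidence": 6, "ease": 7},
]
```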
5. Validate inputs before calculating:
- If all features score 8+ on every dimension, recalibrate. Nothing is truly excellent on all three axes—this suggests scoring inflation. Ask: "Compared to the hardest thing you've shipped, is this really that easy?"
- Flag suspiciously high Ease scores for complex features. "AI-powered assistant" at Ease=9 should raise questions.
- If Confidence is below 4 for most features, suggest gathering more data before prioritizing.
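A minimal sketch of these sanity checks, assuming scores live in the hypothetical `scored_features` structure shown under step 4. The thresholds mirror the rules above; judging whether a feature is genuinely complex still rests with the reviewer.

```python
def validate_scores(scored_features):
    """Flag likely scoring problems before ICE scores are calculated (sketch)."""
    warnings = []

    # Everything at 8+ on every dimension suggests scoring inflation.
    if scored_features and all(
        min(f["impact"], f["confidence"], f["ease"]) >= 8 for f in scored_features
    ):
        warnings.append(
            "All features score 8+ on every dimension; recalibrate against "
            "the hardest thing you've shipped."
        )

    # Very high Ease is worth questioning, especially on complex features.
    for f in scored_features:
        if f["ease"] >= 9:
            warnings.append(
                f"'{f['feature']}' has Ease={f['ease']}; confirm it is really trivial."
            )

    # Low Confidence across most of the list means prioritizing is premature.
    low_confidence = sum(1 for f in scored_features if f["confidence"] < 4)
    if low_confidence > len(scored_features) / 2:
        warnings.append(
            "Most features have Confidence below 4; gather more data first."
        )

    return warnings
```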
6. Calculate ICE scores:
ICE = (Impact + Confidence + Ease) / 3
Present as a ranked table:
| Feature | Impact | Confidence | Ease | ICE Score | Rank |
Order rows from highest to lowest ICE score.
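A minimal sketch of the calculation and table output, again assuming the hypothetical `scored_features` structure from step 4.

```python
def ice_table(scored_features):
    """Compute ICE = (Impact + Confidence + Ease) / 3 and print a ranked Markdown table."""
    ranked = sorted(
        scored_features,
        key=lambda f: (f["impact"] + f["confidence"] + f["ease"]) / 3,
        reverse=True,
    )
    print("| Feature | Impact | Confidence | Ease | ICE Score | Rank |")
    print("|---|---|---|---|---|---|")
    for rank, f in enumerate(ranked, start=1):
        ice = (f["impact"] + f["confidence"] + f["ease"]) / 3
        print(
            f"| {f['feature']} | {f['impact']} | {f['confidence']} "
            f"| {f['ease']} | {ice:.1f} | {rank} |"
        )
```

With the example scores above, "Bulk CSV export" ranks first at roughly 7.3, while "AI-powered assistant" drops to about 4.7 despite its higher Impact, which is exactly the kind of gap step 7 asks you to call out.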
7. Provide analysis:
- Compare to RICE if reach and effort estimates are available
- Note high-impact items held back by low confidence
- Identify experiments that could increase confidence
- Highlight easy wins vs strategic bets
ICE is simpler than RICE—appropriate for faster decisions or when reach is hard to estimate.