# RICE Scoring
To run this task, you must have the following required information:
> List of features to prioritize, with any available context

If you don't have all of this information, stop here and respond with a request for the missing information, plus instructions to run this task again with ALL required information.
---
You MUST use a todo list to complete these steps in order. Never move on to the next step until you have completed the previous one. If you have multiple read steps in a row, perform all of those reads at once (in parallel).
Add all steps to your todo list now and begin executing.
## Steps
1. [Read Feature Prioritization Guide]: Read the documentation in: `./skills/sauna/[skill_id]/references/prioritization.framework.guide.md`
2. [Read Feature Context Template]: Read the documentation in: `./skills/sauna/[skill_id]/references/prioritization.feature.template.md` (Reference for what good feature context looks like)
3. Review the feature list. For each feature, check if you have minimum viable context:
- Who is affected?
- How many users (rough count)?
- What's the impact level?
- What's the effort ballpark?
If any feature lacks this context, stop and ask for it. Point to specific gaps.
Reference `./skills/sauna/[skill_id]/references/prioritization.data.gathering.md` for methods to gather missing data.
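If the features arrive as structured records, this completeness check can be mechanical. A minimal sketch in Python, assuming features are supplied as dicts; the field names are hypothetical, not part of the task spec:

```python
# Hypothetical field names for the minimum viable context; adjust to match
# however the feature list is actually supplied.
REQUIRED_CONTEXT = ("affected_users", "user_count_estimate", "impact_level", "effort_ballpark")

def missing_context(feature: dict) -> list[str]:
    """Return the required context fields that are absent or empty."""
    return [f for f in REQUIRED_CONTEXT if feature.get(f) in (None, "")]

feature = {"name": "Dark mode", "affected_users": "all active users"}
gaps = missing_context(feature)
if gaps:
    print(f"{feature['name']}: missing {', '.join(gaps)}; stop and ask for these")
```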
4. For each feature, gather or estimate RICE components:
REACH (How many people will this impact?)
- Users/customers affected per time period
- Be specific: "500 users/month" not "a lot"
IMPACT (How much will it impact each person?)
- 3 = Massive (game-changer)
- 2 = High (significant improvement)
- 1 = Medium (noticeable)
- 0.5 = Low (minor)
- 0.25 = Minimal (barely noticeable)
CONFIDENCE (How sure are we about these estimates?)
- 100% = High confidence, data-backed
- 80% = Medium confidence, strong assumptions
- 50% = Low confidence, educated guess
EFFORT (How much work in person-months?)
- Engineering, design, QA, etc.
- Be honest about hidden complexity
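To keep estimates on the scales above, it helps to record them in a structure that names each component and its units. A sketch, again in Python; `RiceInput` and the example feature are illustrative only:

```python
from dataclasses import dataclass

# Legal values from the scales above. Confidence is stored as a decimal
# (80% -> 0.8) so it can multiply directly into the score later.
IMPACT_SCALE = {3.0, 2.0, 1.0, 0.5, 0.25}
CONFIDENCE_SCALE = {1.0, 0.8, 0.5}

@dataclass
class RiceInput:
    name: str
    reach: float       # users/customers affected per time period, e.g. 500/month
    impact: float      # one of IMPACT_SCALE
    confidence: float  # one of CONFIDENCE_SCALE
    effort: float      # person-months across engineering, design, QA

dark_mode = RiceInput("Dark mode", reach=500, impact=0.5, confidence=0.8, effort=1.0)
```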
5. Validate inputs before calculating:
- Effort must be greater than zero. If someone says "zero effort," ask what's really involved; even a config change has some cost.
- If Reach is zero, confirm this is intentional (internal tool, future users, etc.). The score will be 0.
- Flag suspiciously low effort for complex features. An "AI-powered assistant" at 1 person-week should raise questions.
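Continuing the sketch above, the first two checks translate directly into code; the third is a judgment call, so it is only surfaced for review, not enforced:

```python
def validate(item: RiceInput) -> list[str]:
    """Return human-readable problems to resolve before scoring."""
    problems = []
    if item.effort <= 0:
        problems.append("effort must be > 0; ask what the work really involves")
    if item.reach == 0:
        problems.append("reach is 0; confirm this is intentional (score will be 0)")
    if item.impact not in IMPACT_SCALE:
        problems.append(f"impact {item.impact} is not on the scale")
    if item.confidence not in CONFIDENCE_SCALE:
        problems.append(f"confidence {item.confidence} is not on the scale")
    # "Suspiciously low effort" needs human judgment; flag rather than reject.
    if item.effort < 0.25:  # roughly one person-week
        problems.append("effort under ~1 person-week; double-check for hidden complexity")
    return problems
```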
6. Calculate RICE scores:
RICE = (Reach × Impact × Confidence) / Effort
Use Confidence as a decimal (80% → 0.8); leaving it as a whole-number percentage inflates every score 100-fold.
Present as a ranked table, rows sorted from highest to lowest RICE score (see the sketch after this step):
| Feature | Reach | Impact | Confidence | Effort | RICE Score | Rank |
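A worked example makes the arithmetic concrete: Reach 500 × Impact 0.5 × Confidence 0.8 / Effort 1.0 = 200. Continuing the sketch, scoring, ranking, and rendering the table might look like this:

```python
def rice_score(item: RiceInput) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort, Confidence as a decimal."""
    return (item.reach * item.impact * item.confidence) / item.effort

items = [
    dark_mode,  # 500 * 0.5 * 0.8 / 1.0 = 200.0
    RiceInput("Bulk export", reach=120, impact=2.0, confidence=0.5, effort=3.0),  # 40.0
]
ranked = sorted(items, key=rice_score, reverse=True)

print("| Feature | Reach | Impact | Confidence | Effort | RICE Score | Rank |")
print("|---|---|---|---|---|---|---|")
for rank, it in enumerate(ranked, start=1):
    print(f"| {it.name} | {it.reach:g} | {it.impact:g} | {it.confidence:.0%} "
          f"| {it.effort:g} | {rice_score(it):.1f} | {rank} |")
```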
7. Provide analysis:
- Highlight any surprising results
- Note where low confidence affects rankings
- Identify quick wins (high RICE, low effort)
- Flag high-effort items that might be worth splitting
Call out key assumptions that could change the ranking.
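These analysis flags can be seeded mechanically before applying judgment. A final sketch with hypothetical thresholds; tune them to the backlog at hand:

```python
from statistics import median

QUICK_WIN_EFFORT = 1.0        # person-months; hypothetical cutoff
SPLIT_CANDIDATE_EFFORT = 6.0  # person-months; hypothetical cutoff

mid = median(rice_score(it) for it in ranked)
for it in ranked:
    if rice_score(it) >= mid and it.effort <= QUICK_WIN_EFFORT:
        print(f"Quick win: {it.name}")
    if it.confidence <= 0.5:
        print(f"Low confidence; ranking may shift: {it.name}")
    if it.effort >= SPLIT_CANDIDATE_EFFORT:
        print(f"High effort; consider splitting: {it.name}")
```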