# Score and Prioritize Initiatives

To run this task you must have the following required information:

> List of initiatives to prioritize. Optional: preferred scoring framework (RICE or ICE), existing estimates for any factors.

If you don't have all of this information, exit here and respond asking for whatever is missing, along with instructions to run this task again with ALL required information.

---

You MUST use a todo list to complete these steps in order. Never move on to a step until you have completed the previous one. If you have multiple read steps in a row, read them all at once (in parallel).

Add all steps to your todo list now and begin executing.

## Steps

1. [Read Roadmap Guide]: Read the documentation in: `./skills/sauna/[skill_id]/references/product.roadmap.guide.md` (Reference prioritization frameworks section)

2. Clarify the prioritization approach:

**RICE** (more rigorous, recommended for larger teams):
- Reach: How many users/customers affected per quarter?
- Impact: How much improvement per user? (3=massive, 2=high, 1=medium, 0.5=low, 0.25=minimal)
- Confidence: How sure are you? (100%=high, 80%=medium, 50%=low)
- Effort: Person-months to complete

**ICE** (simpler, good for quick decisions):
- Impact: 1-10 scale
- Confidence: 1-10 scale
- Ease: 1-10 scale (inverse of effort)

Ask which framework they prefer, or recommend RICE for strategic planning and ICE for quick triage.
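To illustrate how the two frameworks differ, here is a minimal sketch scoring one hypothetical initiative both ways (all numbers are made up for illustration):

```python
# Hypothetical initiative scored under both frameworks.

# RICE inputs
reach = 4000        # users affected per quarter
impact = 2          # high (on the 3/2/1/0.5/0.25 scale)
confidence = 0.8    # medium (80%)
effort = 2          # person-months

rice = reach * impact * confidence / effort  # (R × I × C) / E
print(f"RICE score: {rice:.0f}")  # 3200

# ICE inputs (each on a 1-10 scale; Ease is the inverse of effort)
ice_impact = 7
ice_confidence = 8
ease = 6

ice = ice_impact * ice_confidence * ease  # I × C × E
print(f"ICE score: {ice}")  # 336
```

Note the scales are not comparable across frameworks: RICE scores depend on absolute reach, while ICE stays within a bounded range, which is part of why ICE suits quick triage.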


3. For each initiative, gather the scoring inputs:

Walk through each initiative and help estimate:
1. **Reach**: "How many users would this affect in a quarter?"
   - Use data if available (active users, segment size)
   - Estimate based on feature usage patterns
2. **Impact**: "If this works perfectly, how much would it improve the user's life?"
   - 3 = Massive (fundamentally changes experience)
   - 2 = High (significant improvement)
   - 1 = Medium (noticeable improvement)
   - 0.5 = Low (minor improvement)
   - 0.25 = Minimal (barely noticeable)
3. **Confidence**: "How confident are you in these estimates?"
   - 100% = We have data, we've done this before
   - 80% = Strong intuition, some validation
   - 50% = Gut feel, needs discovery
4. **Effort**: "How many person-months would this take?"
   - Include design, engineering, QA, launch

Record all inputs for transparency.


4. Calculate scores and present prioritized ranking:

**RICE Score** = (Reach × Impact × Confidence) / Effort

Present as a table:
| Rank | Initiative | Reach | Impact | Confidence | Effort | RICE Score |
|------|-----------|-------|--------|------------|--------|------------|

Sort by score descending.

**For ICE**: Score = Impact × Confidence × Ease

Add observations:
- Which initiatives have high scores but low confidence? (Discovery candidates)
- Which have low effort but decent impact? (Quick wins)
- Which have high scores but high dependencies? (Sequence carefully)
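If the scoring is done programmatically, the calculation and ranking above can be sketched as follows (the initiative names and estimates are hypothetical):

```python
# Sketch of RICE scoring and ranking; initiative data is hypothetical.
initiatives = [
    # (name, reach per quarter, impact, confidence, effort in person-months)
    ("Onboarding revamp", 8000, 2, 0.8, 4),
    ("Dark mode", 5000, 0.5, 1.0, 1),
    ("AI assistant", 12000, 3, 0.5, 9),
]

def rice_score(reach, impact, confidence, effort):
    # RICE = (Reach × Impact × Confidence) / Effort
    return reach * impact * confidence / effort

ranked = sorted(
    initiatives,
    key=lambda row: rice_score(*row[1:]),
    reverse=True,  # highest score first
)

print("| Rank | Initiative | Reach | Impact | Confidence | Effort | RICE Score |")
print("|------|-----------|-------|--------|------------|--------|------------|")
for rank, (name, reach, impact, conf, effort) in enumerate(ranked, start=1):
    score = rice_score(reach, impact, conf, effort)
    print(f"| {rank} | {name} | {reach} | {impact} | {conf:.0%} | {effort} | {score:.0f} |")
```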


5. Provide prioritization recommendations:

Based on the scores, suggest:
- **Do First** (top 2-3 by score, high confidence)
- **Validate First** (high potential but low confidence—need discovery)
- **Quick Wins** (low effort, positive impact—fill gaps)
- **Deprioritize** (low scores, not worth the effort now)
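The bucketing above can be sketched as a simple rule of thumb (the thresholds and initiative data are assumptions for illustration, not part of the RICE framework):

```python
# Hypothetical bucketing rules; all thresholds are illustrative only.
def bucket(score, confidence, effort, median_score):
    if confidence < 0.8 and score >= 0.5 * median_score:
        return "Validate First"   # high potential but low confidence
    if effort <= 1 and score > 0:
        return "Quick Wins"       # cheap and positive impact
    if score >= median_score and confidence >= 0.8:
        return "Do First"
    return "Deprioritize"

# name, RICE score, confidence, effort (person-months) — all made up
scored = [
    ("Onboarding revamp", 3200, 0.8, 4),
    ("Dark mode", 2500, 1.0, 1),
    ("AI assistant", 2000, 0.5, 9),
]
median_score = sorted(s for _, s, _, _ in scored)[len(scored) // 2]

for name, score, conf, effort in scored:
    print(f"{name}: {bucket(score, conf, effort, median_score)}")
```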

Remind user: RICE is a tool for discussion, not a formula for decisions.
Strategic alignment, dependencies, and team capacity also matter.

Offer to recalculate if they want to adjust any estimates.