Context Slice

Research Synthesis

Aggregating Parallel Findings

When multiple branches return findings simultaneously:

1. Identify Themes

Group related facts across branches (a grouping sketch follows this list):

  • Same entity mentioned in different contexts
  • Cause-effect relationships spanning branches
  • Timeline connections
  • Contradictions requiring resolution
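A minimal sketch of this grouping step, assuming each branch returns findings as dicts with `entity` and `fact` keys (hypothetical field names; adapt to the real finding schema):

```python
from collections import defaultdict

def group_by_entity(branch_findings):
    """Group facts that mention the same entity across branches.

    branch_findings: dict mapping branch name -> list of
    {"entity": str, "fact": str} dicts (assumed schema).
    """
    themes = defaultdict(list)
    for branch, findings in branch_findings.items():
        for finding in findings:
            # Normalize so "D-Wave" and "d-wave" land in the same theme.
            key = finding["entity"].strip().lower()
            themes[key].append({"branch": branch, "fact": finding["fact"]})
    # Entities mentioned by more than one branch are cross-branch themes.
    return {k: v for k, v in themes.items()
            if len({f["branch"] for f in v}) > 1}
```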

2. Build Narrative

Connect findings into a coherent story:

  • Lead with most important/surprising findings
  • Use timeline to structure when appropriate
  • Highlight connections between branches
  • Note where gaps remain

3. Preserve Provenance

Every synthesized claim must trace back to its sources (a formatting sketch follows this list):

  • Inline citations: "According to [Source], [claim]"
  • Footnote style for complex claims with multiple sources
  • Flag whether a claim is an inference or a direct quote
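Where a rendering step is useful, a claim record can carry its provenance explicitly. A sketch, with an assumed claim shape:

```python
def render_claim(claim):
    """Format a synthesized claim with inline provenance.

    claim: {"text": str, "sources": [str], "is_inference": bool}
    (assumed schema; adapt to the real claim records).
    """
    line = f'According to {claim["sources"][0]}, {claim["text"]}'
    if len(claim["sources"]) > 1:
        # Footnote style when a claim rests on multiple sources.
        line += " [also: " + "; ".join(claim["sources"][1:]) + "]"
    if claim["is_inference"]:
        line += " (inference, not a direct quote)"
    return line
```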

Conflict Resolution

When sources disagree:

Assessment Framework

| Factor | Higher Trust | Lower Trust |
|---|---|---|
| Recency | More recent | Older |
| Authority | Primary source, expert | Secondary, generalist |
| Incentive | Neutral party | Vested interest |
| Specificity | Precise claims | Vague assertions |
| Corroboration | Multiple independent sources | Single source |
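One way to operationalize the framework is a rough additive score. A sketch; the field names, weights, and thresholds below are illustrative assumptions, not calibrated values:

```python
def trust_score(source):
    """Score a source on the five assessment factors (higher = more trust).

    source: {"age_days": int, "is_primary": bool, "has_vested_interest": bool,
             "is_specific": bool, "independent_corroborations": int}
    (assumed schema; weights are illustrative).
    """
    score = 0
    score += 2 if source["age_days"] < 365 else 0            # Recency
    score += 2 if source["is_primary"] else 0                # Authority
    score += 1 if not source["has_vested_interest"] else 0   # Incentive
    score += 1 if source["is_specific"] else 0               # Specificity
    score += min(source["independent_corroborations"], 3)    # Corroboration
    return score
```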

Resolution Strategies

When one source is clearly more reliable:

  • Lead with reliable source's claim
  • Note existence of contrary view
  • Explain why you weighted the sources as you did

When genuinely unclear:

  • Present both perspectives
  • Mark as "contested" or "unclear"
  • Suggest it as an area for deeper research

When the claims differ but are compatible:

  • Synthesize into fuller picture
  • "Source A says X, while Source B adds Y"

Confidence Scoring

Claim-Level Confidence

High Confidence (✓✓✓)

  • Multiple independent authoritative sources agree
  • Primary source data available
  • Recent and unlikely to have changed
  • Example: "Apple announced the iPhone in 2007"

Medium Confidence (✓✓)

  • Single authoritative source
  • Older but stable information
  • Expert consensus without primary data
  • Example: "Most analysts expect market growth of 15-20%"

Low Confidence (✓)

  • Limited sourcing
  • Conflicting information
  • Rapidly changing domain
  • Inference from incomplete data
  • Example: "Company X appears to be pivoting strategy"

Section-Level Confidence

Aggregate claim confidence to section level:

  • High: >80% of claims are high confidence
  • Medium: Mix of confidence levels
  • Low: Many uncertain claims, significant gaps
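Both levels can be scored mechanically once claims record their evidence. A sketch, assuming each claim carries an independent-source count, a conflict flag, and a domain-volatility flag (illustrative fields):

```python
def claim_confidence(claim):
    """Map a claim's evidence to high/medium/low (rules mirror the text)."""
    if claim["conflicting"] or claim["n_independent_sources"] == 0:
        return "low"
    if claim["n_independent_sources"] >= 2 and not claim["volatile_domain"]:
        return "high"
    return "medium"

def section_confidence(claims):
    """Aggregate claim confidence: >80% high -> high section confidence."""
    levels = [claim_confidence(c) for c in claims]
    if not levels:
        return "low"  # no claims means significant gaps
    if levels.count("high") / len(levels) > 0.8:
        return "high"
    if levels.count("low") / len(levels) > 0.5:  # illustrative cutoff
        return "low"
    return "medium"
```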

Final Report Structure

Executive Summary

  • 3-5 sentences max
  • Most important findings only
  • No citations (those belong in the body)
  • Answer: "What do I need to know?"

Key Findings

  • Organized by theme, not by search order
  • Each finding supported by sources
  • Confidence indicated where not high
  • Balance depth with readability

Timeline & Context

  • Chronological structure
  • Only include if it adds value
  • Connect past to present to future

Key Players

  • Brief, scannable format
  • Focus on relevance to topic
  • Note relationships between players

Open Questions

  • Honest about what's unknown
  • Prioritize by importance
  • Suggest how to resolve if possible

Confidence Assessment

  • Transparent about limitations
  • Group claims by confidence level
  • Helps reader calibrate trust

Sources

  • Organized for findability
  • Note which sections used which sources
  • Include access dates for web sources
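A minimal assembler for this skeleton, keeping the section order above and skipping optional sections (e.g. Timeline & Context) when they add no value:

```python
REPORT_SECTIONS = [
    "Executive Summary", "Key Findings", "Timeline & Context",
    "Key Players", "Open Questions", "Confidence Assessment", "Sources",
]

def report_skeleton(topic, bodies):
    """Assemble the final report from per-section Markdown bodies.

    bodies: dict mapping section name -> text; missing sections are skipped.
    """
    parts = [f"# {topic}"]
    for section in REPORT_SECTIONS:
        if section in bodies:
            parts.append(f"## {section}\n\n{bodies[section]}")
    return "\n\n".join(parts)
```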

Research Map Maintenance

Keep the map updated throughout:

JSON State Structure

The research map state is stored in the Research Map UI as JSON:

```json
{
  "topic": "Quantum Annealing",
  "depth": "Moderate",
  "round": 2,
  "explored": [
    {
      "branch": "Fundamentals",
      "summary": "QA uses quantum tunneling to find global minima in optimization problems",
      "sources": ["https://arxiv.org/...", "https://d-wave.com/..."],
      "confidence": "high"
    },
    {
      "branch": "Players",
      "summary": "D-Wave dominates commercial QA; Fujitsu and others offer digital alternatives",
      "sources": ["https://d-wave.com/...", "https://fujitsu.com/..."],
      "confidence": "high"
    }
  ],
  "available": [
    {"branch": "Timeline", "preview": "History of QA development and key milestones"},
    {"branch": "Economics", "preview": "Market size, pricing models, ROI cases"}
  ],
  "suggested_deep_dives": [
    "What concrete benchmarks show QA outperforming classical?",
    "How does embedding overhead affect practical problem sizes?"
  ],
  "history": [
    {"round": 1, "branches": ["Fundamentals", "Players"], "timestamp": "2026-01-13T10:00:00Z"},
    {"round": 2, "branches": ["Applications"], "timestamp": "2026-01-13T10:15:00Z"}
  ]
}
```
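The same structure expressed as Python types, useful for validating the map before writing it back (the type names are ours, not part of the state format):

```python
from typing import List, TypedDict

class ExploredBranch(TypedDict):
    branch: str
    summary: str
    sources: List[str]
    confidence: str  # "high" | "medium" | "low"

class AvailableBranch(TypedDict):
    branch: str
    preview: str

class RoundEntry(TypedDict):
    round: int
    branches: List[str]
    timestamp: str  # ISO 8601

class ResearchMap(TypedDict):
    topic: str
    depth: str
    round: int
    explored: List[ExploredBranch]
    available: List[AvailableBranch]
    suggested_deep_dives: List[str]
    history: List[RoundEntry]
```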

After Each Round

  • Move explored branches to "Explored" section
  • Add one-line summary of key finding
  • Update source count
  • Revise "Available to Explore" based on new knowledge
  • Add new "Suggested Deep Dives" from findings
  • Increment the round counter
  • Add an entry to the history array (see the sketch below)
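The whole checklist as a single state transition, sketched against the `ResearchMap` shape above. Revising "Available to Explore" previews and deep dives takes judgment, so those are left to the caller:

```python
from datetime import datetime, timezone

def complete_round(state, finished):
    """Apply the end-of-round checklist to a ResearchMap state.

    finished: list of ExploredBranch dicts for the branches just explored.
    Revising "available" previews and deep dives is left to the caller.
    """
    names = {b["branch"] for b in finished}
    # Move explored branches out of "available" and into "explored".
    state["available"] = [a for a in state["available"]
                          if a["branch"] not in names]
    state["explored"].extend(finished)
    state["round"] += 1
    state["history"].append({
        "round": state["round"],
        "branches": sorted(names),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return state
```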

Visual Indicators (Markdown Display)

  • [✓] Explored (with summary)
  • [ ] Available (with preview of what we'd learn)
  • [?] Uncertain (conflicting info, needs more research)
  • [!] Important (high-value unexplored area)

Suggested Deep Dives

Generate from:

  • Questions raised by findings
  • Connections between branches
  • Contradictions needing resolution
  • User's likely follow-up interests
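Most deep-dive questions come from actually reading the findings, but two of these triggers can be seeded mechanically. A heuristic sketch, reusing `group_by_entity` and the `ResearchMap` shape from above:

```python
def seed_deep_dives(state, themes):
    """Seed candidate deep dives from low-confidence branches and themes.

    themes: output of group_by_entity (entity -> cross-branch facts).
    """
    candidates = []
    for b in state["explored"]:
        if b["confidence"] == "low":
            # Contradictions and thin sourcing need resolution.
            candidates.append(f"Resolve the uncertainty in '{b['branch']}'")
    for entity, facts in themes.items():
        branches = sorted({f["branch"] for f in facts})
        if len(branches) > 1:
            # Connections between branches often hide follow-up questions.
            candidates.append(
                f"How does {entity} connect {' and '.join(branches)}?")
    return candidates
```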