Interactive Tool
Pattern Selector
Answer 5 questions about your task. Get the recommended orchestration pattern and a copy-paste dispatch plan.
1 Pattern Selector
Select one answer per question. The recommendation updates live.
1 How do your tasks relate to each other?
2 How many agents do you need?
3 How critical is the outcome?
4 What does failure look like?
5 How do agents coordinate?
2 All 12 Patterns — Reference
Detailed reference for each pattern, grouped by category.
01
Pipeline
2–6 stages
Linear chain: A → B → C
When to use
- Output of one stage is required input to the next
- Work has a fixed, predictable sequence
- Each stage is independently testable
Failure mode
Bottleneck at slowest stage; one bad stage blocks all downstream work
Example
"Build → Test → Deploy" — each gate must pass before next starts
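The gated linear chain above can be sketched in a few lines of Python. This is an illustrative sketch, not real dispatch tooling — the stage names and gate predicates are placeholders:

```python
def run_pipeline(stages, payload):
    """Run (name, fn, gate) stages in order. A failed gate halts
    everything downstream, which is exactly the failure mode above."""
    for name, fn, gate in stages:
        payload = fn(payload)
        if not gate(payload):
            raise RuntimeError(f"gate failed after stage {name!r}")
    return payload

# Build → Test → Deploy: each gate must pass before the next stage runs
stages = [
    ("build",  lambda p: {**p, "built": True},  lambda p: p.get("built")),
    ("test",   lambda p: {**p, "tested": True}, lambda p: p.get("tested")),
    ("deploy", lambda p: {**p, "live": True},   lambda p: p.get("live")),
]
result = run_pipeline(stages, {})
```

The `raise` on a failed gate is the structural point: nothing after a bad stage runs, so verification effort concentrates on the gates.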
02
Map-Reduce
N workers + 1 aggregator
Fan-out to N workers, fan-in to aggregator
When to use
- Same operation applied to many inputs
- Results are independently reducible
- Input set is enumerable up front
Failure mode
Aggregation logic too complex; partial failures silently drop data
Example
Lint 20 files in parallel, collect all errors into one report
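A minimal fan-out/fan-in sketch, written to avoid the failure mode above: partial failures are collected and returned alongside the reduced result, never silently dropped. The file names and lint stand-in are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def map_reduce(inputs, worker, reduce_fn):
    """Fan `worker` out over `inputs`, fan in with `reduce_fn`.
    Failures are returned explicitly so a partial failure cannot
    silently drop data from the aggregate."""
    results, failures = [], []
    with ThreadPoolExecutor() as pool:
        futures = [(item, pool.submit(worker, item)) for item in inputs]
        for item, fut in futures:
            try:
                results.append(fut.result())
            except Exception as exc:
                failures.append((item, exc))
    return reduce_fn(results), failures

# Stand-in for "lint N files in parallel, collect into one report"
report, failed = map_reduce(
    ["a.go", "b.go", "c.go"],
    lambda f: f"{f}: ok",
    lambda rs: "\n".join(sorted(rs)),
)
```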
03
Hierarchical
1 manager + 3–7 specialists
Tree: manager delegates to domain specialists
When to use
- Task decomposes into distinct, non-overlapping domains
- Specialists need different context or tools
- Work can be integrated by a coordinator
Failure mode
Manager becomes bottleneck when it does too much work itself rather than delegating
Example
Architecture agent delegates frontend, backend, and infra to three specialists
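The shape of the pattern, reduced to a sketch: the manager only decomposes, delegates, and integrates — the bottleneck failure above comes from violating exactly that rule. Domains and the integration step here are illustrative:

```python
def manager(task, specialists, integrate):
    """Delegate `task` to each domain specialist, then integrate.
    The manager does no domain work itself."""
    outputs = {domain: run(task) for domain, run in specialists.items()}
    return integrate(outputs)

plan = manager(
    "ship feature X",
    {
        "frontend": lambda t: f"UI for {t}",
        "backend":  lambda t: f"API for {t}",
        "infra":    lambda t: f"deploy config for {t}",
    },
    integrate=lambda outs: sorted(outs),  # stand-in for real wiring
)
```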
04
Swarm
3–20+ peers
Peer-to-peer, self-organizing, no central coordinator
When to use
- Agents can work entirely independently on non-overlapping files
- File manifests provide coordination instead of runtime messages
- Speed is more important than unified output
Failure mode
No runtime coordination means potential overlap — explicit file manifests are load-bearing
Example
5 agents each update one repo's README simultaneously with zero merge conflicts
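Because the manifests are the only coordination mechanism, a pre-dispatch overlap check is worth automating. A sketch, with hypothetical agent names and paths:

```python
def find_overlaps(manifests):
    """Return (path, owner_a, owner_b) for every file claimed by two
    agents. An empty result means the swarm is safe to dispatch."""
    owner, overlaps = {}, []
    for agent, files in manifests.items():
        for path in files:
            if path in owner:
                overlaps.append((path, owner[path], agent))
            else:
                owner[path] = agent
    return overlaps

manifests = {
    "agent-1": ["repo-a/README.md"],
    "agent-2": ["repo-b/README.md"],
    "agent-3": ["repo-c/README.md"],
}
overlaps = find_overlaps(manifests)  # [] → zero-overlap invariant holds
```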
05
Generator-Critic
2 agents, N iterations
Two-agent revision loop: generate → critique → revise
When to use
- Quality improves measurably with each revision cycle
- A second perspective catches issues the generator misses
- Output has a clear quality bar to converge toward
Failure mode
Infinite loop without a max-iteration guard; critic too lenient to add value
Example
Write code → review code → revise → review → ship after 3 clean cycles
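The revision loop with the hard iteration cap the failure mode above calls for. An empty issue list from the critic means PASS; the generator and critic here are trivial stand-ins:

```python
def generator_critic(generate, critique, max_iters=3):
    """Generate → critique → revise, with a required max-iteration
    guard. Raises instead of looping forever if the critic never
    passes the draft."""
    draft = generate(None)
    for iteration in range(1, max_iters + 1):
        issues = critique(draft)
        if not issues:          # empty issue list = PASS
            return draft, iteration
        draft = generate(issues)
    raise RuntimeError("max iterations reached without a clean review")

versions = iter(["v1", "v2", "v3"])
draft, rounds = generator_critic(
    generate=lambda feedback: next(versions),
    critique=lambda d: [] if d == "v2" else ["too vague"],
)
```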
06
Adversarial
2 teams + 1 judge
Red team + blue team + judge arbitrates
When to use
- Security review or robustness testing
- You need to stress-test a design by finding holes
- Passive review is insufficient — active attack is required
Failure mode
Adversarial agent too weak to find real issues; red team pulls punches
Example
Security audit: attacker agent finds exploits, defender agent patches, judge scores
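The attack/patch/score loop can be sketched as below. The "holes" are labeled strings standing in for real findings; note how a red team that pulls punches (the failure mode above) shows up as a score that never moves:

```python
def adversarial_rounds(attack, defend, judge, artifact, rounds=2):
    """Red team probes the artifact, blue team patches what was
    found, judge scores what remains — repeated for N rounds."""
    scores = []
    for _ in range(rounds):
        exploits = attack(artifact)
        artifact = defend(artifact, exploits)
        scores.append(judge(artifact))
    return artifact, scores

# Toy audit: the attacker only knows how to find SQL-shaped holes,
# so "xss" survives every round — a weak red team made visible.
patched, scores = adversarial_rounds(
    attack=lambda holes: {h for h in holes if h.startswith("sql")},
    defend=lambda holes, found: holes - found,
    judge=lambda holes: len(holes),
    artifact={"sql-injection", "xss"},
)
```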
07
Jury
3–7 independent evaluators
Independent evaluators, majority vote or synthesis
When to use
- High-stakes decisions where single-agent bias is unacceptable
- Evaluators should have genuinely different perspectives
- You need a defensible, consensus-based outcome
Failure mode
Identical agents produce identical opinions — diversity is load-bearing; homogeneous jury is theater
Example
3 agents independently review an architecture proposal; synthesis highlights disagreements
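A sketch of independent verdicts plus synthesis. The jurors run with no shared state, and the dissent list — not the majority — is where the real findings live. Juror lenses here are illustrative:

```python
from collections import Counter

def run_jury(artifact, jurors):
    """Collect one verdict per juror, tally the majority, and
    surface every dissenting juror by name."""
    verdicts = {name: juror(artifact) for name, juror in jurors.items()}
    tally = Counter(verdicts.values())
    majority = tally.most_common(1)[0][0]
    dissent = [name for name, v in verdicts.items() if v != majority]
    return majority, dissent

verdict, dissent = run_jury("design.md", {
    "security":        lambda a: "PASS",
    "performance":     lambda a: "PASS",
    "maintainability": lambda a: "FAIL",
})
```

Giving each juror a different lens (as the dispatch template below the fold insists) is what makes the dissent list informative rather than empty.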
08
Blackboard
2–N agents + shared store
Shared store; agents act on new entries as they arrive
When to use
- Work is event-driven; new items trigger processing
- Agents are specialists that each handle a subset of event types
- No fixed order — agents self-schedule based on store contents
Failure mode
Race conditions on shared state; agents process same entry twice without locking
Example
CAS store where agents process new skills as they arrive, each doing a different check
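A minimal shared-store sketch that addresses the race condition named above: entries are claimed atomically under a lock, so two agents can never process the same entry. The entry shape is hypothetical:

```python
import threading

class Blackboard:
    """Shared store where agents claim matching entries atomically."""
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []
        self._claimed = set()

    def post(self, entry):
        with self._lock:
            self._entries.append(entry)

    def claim(self, predicate):
        """Return the first unclaimed entry matching `predicate`,
        marking it claimed in the same critical section."""
        with self._lock:
            for i, entry in enumerate(self._entries):
                if i not in self._claimed and predicate(entry):
                    self._claimed.add(i)
                    return entry
        return None

bb = Blackboard()
bb.post({"type": "skill", "name": "deploy"})
first = bb.claim(lambda e: e["type"] == "skill")
second = bb.claim(lambda e: e["type"] == "skill")  # already claimed → None
```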
09
Chain-of-Responsibility
3–5 handlers
Each agent handles or forwards to the next
When to use
- Requests need escalation through levels of capability
- Simpler handlers resolve most cases cheaply
- Complex cases should reach more expensive agents
Failure mode
No catch-all terminal agent; requests fall off the end of the chain silently
Example
Simple fix → complex fix → architectural review → human escalation
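The escalation chain as a sketch, with the terminal catch-all the failure mode above demands — a handler returns `None` to forward, and nothing can fall off the end silently. The routing conditions are toy stand-ins:

```python
def run_chain(request, handlers):
    """Try each handler in order; the first non-None result wins.
    The final return is the mandatory catch-all terminal."""
    for handler in handlers:
        result = handler(request)
        if result is not None:
            return result
    return f"ESCALATE to human: {request}"

outcome = run_chain(
    "flaky integration test",
    [
        lambda r: "simple fix" if "typo" in r else None,
        lambda r: "complex fix" if "test" in r else None,
        lambda r: "architecture review" if "design" in r else None,
    ],
)
```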
10
Circuit-Breaker
1 wrapper + downstream
Wrapper monitors downstream; trips open on repeated failures
When to use
- Downstream service is unreliable or rate-limited
- Cascading failures must be prevented
- Graceful degradation is acceptable
Failure mode
Trip threshold too sensitive — breaker opens on transient errors and never recovers
Example
API client that stops calling after 3 consecutive failures and returns cached results
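A sketch of the wrapper, matching the example above: after `threshold` consecutive failures the breaker trips open and serves the fallback without calling downstream. The downstream stub is hypothetical:

```python
class CircuitBreaker:
    """Wrap a flaky downstream call; trip open after `threshold`
    consecutive failures and serve `fallback` instead."""
    def __init__(self, call, threshold=3, fallback=None):
        self.call, self.threshold, self.fallback = call, threshold, fallback
        self.failures, self.open = 0, False

    def __call__(self, *args):
        if self.open:
            return self.fallback      # tripped: don't touch downstream
        try:
            result = self.call(*args)
            self.failures = 0         # any success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True
            return self.fallback

def always_down(_):
    raise ConnectionError("downstream unavailable")

breaker = CircuitBreaker(always_down, threshold=3, fallback="cached")
results = [breaker("req") for _ in range(5)]
```

This sketch never recovers once open — a production breaker would add a half-open state that retries after a cooldown, which is the counterpart to the "opens and never recovers" failure noted above.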
11
Event-Driven
N subscribers
Reactive subscribers to event streams; pub/sub topology
When to use
- Real-time processing of asynchronous events
- Multiple consumers need the same event stream
- Processing logic is decoupled from event production
Failure mode
Event flood overwhelms slow subscribers; bounded fanout required to prevent silent drops
Example
Log watcher dispatches agents on error patterns as they stream in from multiple files
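A pub/sub sketch with the bounded fanout the failure mode above requires: each subscriber gets a bounded queue, and drops are counted rather than hidden. Topic and event names are illustrative:

```python
from collections import deque

class EventBus:
    """Pub/sub with a bounded per-subscriber queue. Appending to a
    full deque evicts the oldest event; we count that as a drop so
    slow subscribers fail loudly, not silently."""
    def __init__(self, maxlen=100):
        self.subscribers = {}
        self.maxlen = maxlen
        self.dropped = 0

    def subscribe(self, topic, name):
        queue = deque(maxlen=self.maxlen)
        self.subscribers.setdefault(topic, {})[name] = queue
        return queue

    def publish(self, topic, event):
        for queue in self.subscribers.get(topic, {}).values():
            if len(queue) == self.maxlen:
                self.dropped += 1
            queue.append(event)

bus = EventBus(maxlen=2)
q = bus.subscribe("errors", "log-watcher")
for i in range(3):
    bus.publish("errors", f"error-{i}")   # third publish evicts error-0
```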
12
Meta-Pattern Selector
1 decision agent
Decision agent selects the right pattern for the task
When to use
- Pattern choice itself is the hard problem
- Task structure is ambiguous or novel
- Multiple patterns could apply and trade-offs are non-obvious
Failure mode
Selector has stale heuristics; over-fits to familiar patterns regardless of task shape
Example
This page — a structured questionnaire that routes you to the right pattern
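The questionnaire-to-pattern routing can itself be sketched as a rule table. The rules below are a toy illustration in the spirit of the selector, not this page's actual logic:

```python
def select_pattern(answers):
    """Route questionnaire answers to a pattern name.
    Keys and rules here are hypothetical examples."""
    if answers.get("relation") == "sequential":
        return "Pipeline"
    if answers.get("relation") == "independent":
        return "Swarm" if answers.get("count", 0) > 2 else "Generator-Critic"
    if answers.get("stakes") == "high":
        return "Jury"
    return "Hierarchical"

pattern = select_pattern({"relation": "sequential"})
```

The failure mode above applies to any such table: the rules encode yesterday's heuristics, so they need revisiting as task shapes change.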
3 Dispatch Plan Templates
Copy-paste plans for the 5 most-used patterns. Replace [placeholders] before dispatching.
The verification protocol is non-negotiable. Agent "done" is not done until independently verified with git status, grep, and a full build/test run.
Pipeline
## Dispatch Plan: Pipeline

Pattern: Linear Stage Chain
Stage count: [N stages]
Model: sonnet (each stage) — opus only if design decisions needed

### Stage Definitions

Stage 1: [name]
  Input: [what it receives]
  Output: [what it produces]
  Gate: [pass condition before Stage 2 begins]
  Task: [what to do]

Stage 2: [name]
  Input: [output of Stage 1]
  Output: [what it produces]
  Gate: [pass condition before Stage 3 begins]
  Task: [what to do]

# ... repeat for each stage

### Failure Handling

- If any stage gate fails: [stop / retry / escalate]
- Max retries per stage: [N]
- On terminal failure: [rollback strategy]

### Verification Protocol

After all stages complete:
1. git status — verify expected files changed
2. grep for [expected output symbols/strings]
3. go build ./... (or equivalent)
4. go test ./...
5. Read final output file — does it match Stage N spec?

### Integration

Main thread wires stage outputs, runs final build, commits.
Swarm
## Dispatch Plan: Swarm (Zero-Overlap)

Pattern: Zero-Overlap Agent Swarm
Agent count: [N]
Model: sonnet (execution tasks)

# CRITICAL: file manifests are the coordination mechanism.
# Each agent MUST have a non-overlapping file list.
# No agent may write a file in another agent's manifest.

### Agent Manifests

Agent 1: [descriptive name]
  Files: [explicit list of files this agent owns]
  Task: [what to do — scoped entirely to above files]

Agent 2: [descriptive name]
  Files: [explicit list — no overlap with Agent 1]
  Task: [what to do — scoped entirely to above files]

Agent 3: [descriptive name]
  Files: [explicit list — no overlap with Agents 1–2]
  Task: [what to do — scoped entirely to above files]

# ... add agents as needed

### Dispatch Note

- Do NOT use isolation: "worktree" — writes are silently discarded on cleanup
- Dispatch all agents in parallel (same message if using Claude Code)
- Verify EACH agent's output independently — do not trust self-reports

### Verification Protocol

After all agents report complete:
1. git status — verify all expected files changed
2. grep for [expected symbols or patterns] in each agent's files
3. go build ./... (or equivalent)
4. go test ./...
5. Read each modified file — does it make sense?
6. Confirm zero merge conflicts

### Integration

Main thread wires any cross-agent imports, runs final build, commits.
Hierarchical
## Dispatch Plan: Hierarchical

Pattern: Manager + Domain Specialists
Manager model: opus (architecture / synthesis decisions)
Specialist model: sonnet (implementation)

### Interface Contract

# Define this BEFORE dispatching any specialists.
# Each specialist must know the exact interface it must satisfy.

[Domain A interface / API surface]
[Domain B interface / API surface]
[Domain C interface / API surface]

### Manager Responsibilities

- Decompose task into domain work packages
- Define interface contracts for each specialist
- Integrate specialist outputs
- Run final verification
- Does NOT implement — delegates everything

### Specialist Definitions

Specialist A: [domain name]
  Domain: [what this agent owns]
  Interface: [what it must produce for integration]
  Task: [implementation instructions]
  Model: sonnet

Specialist B: [domain name]
  Domain: [what this agent owns]
  Interface: [what it must produce for integration]
  Task: [implementation instructions]
  Model: sonnet

# ... add specialists per domain

### Compilation Gate

# All specialists must pass this gate before integration begins
- Each specialist runs: go build ./... before reporting complete
- Specialist output is REJECTED if build fails

### Verification Protocol

After all specialists report complete:
1. git status — verify all expected files changed
2. grep for interface symbols across all specialist outputs
3. go build ./... from repo root
4. go test ./...
5. Manager reads integration seams — do interfaces align?

### Integration

Manager thread wires domain outputs, resolves interface mismatches, commits.
Generator-Critic
## Dispatch Plan: Generator-Critic

Pattern: Two-Agent Revision Loop
Generator model: sonnet
Critic model: opus (quality gate needs stronger judgment)
Max iterations: [3] (REQUIRED — prevents infinite loop)
Ship condition: [no critical issues in critic's final review]

### Generator Instructions

Task: [what to produce]
Output: [file(s) to write]
Constraints: [must satisfy these requirements]
On revision: apply ALL critic feedback before next submission

### Critic Instructions

Review for:
- [quality criterion 1]
- [quality criterion 2]
- [quality criterion 3]

Output format: structured list of issues, severity (critical/minor), suggested fix
PASS condition: [zero critical issues]
FAIL action: return to generator with full issue list

### Iteration Log

# Track this to enforce max-iteration guard
Iteration 1: Generator produces v1 → Critic reviews
Iteration 2: Generator produces v2 (fixes applied) → Critic reviews
Iteration 3: Generator produces v3 → Critic reviews → SHIP or ESCALATE

### Escalation

If max iterations reached without PASS:
- [escalate to human / relax criteria / log for manual review]

### Verification Protocol

After critic PASS:
1. git status — verify output file written
2. Run output through [automated validator if available]
3. Main thread does final read — does it meet the original goal?
4. Commit with iteration count in commit message
Jury
## Dispatch Plan: Jury

Pattern: Independent Evaluators + Synthesis
Juror count: [3 or 5] (odd number for clear majority)
Juror model: sonnet
Synthesizer model: opus (finds meaningful disagreements)

# CRITICAL: jurors must NOT see each other's reviews before submitting.
# Dispatch all jurors in parallel. Synthesize only after all complete.

### Subject Under Review

[what is being evaluated — be specific]
[link or paste the artifact to review]

### Juror Instructions (identical for all jurors)

Evaluate independently. Do not assume other jurors exist.
Score each criterion 1–5. Give a PASS/FAIL verdict.
Explain your reasoning in 2–4 sentences per criterion.

Criteria:
- [criterion 1, e.g. correctness]
- [criterion 2, e.g. completeness]
- [criterion 3, e.g. maintainability]

### Juror Differentiation

# Give each juror a different lens to ensure genuine diversity
Juror 1 perspective: [e.g. security-first]
Juror 2 perspective: [e.g. performance-first]
Juror 3 perspective: [e.g. maintainability-first]

# Without differentiation, identical agents = identical opinions = theater

### Synthesis Instructions (run AFTER all jurors complete)

- Tally scores per criterion
- Identify points of agreement (≥2/3 consensus)
- Surface all disagreements — these are the real findings
- Produce PASS/FAIL overall with confidence level
- List top 3 required changes if FAIL

### Verification Protocol

After synthesis:
1. Read all juror outputs — check for lazy reviews (very short, no reasoning)
2. Confirm synthesizer found genuine disagreements, not just summary
3. Apply required changes from synthesis output
4. Re-run jury if major changes were made