Claude Code · Copy-Paste Starter Kit

Every template on this page is ready to copy directly into your terminal or editor.

1 CLAUDE.md Starter Template

CLAUDE.md is the persistent instruction file Claude Code reads at the start of every session. It is the highest-leverage file in any repo: it turns knowledge that would otherwise drift between sessions into standing instructions applied automatically. Every section below is required.

Placement: commit CLAUDE.md to the root of every repo you use with Claude Code. In a monorepo, also place a CLAUDE.md in any subdirectory that has its own build system or language.
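One quick way to audit coverage in a monorepo (the subdirectory names below are illustrative, not prescribed):

```shell
# Expected shape for a monorepo: one root CLAUDE.md, plus one in each
# subdirectory that owns its own build system, e.g.:
#   ./CLAUDE.md            (repo-wide conventions)
#   ./backend/CLAUDE.md    (Go service with its own go.mod)
#   ./frontend/CLAUDE.md   (Svelte app with its own package.json)
find . -maxdepth 2 -name CLAUDE.md | sort
```

Any subdirectory that has a go.mod or package.json but does not appear in that listing is a gap.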

Minimal Template

Fill in every [PLACEHOLDER]. Delete comments before committing — they are for orientation only.

CLAUDE.md minimal template · ~30 lines
# [PROJECT NAME]

## Project Overview
# What this repo is, who runs it, what problem it solves.
# Include: primary language(s), stack, and any critical context
# a new session agent needs to not make wrong assumptions.
Primary language: [Go / TypeScript / Python / Rust]
Stack: [e.g. Svelte 5 frontend, Go backend, PostgreSQL]
Owner: [team or person]
Purpose: [one sentence]

## Build & Test Commands
# These are the exact commands Claude runs after every code change.
# Never use vague commands like "run tests" — list the exact strings.
Build: [e.g. go build ./...]
Test:  [e.g. go test ./... or npm run test]
Lint:  [e.g. golangci-lint run or npm run lint]
Dev:   [e.g. wrangler dev or npm run dev]

## Conventions
# List 3–5 non-obvious conventions that differ from language defaults.
# Each entry should be one line: what to DO, not what NOT to do.
- [e.g. Event handlers: onclick={handler} — NOT on:click (Svelte 4 syntax, removed in Svelte 5)]
- [e.g. Always return HTTP 501 for unimplemented interface methods]
- [e.g. Use module-relative paths, not process.cwd(), for asset loading]

## Debugging Protocol
# These three rules prevent the most common time-wasting loops.
- Root-cause first: trace the bug to its origin line before writing any fix.
- Config-before-code: check .env, settings.json, wrangler.toml for stale URLs/keys BEFORE debugging code.
- Full suite: never report a fix complete without running the full test suite (not just the failing test).

## Architecture Patterns
# Add project-specific patterns here as you discover them.
# Examples: routing conventions, state management, API contract rules.
[Add patterns as you build — this section grows over time]

Filled Example — Go + Svelte 5 Project

This is what a mature CLAUDE.md looks like after a few sessions of use.

CLAUDE.md — filled example: AgenticGateway
# AgenticGateway

## Project Overview
Primary language: Go (backend), TypeScript/Svelte 5 (dashboard frontend)
Stack: Go HTTP server, D1 (SQLite), Cloudflare Workers, Svelte 5
Owner: DojoGenesis
Purpose: LLM request routing, skill execution, and RAG pipeline for the Dojo platform.
Default port: 7340 (not 8080 — the CLI connects at localhost:7340).

## Build & Test Commands
Build: go build ./...
Test:  go test ./...
Dev:   wrangler dev (run from repo root, not subdirectory)
Note:  Walk up from edited file to find go.mod root before running build.

## Conventions
- Svelte 5: onclick={handler} — NEVER on:click (compiler rejects on: directives)
- Return HTTP 501 for unimplemented interface methods — gaps visible, not silent
- SSE stores: batch with requestAnimationFrame — per-chunk store updates cause UI hangs
- Routing: grep all router.GET/POST calls before touching any handler (two handler paths exist on different routes)
- Context injection: thread per-request config via context.Context, not function args

## Debugging Protocol
- Root-cause first: identify the exact line where the problem originates, not where symptom appears.
- Config-before-code: check settings.json, .env, wrangler.toml for stale URLs before debugging code. 3+ sessions burned on this.
- Full suite: run go test ./... (not a single test file) before marking any fix complete.

## Architecture Patterns
- Gateway has TWO independent handler paths on different routes — audit all router.* calls before any routing fix.
- CLI pushes API keys to gateway on startup; gateway hot-registers providers on receipt.
- Explicit provider/model in a request always wins over content-based intent classifier routing.
- D1Syncer injection pending in Wave 2C — stub returns 501 until wired.
- Sub-agents start cold — they do not inherit memory or session context. Always pass full context in the prompt.

2 Agent Dispatch Prompt Template

This is the exact text you type to dispatch a parallel agent. The file manifest block is the most critical part — it prevents agents from overwriting each other's work. The quality bar section sets the standard without writing the implementation for the agent.

Never skip the file manifest. Without it, two parallel agents will write to the same file and one will silently overwrite the other. Every agent dispatch must name its owned files explicitly.

dispatch prompt — simple (single file) · beginner
You are implementing a single focused task. Read the context files listed below before writing anything.

FILE MANIFEST
You own ONLY these files:
  - [path/to/target-file.go]
Do NOT modify any other files. Do NOT create new files outside this list.

CONTEXT — read these first
  - [path/to/interface-or-spec.go]  — the interface you must satisfy
  - [path/to/existing-similar.go]   — existing implementation to match style

REQUIREMENTS
Implement [FunctionName] in [path/to/target-file.go].
The function must:
  1. [Requirement 1]
  2. [Requirement 2]
  3. [Requirement 3]

CONSTRAINTS
  - Do NOT change function signatures
  - Do NOT add new dependencies without approval
  - Match the naming and style of existing code in this repo

QUALITY BAR
The implementation is done when: it compiles with `go build ./...` and
passes `go test ./...` with no new failures.

OUTPUT
Write the implementation. Then run `go build ./...` and report the result.

dispatch prompt — medium (multi-file feature) · intermediate
You are implementing a complete feature. Read all context files before writing anything.
You are working in parallel with other agents — your file manifest is your exclusive territory.

FILE MANIFEST
You own ONLY these files (create them if they don't exist):
  - [server/handlers/handle_[feature].go]
  - [server/handlers/handle_[feature]_test.go]
  - [server/[feature].go]
Do NOT modify: main.go, router.go, or any file not listed above.

CONTEXT — read these first
  - [server/handlers/handle_existing.go]  — handler pattern to follow exactly
  - [server/types.go]                     — shared types
  - [server/database/db.go]               — database interface (use these methods only)
  - README or spec file: [path/to/spec.md]

REQUIREMENTS
Implement the [FeatureName] feature:
  1. Handler: POST /api/[feature] — accepts [RequestType], returns [ResponseType]
  2. Service layer: [feature].go — business logic, no HTTP concerns
  3. Tests: at least 3 table-driven test cases covering happy path, missing field, and not-found

CONSTRAINTS
  - The handler registers itself — do NOT modify the router file
  - Return HTTP 501 for any method you do not implement (makes gaps visible)
  - Use context.Context as first param on all service functions
  - No new packages — use only what is already imported in existing handlers

QUALITY BAR
Done when:
  1. `go build ./...` exits 0
  2. `go test ./server/handlers/...` exits 0
  3. The new handler is reachable and returns a non-500 response to a valid request

OUTPUT
Write all files. Then run `go build ./...` and `go test ./server/handlers/...`.
Report exact output of both commands. If either fails, fix before reporting done.

dispatch prompt — complex (cross-module with test requirements) · advanced
You are Track [A/B/C] of a parallel implementation. Tracks A, B, C run simultaneously.
Your file manifest is your exclusive territory — writes outside it will cause merge conflicts.
You do NOT have session context from the main thread. Everything you need is below.

FILE MANIFEST — Track [A/B/C] exclusive ownership
You own ONLY:
  - [module1/package/file1.go]
  - [module1/package/file1_test.go]
  - [module1/package/types.go]        — create if missing
Do NOT touch: [module2/], [module3/], main.go, any *_test.go outside your package.

INTERFACE CONTRACT — agreed with other tracks before dispatch
Your package must expose:
  type [TypeName] struct { ... }         // field list
  func New[TypeName]([params]) *[TypeName]
  func (t *[TypeName]) [Method1]() error
  func (t *[TypeName]) [Method2](ctx context.Context, [params]) ([ReturnType], error)
Track B will import this package. Your interface is frozen — do not change signatures.

CONTEXT — read these files before writing
  - [adjacent/similar.go]         — style reference (match exactly)
  - [module1/go.mod]              — module path (your import paths must match)
  - [spec/[feature]-spec.md]      — full feature specification

REQUIREMENTS
  1. Implement [TypeName] as specified in the interface contract above
  2. Write table-driven tests covering: happy path, nil input, context cancellation, error propagation
  3. All exported symbols must have godoc comments
  4. Use structured logging (slog) — no fmt.Println in production code

CONSTRAINTS
  - Build must pass from the go.mod root of [module1/] — run from there, not the file's directory
  - No new third-party dependencies — use only stdlib + what is already in go.mod
  - Return 501 sentinel errors for any interface method that cannot be fully implemented in this track
  - Do not import from Track B or Track C packages (they don't exist yet)

QUALITY BAR
Track [A/B/C] is complete when ALL of the following are true:
  1. `go build ./...` from [module1/] exits 0 with zero errors
  2. `go test ./[package]/...` exits 0 — all tests pass
  3. `go vet ./...` exits 0
  4. The interface contract methods are present and correctly typed (grep to verify)
  5. Every exported symbol has a godoc comment

VERIFICATION COMMANDS — run these and paste output
  cd [module1/] && go build ./... && go test ./... && go vet ./...

OUTPUT FORMAT
After writing all files:
  1. Run the verification commands above
  2. Paste the exact output (not a summary)
  3. State: TRACK [A/B/C] COMPLETE or TRACK [A/B/C] BLOCKED: [reason]
After dispatching 3+ agents: use TodoWrite to track per-agent status. Verify EACH agent's output independently — never trust agent self-reports. Run git status and grep for expected symbols after each agent completes.

3 Seed Frontmatter Template

A seed is a compressed unit of hard-won knowledge — a pattern extracted from a real failure or success and written so it can be applied without re-learning the lesson. Seeds live in your MEMORY.md index and are applied via /apply-seed in future sessions.

Template

seed-[slug].md · seed format
---
name: [Short noun phrase — the pattern's name]
description: [One sentence: what problem this solves]
type: [debugging | architecture | agent | workflow | config]
---

# [Same as name above]

[2–4 sentences describing the pattern in plain language. What situation triggers it?
What does applying it look like in practice? Write as if explaining to your future
self who has forgotten this session entirely.]

**Why:** [The root cause or underlying dynamic that makes this pattern necessary.
Not "because it was broken" — the structural reason it keeps happening.]

**How to apply:**
1. [First concrete action]
2. [Second concrete action]
3. [Third concrete action — keep to 3–5 steps max]

**Evidence:** [What happened in the session that generated this seed.
Commit SHA, file path, or error message if available. Specificity matters —
vague evidence makes the seed untrustworthy.]

**Diagnostic:** [A question or signal that tells you this seed applies to your current situation.
E.g.: "If you are about to edit a handler file without first grepping all router.* calls — apply this seed."]

Filled Example — from a real debugging session

seed-two-handler-trap.md · filled example
---
name: Two-Handler Trap
description: Audit ALL handler files before fixing routing bugs; a codebase can have two separate handler paths on different routes pointing to the same resource.
type: debugging
---

# Two-Handler Trap

When routing bugs appear, the instinct is to fix the one handler you can see.
But if the codebase has grown through multiple contributors or refactoring passes,
there may be a second handler registered on a different route for the same resource.
Patching one while the other is live produces inconsistent behavior that is nearly
impossible to reproduce in tests.

**Why:** Routers accumulate registrations incrementally. A refactor that moves a handler
to a new path often leaves the old registration in place. No compiler catches a double
registration — it silently routes some requests to v1, others to v2.

**How to apply:**
1. Before touching any handler: grep all router.GET/POST/PUT/DELETE/Handle calls across ALL files.
2. Build a map: route path → handler function → file. Look for duplicates on the same HTTP method.
3. Decide which registration is canonical before writing a single line of fix.
4. Remove the orphan registration, not just patch the handler.
5. Run integration tests against the actual port — unit tests cannot catch double-registration.

**Evidence:** AgenticGatewayByDojoGenesis — two independent chat handler files existed on different
routes (/api/chat and /v1/chat). Patching handle_chat.go had no effect because traffic
was routing to handle_chat_v2.go. Discovered Apr 9, 2026 after 90 minutes of misdiagnosis.
See seed commit: two-handler-trap discovery session.

**Diagnostic:** If you are about to edit a handler file because a route is misbehaving, and you
have NOT yet run `grep -r "router\." --include="*.go" | grep "[your-path]"` across every
file in the repo — apply this seed before touching any code.
When to write a seed: After any session where you wasted 30+ minutes on a mistake you could have avoided. The test: "Would this pattern have prevented that loss?" If yes, write the seed before the session ends. Seeds written after the session rarely capture the exact diagnostic.

4 Verification Checklist Template

Run this checklist after every agent completes. Agents frequently report "done" without persisting writes, or persist writes that do not compile. The checklist takes under two minutes and catches the most common failure modes before they compound.

Agent "done" is not done until independently verified. Never mark a task complete based on an agent's self-report alone. This checklist is your gate.

post-agent verification sweep · run after every agent
# Post-Agent Verification Sweep
# Run these commands in order. Each one gates the next.

# 1. Are the expected files present/modified?
git status

# 2. Do change counts match what was requested?
git diff --stat

# 3. Does the implementation contain the expected symbols?
grep -r "[ExpectedFunctionName]" [path/to/package/]

# 4. Does it compile? (Go example — replace with stack equivalent)
go build ./...

# 5. Do tests pass?
go test ./...

# 6. Read the actual implementation — does it make sense?
# Do NOT skip this step. Agents can produce plausible-looking but wrong code.
head -n 60 [path/to/new-file.go]
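The sweep above can also be wrapped in a small gate runner so a failure at any step stops the sweep. This is a sketch; `run_gate` is a hypothetical helper, not part of Claude Code:

```shell
#!/bin/sh
# run_gate: execute each argument as a shell command, in order,
# stopping at the first failure so later gates never run against
# a broken state.
run_gate() {
  for cmd in "$@"; do
    printf 'GATE: %s\n' "$cmd"
    if ! eval "$cmd"; then
      printf 'FAILED: %s\n' "$cmd" >&2
      return 1
    fi
  done
  printf 'ALL GATES PASSED\n'
}

# Example (substitute your stack's build/test commands):
# run_gate "git status" "go build ./..." "go test ./..."
```

Because each gate must exit 0 before the next runs, a compile failure is reported before tests muddy the output.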

As a Numbered Checklist

  1. git status

    Confirm the expected files appear as Modified or new Untracked. If the agent reported writing 4 files and only 1 is modified, the agent did not persist its writes — dispatch again with explicit write instructions.

  2. git diff --stat

    Change counts should be proportional to the task. A feature implementation showing +3 lines is a red flag. A refactor showing +900 lines is a red flag. Verify the magnitude matches the request.

  3. grep -r "[ExpectedFunctionName]" [path/to/package/]

    Check that the implementation contains the specific symbols you asked for. Agents sometimes write structurally valid code that implements the wrong interface or omits a key method.

  4. go build ./...

    (Replace with your stack's equivalent: cargo check, tsc --noEmit, npm run build.) The compilation gate is the minimum bar for correctness. A change that does not compile is not done.

  5. go test ./...

    Run the full test suite — not just the one test that was failing. Agents frequently fix the targeted test while introducing a regression in an adjacent test. The full suite is the only honest gate.

  6. Read the actual files

    Open the written files and read enough to verify the logic makes sense. Compilation and tests can pass while the implementation is technically wrong. This step is non-automatable.

Red Flags — signs an agent falsely reported completion
  • git status is clean after an agent claims to have written 3 new files — the writes were not persisted (common with worktree isolation or session timeouts).
  • Only the targeted test passes but the agent ran no others — "all tests pass" was a false claim.
  • The implementation is ~5 lines for a feature that should require 50+ — the agent wrote a stub and called it done.
  • Function bodies contain panic("not implemented") or empty returns — the agent scaffolded, not implemented.
  • No error handling in any branch of a Go function — the agent skipped error propagation to save tokens.
  • The agent reports a build error, then immediately says "fixed" without showing the fix — the fix was hallucinated.
  • Import paths reference packages that do not exist in the go.mod — the agent invented a dependency.
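Several of these red flags are mechanically greppable. A sketch of a stub scan for Go code (`scan_stubs` is a hypothetical helper; the marker list is a starting point, not exhaustive):

```shell
# scan_stubs DIR: flag common stub markers an agent may leave in Go files.
# Exits 0 if any marker is found (a red flag), nonzero if the scan is clean.
scan_stubs() {
  grep -rn --include='*.go' \
    -e 'panic("not implemented")' \
    -e 'TODO' \
    -e 'FIXME' \
    "$1"
}

# Usage:
# scan_stubs server/handlers/ && echo "RED FLAG: stub markers present"
```

A clean scan does not prove the implementation is real; it only rules out the most obvious scaffolding.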

5 Convergence Ledger Entry Template

The convergence ledger is a markdown file (~/.claude/convergence-ledger.md) that records the state of all active repos after each convergence gate. A convergence gate fires when drift accumulates: YELLOW at 10+ dirty files or 4+ sessions since the last commit; RED at 25+ dirty files or 6+ sessions. Each entry below is one gate event.
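The thresholds are easy to check mechanically. A sketch, assuming the dirty-file count comes from `git status --porcelain` and the session count is tracked by hand (`classify_drift` is a hypothetical helper):

```shell
# classify_drift DIRTY_FILES SESSIONS_SINCE_COMMIT
# Echoes RED, YELLOW, or GREEN per the gate thresholds:
#   RED:    25+ dirty files or 6+ sessions since last commit
#   YELLOW: 10+ dirty files or 4+ sessions
classify_drift() {
  dirty=$1
  sessions=$2
  if [ "$dirty" -ge 25 ] || [ "$sessions" -ge 6 ]; then
    echo RED
  elif [ "$dirty" -ge 10 ] || [ "$sessions" -ge 4 ]; then
    echo YELLOW
  else
    echo GREEN
  fi
}

# Usage, inside a repo:
# classify_drift "$(git status --porcelain | wc -l)" 4
```

The filled example below (13 dirty files, 4 sessions) classifies as YELLOW under these rules.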

When to write an entry: After every convergence sweep — when you commit a batch of work, triage open items, and assess release readiness. The entry is not a log of what you did; it is a snapshot of where you are and what decisions were made.

Entry Template

~/.claude/convergence-ledger.md · convergence gate entry format
## Convergence — YYYY-MM-DD

- Sessions consumed: N   # sessions since last ledger entry
- Drift severity: YELLOW  # YELLOW (10+ dirty files or 4+ sessions) | RED (25+ or 6+)
- Commits landed: N across M repos

- Validations: N pass, M fail
  # List each failure briefly:
  - FAIL: [repo]: [what failed] — [blocked / will fix in next session]
  - PASS: [repo]: go build ./... + go test ./...

- Open items:
  - Next (will tackle next session): [item 1], [item 2]
  - Parking lot (acknowledged, not urgent): [item A], [item B]
  - Killed (will not do): [item X] — [reason it was cut]

- Release readiness:
  - [repo@version]: READY
  - [repo@version]: BLOCKED([reason — specific, not vague])
  - [repo@version]: IN PROGRESS([current milestone])

- Deploy retro:
  # What surprised you in this convergence sweep?
  # What should become a seed?
  - Surprised by: [observation]
  - Seed candidate: [pattern name — write the full seed separately]

- Strategic assessment:
  [One paragraph. Is the project on track? Is the scope creeping?
  What is the highest-leverage next move? Write this as if briefing
  your future self after a 2-week break from the project.]

Filled Example

convergence-ledger.md · filled example entry
## Convergence — 2026-04-14

- Sessions consumed: 4
- Drift severity: YELLOW (13 dirty files, 4 sessions since Apr 10 commit)
- Commits landed: 9 across 3 repos (gateway: 4, cli: 3, pdi: 2)

- Validations: 5 pass, 1 fail
  - FAIL: pdi: BLS LAUS fetch returns empty — rate-limited by API; script logic is correct, re-run after 24h reset
  - PASS: gateway: go build ./... + go test ./... clean at 96aa95d
  - PASS: cli: go build ./... clean at cc81104
  - PASS: pdi: PostGIS 22,949 rows loaded, all non-BLS indicators present
  - PASS: gateway: Wave 2C RAG handler returns 200 on POST /api/documents

- Open items:
  - Next: gateway protocol module merge (2→1); PDI BLS re-run after rate limit reset
  - Parking lot: goreleaser `brews` → `homebrew_casks` deprecation warning; HTMLCraft v3.5 Polish
  - Killed: DojoChat new features — hibernated, resume after CLI ships (decision Apr 9)

- Release readiness:
  - gateway@v3.0.0: READY (Wave 1+2A+2B+2C complete)
  - cli@v1.0.0: READY (Homebrew live at DojoGenesis/tap)
  - pdi@HEAD: BLOCKED(BLS LAUS data null — re-run pending)
  - htmlcraft@v3.6: READY (140 tests pass, backend frozen)

- Deploy retro:
  - Surprised by: PDI Python pipeline had 14 columns richer than the canonical source. Audit-before-sync seed proved its value.
  - Seed candidate: Government API Batch Silence — batch params silently drop series; single-series diagnostic first

- Strategic assessment:
  Gateway and CLI are in a stable release state. The primary risk is the PDI BLS gap — it
  is a data gap, not a code gap, and resolves automatically after the rate limit resets.
  The highest-leverage next move is the gateway protocol module merge: it reduces surface area
  before adding new consumers, and is a 1-session task with zero integration risk. After that,
  resume HTMLCraft v3.5 Polish which has been blocked by higher-priority work since April 9.
  DojoChat hibernation remains the right call — premature resumption would fragment focus
  at exactly the moment CLI distribution needs maintenance attention.
Ledger discipline: The strategic assessment paragraph is the most valuable part. A ledger entry without it is just a log. Write the assessment as if you will not return to this project for two weeks — because you might not.