kcdist
v1.0.14
Claude Code distribution manager
claude-distribution
Self-contained Claude Code distribution — hooks, rules, skills, and lessons — designed to be cloned/symlinked/submoduled as .claude/ in target projects. Single source of truth for harness configuration across every project.
Owned by an AI Engineer working on harness + context engineering and solution architecture. Skills and hooks are tuned for that workflow: aggressive context discipline, root-cause debugging, plan-first implementation, cross-session memory.
Table of contents
- Quick start
- Architecture
- Skills catalog (27 skills)
- Hooks catalog (15 hooks)
- Lessons
- Rules
- Customization
- Conventions
Quick start
Add to a project
```bash
# Option A — git submodule (recommended for shared projects)
git submodule add <repo-url> .claude
git submodule update --remote .claude

# Option B — symlink (one repo, all projects share same config)
git clone <repo-url> ~/work/archives/cc-distribution
ln -s ~/work/archives/cc-distribution /path/to/project/.claude
```

One-time per project: install hook tools

```bash
.claude/setup-hooks
```

This builds a Python venv at `.claude/.venv/` and installs the lint tools used by hooks: ruff, black, isort, flake8. `uv` is required (`brew install uv` or astral.sh/uv).
Optional: global setup
For statusline + global ~/.claude/ configuration, run once per machine:
```bash
~/work/archives/cc-distribution/.setup_global
```

Architecture
```
cc-distribution/                  ← cloned/symlinked as .claude/ in projects
├── settings.json                 ← hook wiring, permissions (committed, shared)
├── .claude/settings.local.json   ← when working INSIDE this repo itself
├── setup-hooks                   ← venv installer (uv-based, idempotent)
├── .venv/                        ← gitignored, built by setup-hooks
│   └── bin/python                ← used by lint hooks
├── hooks/                        ← lifecycle hooks (.cjs Node)
│   ├── lib/                      ← shared Node utilities
│   └── *.cjs
├── skills/                       ← user-invocable skills (each has SKILL.md)
│   └── <skill-name>/
├── rules/                        ← auto-loaded into every conversation
├── docs/                         ← detailed standards (read on-demand)
├── lessons/                      ← harness engineering reference library
└── statusline/                   ← deployed by .setup_global
```

Key idea: the entire distribution is portable. Clone it anywhere, run setup-hooks, and every hook + skill works without external paths or system installs. The only host requirements are node, python3, and uv.
Skills catalog
Skills are user-invocable workflows with their own context and tool permissions. Trigger via /skill-name or natural language ("review this PR", "debug this", etc.). Each skill lives in skills/<name>/ with a SKILL.md defining its name, description, allowed tools, and proactive triggers.
Grouped below by purpose.
Planning & review
plan-ceo-review
CEO/founder-mode plan critique. Rethinks the problem from first principles, challenges premises, and proposes scope changes. Four modes:
- SCOPE EXPANSION — dream big, find the 10-star product
- SELECTIVE EXPANSION — hold scope, cherry-pick high-leverage additions
- HOLD SCOPE — maximum rigor, no scope creep
- SCOPE REDUCTION — strip to essentials
When to use: "think bigger", "expand scope", "is this ambitious enough", "rethink this", or before committing to a plan that feels small. Proactively suggested: when the user is questioning the ambition of a draft plan.
plan-eng-review
Engineering-manager plan review. Locks in execution: architecture, data flow, edge cases, test coverage, performance budgets. Walks the plan interactively with opinionated recommendations. Catches architectural issues before any code is written.
When to use: "review the architecture", "engineering review", "lock in the plan", before starting implementation on a non-trivial feature.
plan-design-review
Designer's-eye plan review. Rates each design dimension 0–10, explains what would make it a 10, then patches the plan to get there. Plan-mode only — for live UI audits use qa-design-review instead.
When to use: "review the design plan", "design critique", whenever a plan has UI/UX components.
decompose
Feature decomposer. Breaks high-level features into atomic specs with test steps, acceptance criteria, and dependency graphs. Different from a plan — produces FEATURE SPECS that subagents can execute independently.
When to use: before parallel/subagent implementation, when a feature is too vague to start coding.
Implementation safety
careful
Destructive command guardrail. Intercepts rm -rf, DROP TABLE, git push --force, git reset --hard, kubectl delete, etc. Each warning is overridable. Survives hostile autocomplete.
When to use: touching prod, debugging live systems, working in shared environments. Trigger with "be careful", "safety mode", "prod mode".
Mechanism: registers a PreToolUse(Bash) hook that runs bin/check-careful.sh before every shell command.
freeze
Edit-scope guardrail. Restricts Edit/Write to one directory per session. Blocks accidental edits outside the freeze boundary.
When to use: debugging in a large monorepo, scoping a refactor to one module, preventing "fix unrelated thing" drift. Trigger with "freeze this folder", "lock down edits".
Mechanism: registers a PreToolUse(Edit) hook that runs bin/check-freeze.sh.
checkpoint
Save/resume working state. Captures git state, decisions made, remaining work — so you can pick up exactly where you left off, even across Conductor workspace handoffs between branches.
When to use: ending a session, switching context, before a long break. Trigger with "checkpoint", "save progress", "where was I", "pick up where I left off". Proactively suggested: when a session is winding down or context is being switched.
investigate
Root-cause debugging. Four phases: investigate → analyze → hypothesize → implement. Iron Law: no fixes without a root cause identified first.
When to use: any bug report, 500 errors, stack traces, "it was working yesterday". The skill proactively claims these instead of letting Claude jump straight to a fix.
Pairs with: verify (afterward, to confirm the fix landed correctly).
Quality & verification
verify
Pre-completion checklist. Runs lint, intent-vs-diff match, basic quality gates before letting Claude claim "done". Proactively suggested before any "I'm done" announcement.
When to use: finishing a feature, before committing, before pushing. Catches the "looks done but lint is broken" failure.
qa
Full QA loop with fixes. Systematically tests a web application, finds bugs, then iteratively fixes them in source code, committing each fix atomically and re-verifying with screenshots. Three tiers: Quick (critical/high), Standard (+ medium), Exhaustive (+ cosmetic). Produces before/after health scores.
When to use: "QA this site", "find bugs", "test and fix", "does this work?".
qa-only
QA report without fixes. Same systematic test pass as qa, but never modifies code. Outputs a structured report with health score, screenshots, and repro steps.
When to use: when you want a bug list to triage manually, not auto-fix.
qa-design-review
Designer's-eye QA on a live site. Catches visual inconsistency, spacing issues, hierarchy problems, AI-slop patterns, slow interactions — then fixes them in source with atomic commits and before/after screenshots. (Aliased to /design-review.)
When to use: polishing a live site, "audit the design", "check if it looks good".
review
Pre-landing PR review. Diffs against the base branch and looks for SQL safety issues, LLM trust-boundary violations, conditional side effects, structural problems. Last line of defense before merge.
When to use: before merging, "review this PR", "check my diff".
entropy
Codebase health scan. Detects dead code, documentation drift, unused dependencies, pattern violations. Periodic maintenance tool — not a daily-use skill.
When to use: before major releases, when the codebase "feels messy", quarterly cleanup.
health
Code quality dashboard. Wraps existing project tools (type checker, linter, test runner, dead-code detector, shell linter), computes a weighted 0–10 composite score, and tracks trends over time across runs.
When to use: "health check", "code quality", "how healthy is the codebase", post-deploy validation.
Second opinion
codex
OpenAI Codex CLI wrapper. Three modes:
- review — independent diff review with pass/fail gate
- challenge — adversarial mode that tries to break your code
- consult — open-ended questions with session continuity
A blunt, relentlessly literal second opinion — used when you want a non-Claude perspective on a hard problem or risky change.
When to use: before shipping a complex change, when stuck on architecture, when you want adversarial pressure-testing. Trigger with "codex review", "codex challenge", "ask codex", "second opinion".
Workflow & shipping
ship
End-to-end ship workflow. Detects + merges base branch, runs tests, reviews diff, bumps VERSION, updates CHANGELOG, commits, pushes, opens PR. One command from "code is ready" to "PR is open".
When to use: "ship", "deploy", "push to main", "create a PR".
document-release
Post-ship documentation update. Reads all project docs, cross-references the diff, updates README/ARCHITECTURE/CONTRIBUTING/CLAUDE.md to match what shipped, polishes CHANGELOG voice, cleans up TODOS, optionally bumps VERSION.
When to use: after a PR is merged, "update the docs", "sync documentation". Proactively suggested after any ship.
retro
Weekly engineering retrospective. Analyzes commit history, work patterns, and code-quality metrics with persistent history and trend tracking. Team-aware — breaks down per-person contributions with praise and growth areas.
When to use: end of week/sprint, "weekly retro", "what did we ship".
harness-tune
Self-improving harness. Analyzes conversation transcripts and session patterns to identify failure modes, then proposes specific harness improvements (new hooks, rules, or skills). The meta-skill that makes the rest of the distribution better over time.
When to use: after a frustrating session, repeated failures, periodically (monthly) to look for improvements.
Tooling
browse
Headless browser for QA + dogfooding. Navigate any URL, interact with elements, verify state, diff before/after actions, take annotated screenshots, check responsive layouts, test forms/uploads, handle dialogs. ~100 ms/command.
When to use: verifying a deployment, dogfooding a flow, filing a bug with evidence, any browser-based testing.
Used by: qa, qa-only, qa-design-review for screenshot capture.
generate-architecture-diagrams
Cloud architecture diagrams from text. Uses the Python diagrams library to produce PNG + Draw.io files for AWS / Azure / GCP / on-prem topologies.
When to use: any architecture diagram, infrastructure visualization, data-flow diagrams, ADR illustrations.
Research
research-pipeline
Paper discovery, citation chain exploration, and knowledge management for CV/ML researchers. Six CLI tools covering the full research workflow: multi-source search (Semantic Scholar + arXiv + OpenReview), citation chain exploration with saturation detection, progressive paper scanning (L0–L3), JSONL knowledge base, gap analysis (method × dataset matrix), and export (BibTeX, related work draft, CSV).
Architecture:
- APIs: Semantic Scholar (primary, 200M+ papers), arXiv (preprint freshness + PDFs), OpenReview (optional review scores)
- Storage: JSONL files — `index.jsonl` (~100 tok/paper), `graph.jsonl` (citation edges), `queue.jsonl` (exploration queue), `stats.jsonl` (session logs)
- Scoring: free signals (citation count, venue tier, recency, field match) weighted 0.0–1.0, plus LLM relevance prompt generation
- Scanning: progressive disclosure L0 (30 tok) → L1 (200 tok) → L2 (2000 tok) → L3 (8000 tok) with auto-promotion thresholds
- Exploration funnel: 20 abstracts → 10 scored → 3 deep read per iteration; depth cap 3 hops; saturation at >70% overlap
Setup:
```bash
cd skills/research-pipeline
bash scripts/setup.sh        # Install deps (requests==2.32.5, lxml==4.9.3), create papers-kb/
bash scripts/env-check.sh    # Verify READY
export S2_API_KEY=your_key   # Optional: 100x faster S2 rate limits (10k/5min vs 100/5min)
```

Typical workflow:
```bash
# 1. Find seed papers
python scripts/search.py --query "3D object detection" --venues CVPR ICLR --year-from 2023 --seed

# 2. Explore citation chains (interactive — pauses each iteration for review)
python scripts/explore.py --topic "3D object detection" --iterations 3 --max-depth 3

# 3. Analyze gaps
python scripts/gaps.py --kb papers-kb

# 4. Export for LaTeX
python scripts/export.py bibtex --output refs.bib
python scripts/export.py related-work --topic "3D detection" --output related.md
```

Pipeline routing (how Claude decides which tool to invoke):
| User says | Route to | Key args |
|-----------|----------|----------|
| "Find papers about X" | search.py | --query, --venues, --year-from, --seed |
| "Explore citations of X" | explore.py | --topic, --iterations, --max-depth |
| "Scan these papers at level N" | scan.py | --level (l0/l1/l2/l3/auto), --topic |
| "What's in my KB?" | kb.py | stats, query --text, query --venue |
| "Update paper status" | kb.py | update --id ID --status read |
| "What gaps exist?" | gaps.py | --methods, --datasets, --auto-detect |
| "Export for LaTeX" | export.py bibtex | --output, --venue, --year-from |
| "Draft related work" | export.py related-work | --topic, --output |
| "Export as spreadsheet" | export.py csv | --output |
Progressive scanning levels (token budget per paper):
| Level | ~Tokens | Content included | When to use |
|-------|---------|------------------|-------------|
| L0 | 30 | title, venue, year, citation count | Triage a batch of 50+ papers |
| L1 | 200 | + abstract first sentence + free-signal score | Filter 20 → 10 |
| L2 | 2,000 | + full abstract + LLM scoring prompt | Score 10 → 3 candidates |
| L3 | 8,000 | + key contributions + method + results template | Deep read top 3 |
Auto-promote thresholds: free score ≥ 0.3 → L1, ≥ 0.5 → L2, ≥ 0.7 → L3 candidate.
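As a sketch, the promotion logic implied by these thresholds looks like the following (function name hypothetical, not the pipeline's actual code):

```javascript
// Map a free-signal score to a scan level, per the thresholds above.
// Illustrative only; the real scan.py implements this in Python.
function autoPromote(freeScore) {
  if (freeScore >= 0.7) return "L3-candidate";
  if (freeScore >= 0.5) return "L2";
  if (freeScore >= 0.3) return "L1";
  return "L0"; // below every threshold: stays at triage level
}
```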
Scoring signals (weighted, no paid API needed):
| Signal | Weight | Formula |
|--------|--------|---------|
| Citation count | 0.25 | log-normalized against batch max |
| Influential citations | 0.15 | log-normalized against batch max |
| Venue tier | 0.25 | CVPR/ICLR/ICML/NeurIPS/ECCV = 1.0, workshop = 0.3, arXiv = 0.5 |
| Recency | 0.15 | 1.0 − (current_year − year) / 10, clamped [0,1] |
| Field match | 0.20 | 1.0 if CS, 0.5 if related, 0.3 default |
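The table above can be sketched as a single scoring function. All names here, and the exact shape of the batch-max normalization, are illustrative assumptions rather than the pipeline's actual implementation:

```javascript
// Weighted free-signal score, per the table above (illustrative sketch).
const WEIGHTS = { citations: 0.25, influential: 0.15, venue: 0.25, recency: 0.15, field: 0.20 };
const TOP_VENUES = new Set(["CVPR", "ICLR", "ICML", "NeurIPS", "ECCV"]);

function venueTier(venue) {
  if (TOP_VENUES.has(venue)) return 1.0;
  if (/workshop/i.test(venue)) return 0.3;
  return 0.5; // arXiv / other
}

function logNorm(value, batchMax) {
  // Log-normalize against the batch maximum, guarding the empty-batch case.
  return batchMax > 0 ? Math.log1p(value) / Math.log1p(batchMax) : 0;
}

function recency(year, currentYear) {
  return Math.min(1, Math.max(0, 1 - (currentYear - year) / 10)); // clamped [0,1]
}

function freeScore(paper, batchMax, currentYear) {
  return (
    WEIGHTS.citations   * logNorm(paper.citations, batchMax.citations) +
    WEIGHTS.influential * logNorm(paper.influential, batchMax.influential) +
    WEIGHTS.venue       * venueTier(paper.venue) +
    WEIGHTS.recency     * recency(paper.year, currentYear) +
    WEIGHTS.field       * paper.fieldMatch // 1.0 CS, 0.5 related, 0.3 default
  );
}
```

A current-year CVPR paper that tops its batch on both citation signals scores exactly 1.0; the weights sum to 1.0 by construction.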
Knowledge base structure:
```
papers-kb/
├── index.jsonl   # Paper metadata (~100 tokens/entry, git-friendly)
├── queue.jsonl   # Exploration queue with priority + depth tracking
├── graph.jsonl   # Citation edges (from_id → to_id, direction)
├── stats.jsonl   # Append-only session logs for saturation tracking
├── topics/       # Generated Markdown synthesis files per topic
└── papers/       # Downloaded PDFs (arXiv only, path-validated)
```

Exploration behavior:
- Interactive mode (default): after each iteration, prints a summary and waits — `y`/Enter to continue, `n`/`q` to stop, `s` to show detailed scores
- Saturation detection: warns at >70% overlap, auto-stops at <15% novelty
- Depth cap: papers at depth ≥ `--max-depth` (default 3) are not queued further
- Dry-run: `--dry-run` shows what would happen without modifying the KB
Security model:
- No `eval`/`exec` anywhere in the codebase
- Safe XML parsing: `lxml` with `resolve_entities=False`, `no_network=True`
- All file I/O paths validated via `path_guard.py` (prevents traversal)
- API keys read from env vars only, never logged or written to files
- HTTP timeouts (30 s) on all requests; rate limiters on all API clients
- Atomic JSONL writes (temp file + `os.replace`) prevent corruption on crash
- Pinned dependency versions (`requests==2.32.5`, `lxml==4.9.3`)
When to use: "find papers about X", "literature review", "explore citations", "what's the gap in X research", "export bibliography", "related work for my paper on X".
Triggers: find papers, search papers, literature review, citation chain, explore citations, gap analysis, related work, paper discovery.
Dependencies: Python 3.9+, requests, lxml. No paid API keys required (S2 key optional for faster rate limits).
Document generation
minimax-docx
Professional DOCX creation, editing, and formatting. Three pipelines: create from scratch, fill/edit existing documents, apply template formatting. Uses python-docx + lxml. Compliant with ECMA-376, GB/T 9704-2012, and major style guides (IEEE, ACM, APA, MLA, Chicago).
When to use: "write a report", "draft a proposal", "make a contract", "fill this form", "reformat to match template", or any task whose output is a .docx file.
minimax-pdf
PDF generation and manipulation. Create professional PDF documents with formatting, tables, charts, and layout control.
When to use: "generate a PDF", "create a report as PDF", or any task requiring PDF output.
minimax-xlsx
Excel spreadsheet creation and manipulation. Create formatted spreadsheets with formulas, charts, pivot-table-ready layouts, and multi-sheet workbooks.
When to use: "create a spreadsheet", "make an Excel file", "export data to xlsx".
pptx-generator
PowerPoint presentation creation. Generate slide decks with layouts, themes, charts, and speaker notes.
When to use: "create a presentation", "make slides", "build a deck".
Hooks catalog
Hooks fire automatically on Claude Code lifecycle events. They're either Node CommonJS (.cjs) or Bash (.sh), wired in settings.json. Most hooks are non-blocking (inject context); a few are blocking (refuse the operation until conditions are met).
Grouped by lifecycle event.
SessionStart — runs once per session
session-init.cjs
Initializes session environment. Detects project type (Node/Python/Go/etc.), language versions, OS, git state. Persists this to environment variables that downstream hooks consume. The "boot" of the harness for a new session.
Why it matters: every other hook reads project info from env vars set here. This is the one place where heavy detection runs.
Crash-safe: wrapped in try/catch with crash logging to hooks/.logs/hook-log.jsonl.
Reuse: core logic in lib/project-detector.cjs.
inject-rules.cjs
Loads rules into context. Reads every .md in rules/ and outputs their content so Claude has them in context from message #1 of the session. This is how the global rules stay binding without manual @-imports.
Cost: every line in rules/*.md is loaded into every session. Keep rules/ lean — detailed standards belong in docs/.
SubagentStart — runs when a Task tool spawns a subagent
subagent-init.cjs
Injects minimal context to subagents. Optimized to ~200 tokens (down from ~350) by reading env vars set in session-init.cjs instead of re-detecting everything. Keeps subagent context windows clean.
Why it matters: subagents start fresh and don't see the parent's CLAUDE.md. This hook gives them just enough orientation to work.
UserPromptSubmit — runs every time the user sends a message
token-efficiency-reminder.cjs
Injects token discipline rules. Reminds Claude to match effort to request complexity — short questions get short answers, no preambles, no padding. Counter-balances the model's natural tendency toward verbosity.
Scenario: user asks "what time is it?" — without this hook, Claude might preamble with "Let me check that for you...". With it, the response is the time.
dev-rules-reminder.cjs
Injects session context + rules + Plan Context. On every prompt, re-injects the workflow rules and the active plan path (if any) so Claude doesn't drift mid-session. Reads static env from session-init.cjs to avoid recomputation.
Plan Context: if a plan is active in plans/*/, this hook injects a ## Plan Context section pointing to it — used by /preview --diagram and others to know where to save artifacts.
Reuse: core logic in lib/context-builder.cjs.
usage-context-awareness.cjs
Tracks Claude Code usage limits. Polls the Anthropic OAuth API for current usage/rate-limit state, caches it (60 s TTL), and writes to a file the statusline + context-builder read.
Why it matters: lets Claude proactively warn when running low on quota, switch to cheaper modes, or surface the limit in the statusline. Cost control: throttled (1 min for prompts, 5 min for tool use) so it doesn't hammer the OAuth endpoint.
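The TTL-cache pattern behind this can be sketched as follows (illustrative; the OAuth poll is stubbed out as a callback, and the real hook also writes the payload to a file for the statusline):

```javascript
// Return cached usage data unless it is older than 60 seconds.
const TTL_MS = 60_000;
let cache = null; // { fetchedAt, data }

function getUsage(fetchFn, now = Date.now()) {
  if (cache && now - cache.fetchedAt < TTL_MS) return cache.data; // still fresh
  cache = { fetchedAt: now, data: fetchFn() }; // e.g. poll the usage endpoint
  return cache.data;
}
```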
PreToolUse — runs before specific tool calls
guard-task.cjs (matcher: Task)
Forces approval before subagent spawning. Subagents are expensive — each one starts a fresh context window and burns tokens on initialization. This hook makes the user explicitly approve each Task call.
Scenario: Claude wants to delegate a "small" task to a subagent that could be done inline. The hook blocks until the user says "yes spawn it" or "no do it yourself".
descriptive-name.cjs (matcher: Write)
Injects file-naming guidance. On every Write (new file creation), injects naming conventions: kebab-case for JS/TS/Python/shell, PascalCase for C#/Java/Kotlin/Swift, snake_case for Go/Rust. Goal: self-documenting names so Grep/Glob/Search find the right files.
Non-blocking: just adds context; the Write proceeds.
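A hypothetical sketch of the extension-to-convention mapping described above (names and structure are illustrative, not the hook's actual code):

```javascript
// Map a new file's extension to the naming convention to suggest.
const NAMING = {
  js: "kebab-case", ts: "kebab-case", py: "kebab-case", sh: "kebab-case",
  cs: "PascalCase", java: "PascalCase", kt: "PascalCase", swift: "PascalCase",
  go: "snake_case", rs: "snake_case",
};

function namingHint(filePath) {
  const ext = filePath.split(".").pop();
  return NAMING[ext] || null; // null: no guidance injected for unknown types
}
```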
enforce-doc-rules.cjs (matcher: Edit|Write|MultiEdit)
Enforces documentation conventions on docs/ edits. Fires when editing docs/*.md. Injects a reminder to follow rules/doc-*.md conventions (structure, headings, voice).
Scenario: Claude is updating docs/plans/feature-x.md — this hook makes sure it follows the existing doc voice rather than inventing a new one.
loop-detection.cjs (matcher: Edit|Write|MultiEdit)
Warns on doom loops. Tracks per-file edit counts in /tmp. After a threshold (e.g. 5 edits to the same file in one session), injects a warning to step back and re-investigate instead of looping.
Scenario: Claude keeps tweaking the same function trying to make a test pass — the hook nudges it to re-read the test, not patch blindly.
Cleanup: tracking files removed by session-cleanup.cjs at session end.
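The edit-counter mechanism can be sketched like this (in-memory here for illustration; the real hook persists counts to /tmp so they survive across hook invocations):

```javascript
// Count edits per file; past a threshold, return a warning to inject.
const EDIT_THRESHOLD = 5;
const counts = new Map();

function recordEdit(filePath) {
  const n = (counts.get(filePath) || 0) + 1;
  counts.set(filePath, n);
  return n >= EDIT_THRESHOLD
    ? `Edited ${filePath} ${n} times this session: step back and re-investigate before patching again.`
    : null; // below threshold: stay silent
}
```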
scout-block.cjs (matcher: Bash|Glob|Grep|Read|Edit|Write)
Blocks access to ignored directories. Reads .claude/.ckignore (gitignore-spec compliant patterns). Blocks cd node_modules, cat dist/file.js, etc. — but allows build commands like npm build, cargo build, terraform, kubectl.
Why: Claude burns tokens reading node_modules/ and dist/ artifacts. This hook makes those directories invisible to all read tools, while keeping the build pipeline functional.
Override: edit .claude/.ckignore; supports ! negation.
Reuse: logic in lib/scout-checker.cjs.
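A simplified sketch of last-match-wins ignore matching with `!` negation (the real checker follows full gitignore semantics; this version handles only path prefixes):

```javascript
// Decide whether a path is blocked by a list of .ckignore-style patterns.
function isBlocked(filePath, patterns) {
  let blocked = false;
  for (const raw of patterns) {
    const negated = raw.startsWith("!");
    const pat = (negated ? raw.slice(1) : raw).replace(/\/$/, "");
    if (filePath === pat || filePath.startsWith(pat + "/")) blocked = !negated;
  }
  return blocked; // the last matching pattern wins, as in gitignore
}
```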
privacy-block.cjs (matcher: Bash|Glob|Grep|Read|Edit|Write)
Blocks access to sensitive files. Stops Read .env, credentials.json, etc. unless the tool call uses an APPROVED: prefix that the LLM can only add after asking the user.
Flow:

1. LLM tries `Read .env` → BLOCKED
2. LLM asks user for permission
3. User approves
4. LLM retries `Read APPROVED:.env` → ALLOWED
Why it matters: prevents accidental leakage of secrets into context (and therefore into transcripts).
Reuse: logic in lib/privacy-checker.cjs.
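The APPROVED: gate can be sketched as follows (patterns and names illustrative, not the actual checker):

```javascript
// Block reads of sensitive files unless the input carries an explicit
// APPROVED: prefix, which the model may only add after asking the user.
const SENSITIVE = [/\.env$/, /credentials\.json$/];

function checkPrivacy(toolInput) {
  if (toolInput.startsWith("APPROVED:")) {
    return { allow: true, path: toolInput.slice("APPROVED:".length) };
  }
  const blocked = SENSITIVE.some((re) => re.test(toolInput));
  return { allow: !blocked, path: toolInput };
}
```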
PostToolUse — runs after specific tool calls
build-sensor.cjs (matcher: Edit|Write|MultiEdit)
Auto-runs the project build/compile after code edits. Detects the project's build command from session-init.cjs env vars and runs it in the background. Context-efficient: swallows success silently, surfaces ONLY failures back to Claude.
Scenario: Claude edits a TypeScript file → hook runs tsc --noEmit → if it fails, the error lands in Claude's next turn so it can fix immediately. If it passes, Claude doesn't know it ran.
Why it matters: closes the "edited code without checking it compiles" feedback gap.
Stop — runs when Claude is about to finish a turn
stop-verify.cjs
Pre-completion verification. Runs lint and intent-vs-diff match. Prompt-based — outputs a decision Claude must act on rather than blocking outright. Uses stop_hook_active to prevent infinite loops.
Scenario: Claude says "done" → hook checks lint → if broken, injects a "fix lint first" message → Claude fixes → tries to stop again → hook sees stop_hook_active=true → lets it through.
Timeout: 45 s.
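The loop guard can be sketched as follows (illustrative: the lint check is stubbed as a boolean, and the output shape loosely follows the hook decision schema):

```javascript
// Block the stop once if lint fails; let the second attempt through
// via stop_hook_active so the hook can never loop forever.
function stopVerify(input, lintPasses) {
  if (input.stop_hook_active) return {}; // already re-checked once: allow the stop
  if (!lintPasses) {
    return { decision: "block", reason: "Lint is failing. Fix it before finishing." };
  }
  return {}; // clean: let Claude finish
}
```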
SessionEnd — runs once at session end
session-cleanup.cjs
Cleans up /tmp state. Removes the per-file edit counters that loop-detection.cjs writes during the session. Best-effort — never fails the session even if cleanup errors.
Rules
Every .md file in rules/ is auto-loaded into context at session start via inject-rules.cjs. Every line costs context capacity — keep rules lean, detailed standards live in docs/.
| File | Purpose |
|---|---|
| development-rules.md | YAGNI/KISS/DRY, file naming, file size limits, code quality bars |
| primary-workflow.md | Plan → implement → test → review pipeline |
| documentation-management.md | Roadmap/changelog/architecture maintenance, plan structure |
| agent-context-architecture.md | When to delegate, sequential vs parallel, work context paths |
| claude-md-conventions.md | How to write project CLAUDE.md files |
| git-safety.md | Destructive-action rules, never --no-verify, commit hygiene |
| cost-awareness.md | Token budgets, when to use cheap vs heavy models |
| research.md | Research workflow and source priority |
| python/ | Python-specific conventions |
Lessons
lessons/ is a harness engineering reference library — 11 factor files synthesizing patterns from Claude Code internals (~513K LOC), Bilgin Ibryam's 12 agentic harness patterns, and the claw-code research project. Each file is a self-contained study of one dimension of agentic system design.
| File | Covers |
|---|---|
| memory.md | Tiered memory, extraction, dream consolidation, decay, team memory |
| context.md | 5-layer compaction, scoped assembly, dynamic boundary, instruction budgets |
| workflow.md | Explore-plan-act, coordinator restriction, phase permissions, main loop |
| multi-agent.md | Subagent/coordinator/fork patterns, verification agents, composability |
| permissions.md | 6-layer classification, dangerous patterns, AST analysis, denial tracking |
| tools.md | Single-purpose design, concurrency partitioning, streaming execution |
| recovery.md | Graduated recovery, circuit breakers, death-spiral prevention |
| hooks.md | 26 lifecycle events, fail-open semantics, hook anatomy, matchers |
| extensions.md | Skills vs tools, plugin sandbox, supply-chain security, trust tiers |
| patterns.md | Async generators, derived flags, modifier chains, speculative execution |
| production.md | Observability, testing, cost caps, session resume, prompt injection |
Each file ends with Takeaways (durable principles), Anti-patterns (what to avoid), and What this repo does (how the distribution implements the pattern). See lessons/README.md for the full guide.
Customization
Project-specific permissions
Create .claude/settings.local.json (gitignored) for permission overrides that shouldn't sync back:
```json
{
  "permissions": {
    "allow": [
      "Read(/path/to/private/data/**)",
      "Bash(some-internal-tool*)"
    ]
  }
}
```

Disabling individual hooks
Each .cjs hook checks isHookEnabled('hook-name') from lib/ck-config-utils.cjs. Disable a hook by adding to .claude/.ckconfig.json:
```json
{
  "hooks": {
    "build-sensor": false,
    "loop-detection": false
  }
}
```

Tuning scout-block
Edit .claude/.ckignore (gitignore syntax). Default blocks node_modules/, dist/, .venv/, etc. Use ! to allow specific paths back in.
Conventions
- Hook scripts: `.cjs` (CommonJS) for Node. Shared utilities in `hooks/lib/`.
- Rules: kebab-case `.md` files, auto-loaded into every conversation. Keep lean.
- Docs: detailed standards live in `docs/`, read on-demand. Rules are the map; docs are the territory.
- Skills: one directory per skill, each with `SKILL.md` defining frontmatter (name, description, allowed-tools, optional hooks).
- Settings: `settings.json` shared (committed); `settings.local.json` machine-specific (gitignored, additive).
- `$CLAUDE_PROJECT_DIR` resolves to the target project root, NOT this repo. When this distribution is symlinked as `.claude/`, `$CLAUDE_PROJECT_DIR/.claude/` is the path back to the distribution.
- Hook `matcher` patterns are regex — `*` means match-all, `|` separates alternatives.
- Crash safety: every `.cjs` hook is wrapped in a crash try/catch that logs to `hooks/.logs/hook-log.jsonl` and exits 0 (fail-open). Hooks must never break the session.
Gotchas
- `.venv/` is gitignored — every machine must run `setup-hooks` once after cloning.
- `uv` is required — `setup-hooks` won't run without it. No fallback to `pip` on purpose (uv is faster and lockfile-aware).
- Two settings.local.json locations exist:
  - `cc-distribution/settings.local.json` (root) — applies in target projects when this distribution is symlinked
  - `cc-distribution/.claude/settings.local.json` — applies when working inside the distribution repo itself
- Skill name vs directory name: the `qa-design-review/` directory has a SKILL.md with `name: design-review` — invoked as `/design-review`, not `/qa-design-review`.
- Rules cost tokens forever — every line in `rules/*.md` is in every session. Audit periodically.
