# syntropic

v0.9.5
Ship better software. Prove it on your own code first.
A development methodology for AI-assisted coding. Works with Claude Code, Cursor, Windsurf, GitHub Copilot, and OpenAI Codex.
## Try Before You Install

```shell
npx syntropic audit
```

Analyses your last 20 git commits for process issues — localhost references, env files in git, commit discipline, fix ratios, governance docs. No install needed. Nothing sent anywhere. See what the methodology catches on your code.
## Install

```shell
npx syntropic init
```

You get: a disciplined development pipeline, governance doc templates, and a connection to the PRISM methodology network — so your rules stay current as the methodology evolves.
## What You Get

**Evergreen Rules** — battle-tested development rules fetched fresh at every cycle:

| Rule | What it prevents | Real-world save |
|------|-----------------|-----------------|
| EG1: Pre-flight | Localhost refs, broken builds reaching production | Caught 3 localhost references pre-deploy |
| EG7: Pipeline | AI jumping to code without research or planning | Structured 123 features across Full/Lightweight/Minimum cycles |
| EG8: Test first | Untested changes reaching production users | Every fix caught on test page — 12 verified promotions |
| EG10: Env hygiene | Env var corruption (trailing newlines, wrong formats) | Prevented silent API failures across 6 providers |
| EG11: Prod sync | Access control divergence between test and production | Caught lockout bug before 8+ users were affected |
| EG14: Doc discipline | Lost decisions, repeated mistakes, no project memory | Backlog, issues, and ADRs updated every cycle |
**Governance Docs** — project management templates your AI assistant reads before every cycle:

- `.claude/docs/NORTH_STAR.md` — your vision, goals, and design principles
- `.claude/docs/BACKLOG.md` — prioritised work with status tracking
- `.claude/docs/ISSUES.md` — bug log and iteration tracker
- `.claude/docs/adr/` — Architectural Decision Records

**Pipeline Agents** — research, dev, plan, qa, devops, security — fetched from the PRISM network so they're always up to date.
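As an illustration, a scaffolded `BACKLOG.md` might be shaped roughly like this — the headings, priorities, and statuses here are assumptions, not the exact template that ships:

```markdown
# Backlog

## Now
- [ ] P1 — Fix login redirect loop (status: in progress)

## Next
- [ ] P2 — Add rate limiting to API (status: planned)

## Done
- [x] P1 — Env var validation on boot (shipped, verified on test page)
```

The point is the structure: your AI assistant reads the file before a cycle to pick priorities, and updates statuses after.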
**Health Check** — daily GitHub Action with auto-remediation (`npm audit fix` PRs).
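As a sketch of that idea — the file name, schedule, and steps below are assumptions, and the generated workflow may differ — a daily health check with auto-remediation could look like:

```yaml
# .github/workflows/health.yml — illustrative shape only
name: health-check
on:
  schedule:
    - cron: "0 6 * * *"   # once a day
jobs:
  health:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx syntropic health   # pre-flight checks
      - run: npm audit fix || true  # auto-remediation; a real setup would open a PR with the result
```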
## Track Record
Built in production on a real product (27 AI agents, 8-lens analyser, CLI, 7 free tools) over 10 weeks:
- 123 features shipped, each through a structured pipeline
- 12 test→production promotions — every feature verified on test page first
- 3 bug classes prevented by specific EG rules before reaching users
- 0 undetected production incidents after methodology adoption
## Trust & Privacy
**Everything is local.** Your `.claude/` directory — rules, agents, governance docs — lives in your repo. It's yours.
**Telemetry is metadata only.** PRISM learns from anonymous structural metadata about development cycles — never from your code, file contents, or project details. This metadata is what drives methodology improvements: which cycle weights work best for which project shapes, where pipelines break down, what patterns lead to first-pass success.
**What's collected:** cycle weight, phases run, success/failure, tool used, OS, framework detected, file count bucket (1-5 / 6-20 / 21+), governance doc presence.

**Never collected:** file contents, file names, code, diffs, commit messages, project names, identity, API keys.
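For illustration only, a report shaped like that list might look as follows — the field names and values are assumptions, not the actual wire schema:

```json
{
  "cycle_weight": "lightweight",
  "phases_run": ["research", "plan", "dev", "qa"],
  "success": true,
  "tool": "claude-code",
  "os": "darwin",
  "framework": "nextjs",
  "file_count_bucket": "6-20",
  "governance_docs_present": true
}
```

Note there is nothing in it that identifies you, your files, or your code.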
**Why it matters.** The PRISM network improves as contributors join: every anonymous report adds signal about what works in practice. More contributors = better data = smarter rules for everyone.
**Contributors get more.** When telemetry is enabled, your PRISM sync fetches live network benchmarks — real success rates, iteration counts, and pattern intelligence computed from aggregate data across all contributors. This is how the methodology caught 3 classes of production bugs before they reached users, and how 12 test-to-production promotions were verified clean. When telemetry is disabled, you still get the full EG rules and agents, but benchmarks are static baselines only — you lose the network learning that makes rules smarter over time. No account needed — your identity is a double-hashed device fingerprint (pseudonymisation details).
| | Community (telemetry off) | Contributor (telemetry on) |
|---|---|---|
| EG rules | Full | Full |
| Pipeline agents | Full | Full |
| Governance docs | Full | Full |
| Benchmarks | Static baselines | Live network data (success rates, iteration counts, failure patterns) |
| Pattern intelligence | Frozen snapshot | Updated from aggregate reports (which cycle weights work for your stack) |
| Cycle recommendations | Generic defaults | Tuned by real data (upgrade/downgrade triggers from contributor patterns) |
| Network benefit | None | Your reports improve rules for everyone — including you |
**Default: on.** Opt out anytime:

```shell
syntropic telemetry disable
```

**Clean exit.** Remove everything syntropic added (preserves your governance docs):

```shell
syntropic remove
```

**Inspect the output.** Run `npm pack` on the package — everything that ships is plain text templates. The methodology is served from a public API endpoint you can curl yourself.
## Commands

```shell
syntropic audit                  # Analyse git history (no install needed)
syntropic init [project-name]   # Install methodology + docs
syntropic add cursor windsurf   # Add tools to existing project
syntropic remove                # Clean uninstall (preserves docs)
syntropic health                # Run pre-flight checks
syntropic report                # Submit anonymous cycle report
syntropic telemetry status      # Check telemetry setting
syntropic analyse               # Deep-dive analysis (8 philosophy lenses)
```

## How It Works
Your instruction file contains a bootstrap rule (EG13) that fetches the full methodology from the PRISM network at cycle start. Rules, agents, and patterns are served fresh — always current, zero maintenance. Your LLM does all compute. Syntropic serves the methodology but runs zero inference.
The governance docs give your AI assistant project memory — it checks priorities before coding and logs issues after. Over time, your docs become a structured record of every decision, bug, and iteration.
## Existing Projects

Running `syntropic init` in a project that already has a `CLAUDE.md` or `.cursorrules` appends the methodology — never overwrites. A `<!-- syntropic -->` marker prevents duplicates.
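The append-with-marker behaviour can be sketched in a few lines of shell — a minimal illustration of the idempotency pattern, not syntropic's actual implementation:

```shell
# Append a methodology block to CLAUDE.md only if the marker isn't already present.
MARKER='<!-- syntropic -->'
FILE='CLAUDE.md'
if ! grep -qsF "$MARKER" "$FILE"; then   # -s: no error if the file doesn't exist yet
  {
    echo ""
    echo "$MARKER"
    echo "<methodology rules appended here>"
  } >> "$FILE"
fi
```

Running it a second time finds the marker and does nothing, so your existing instructions are never duplicated or overwritten.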
## Research
The methodology is grounded in published research: zenodo.org/records/17894441
## Links
- Website
- Intelligent Analyser — deep-dive product analysis through 8 philosophy lenses
## License
MIT
