# OpenCode SDLC Wizard
**Status:** v0.8.1 (codex round-1 fixes — Google tier mismatch, canonical env names, cost-ladder freshness, validator `addProps` — on top of v0.8.0's free-tier-first picker + cost ladder + 5 new providers + schemas + validator + full template set + `check` subcommand) — 2026-05-05. Install with
`npx opencode-sdlc-wizard init`; check upstream with `npx opencode-sdlc-wizard check`. The full SDLC loop is any-backend on both coder AND reviewer (zero Anthropic+OpenAI lock-in possible); the detector now picks up free-tier-friendly providers (Cerebras free, NVIDIA NIM credits, Groq free, Google AI Studio quota), and the new `--free-tier-first` flag biases recommendations toward $0/mo paths. See `docs/cost-ladder.md` for the $0 / $20 / $200 budget breakdown. Phase B (backend matrix proof) and Phase C (hardware scout) are deferred to follow-up releases. See `HANDOFF.md` for architecture decisions, `PRIVACY.md` for the tier model, and `CHANGELOG.md` for release notes.
SDLC enforcement for sst/opencode — the
privacy-first, any-backend agent CLI. This wizard ports the same plan → TDD
→ self-review enforcement pattern from the Claude / Codex siblings into the
OpenCode runtime, so users can get SDLC discipline against whatever model
backend their privacy / compliance constraints allow — not just Anthropic.
Supported backends OpenCode already speaks (and we'll inherit):
- Local: Ollama, LM Studio, llama.cpp, vLLM, MLX (Apple Silicon)
- Enterprise: Azure OpenAI, AWS Bedrock, internal AI gateways
- Hosted OSS: Together, Groq, OpenRouter, Cerebras, DeepSeek-direct, NVIDIA NIM
- Standard: OpenAI, Anthropic, Google AI Studio (Gemini)
For a concrete cost-vs-capability map across these — including
$0/mo, $20/mo, $200/mo budget paths and which model fits which job —
see `docs/cost-ladder.md`.
## XDLC Ecosystem (Sibling Projects)
This wizard is one of four sibling projects. Same enforcement philosophy, different agent / domain:
| Package | Agent / Domain | What It Does |
|---------|----------------|--------------|
| agentic-sdlc-wizard (repo) | Claude Code / SDLC | Plan → TDD → self-review for code, with hooks + skills + CI scoring |
| codex-sdlc-wizard (repo) | OpenAI Codex / SDLC | Same SDLC enforcement, ported to Codex CLI (writes .codex/ + AGENTS.md) |
| claude-gdlc-wizard (repo) | Claude Code / GDLC | Game Development Life Cycle — persona-driven playtest cycles, triangulated findings, ratchet-only-tightens |
| opencode-sdlc-wizard (this repo) | OpenCode / SDLC | Same SDLC enforcement, ported to OpenCode (writes .opencode/). Privacy-first, any-backend. |
All four are part of the broader XDLC ecosystem — generalized lifecycle enforcement across agents and domains.
## Roadmap
Tracked as ROADMAP #9 in the parent repo:
`BaseInfinity/claude-sdlc-wizard/ROADMAP.md`.
Three phases:
- Phase A (current target): port hooks + skills + install.sh from Claude / Codex pattern. Ship v0.1.0.
- Phase B: backend matrix proof — run E2E SDLC scenario across local (Ollama + Qwen-Coder), enterprise (Azure OpenAI), hosted OSS (Together/Groq), and Anthropic baselines. Document which backends hold SDLC compliance.
- Phase C: hardware scout for the local-tier compute requirement (gaming laptop / Windows laptop / $200–400 rig / cloud GPU rental).
## Capability floor
"Just works on every LLM" is the dream but not the spec. Small local models (7–13B) are expected to fail the full plan → TDD → self-review protocol — instruction-following, long-context reasoning, and tool-use are all load-bearing. The 30B+ code-tuned class (Qwen-Coder, DeepSeek-Coder) is the likely local sweet spot. A failed run on an undersized model is a capability result, not a port bug.
## Install
From a target repo's root, the easiest path:
```bash
npx opencode-sdlc-wizard init
```

That's it. Equivalent to the longer manual form:
```bash
git clone https://github.com/BaseInfinity/opencode-sdlc-wizard /tmp/opencode-sdlc-wizard
bash /tmp/opencode-sdlc-wizard/install.sh
```

Both paths are supported. npx is preferred for first-time installs;
`git clone` + `install.sh` is preferred when you want to inspect the bundle
before merging it. Re-run with `--force` to overwrite customizations,
`--dry-run` to preview without writing.
This non-destructively merges the wizard into your .opencode/:
- `.opencode/plugins/sdlc-wizard.js` (the OpenCode plugin shim)
- `.opencode/hooks/*.sh` (5 portable bash hooks)
- `.opencode/scripts/{detect,configure}-backend.sh` (privacy-first picker)
- `.opencode/skills/{sdlc,setup-wizard,update-wizard,feedback}/SKILL.md`
- `AGENTS.md` and `PRIVACY.md` at repo root
Existing customizations are preserved; re-run with `--force` to overwrite.

Native `npx opencode-sdlc-wizard init` shipped in v0.3.0. The bash
installer remains the inspection/scripting path.
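The non-destructive merge rule can be pictured with a small sketch — assumed logic only, not the actual `install.sh`, and `hooks/example.sh` is a hypothetical file name used purely for illustration:

```shell
# Sketch of the installer's non-destructive merge (logic assumed; the real
# behavior lives in install.sh). Bundle files are copied only when the
# destination does not already exist, unless --force is given.
set -u
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/hooks" "$dst/hooks"
printf 'bundle version\n'     > "$src/hooks/example.sh"
printf 'user customization\n' > "$dst/hooks/example.sh"   # pre-existing edit
FORCE=0   # no --force: existing files are skipped, not overwritten
for f in hooks/example.sh; do
  if [ -e "$dst/$f" ] && [ "$FORCE" != "1" ]; then
    echo "skip $f (exists; re-run with --force to overwrite)"
  else
    cp "$src/$f" "$dst/$f"
  fi
done
merged=$(cat "$dst/hooks/example.sh")
echo "$merged"   # still the user's copy
rm -rf "$src" "$dst"
```

With `FORCE=1` the `cp` branch runs instead and the bundle copy wins — the same trade-off the real `--force` flag makes.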
## Pick a backend (privacy-first)
```bash
# See what's reachable from this machine (privacy-first cascade)
bash .opencode/scripts/detect-backends.sh

# Or bias toward free-tier providers (NVIDIA NIM, Cerebras, Groq,
# Google AI Studio) before paid hosted/proprietary
bash .opencode/scripts/detect-backends.sh --free-tier-first

# Configure the highest-privacy tier you can use
bash .opencode/scripts/configure-backend.sh \
  --tier private_local --provider ollama \
  --model qwen3-coder:30b

# Or for a $0/mo free-tier setup (Cerebras free, sub-second inference):
bash .opencode/scripts/configure-backend.sh \
  --tier hosted_oss --provider cerebras \
  --model llama-3.3-70b
```

Four tiers, ordered by where your prompts travel:
| Tier | Travels to | Examples |
|------|------------|----------|
| `private_local` | Stays on your machine | Ollama, LM Studio, llama.cpp, vLLM |
| `enterprise` | Your tenant | Azure OpenAI, AWS Bedrock |
| `hosted_oss` | Third-party host | Together, Groq, OpenRouter |
| `proprietary` | Vendor (Anthropic/OpenAI) | Claude, GPT |
Detector probes PATH + env vars only — no network calls. Configurator merges
non-destructively into `opencode.json` and refuses to clobber an existing
model pin without `--force`. See `PRIVACY.md` for the
Ollama walkthrough and verification checklist.
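The PATH/env probe can be sketched roughly like this — assumed logic, not the real cascade in `.opencode/scripts/detect-backends.sh`, and `DEMO_CEREBRAS_API_KEY` is a dummy variable exported here only so the demo always fires:

```shell
# Probe shape only: PATH lookups and env checks, no network calls.
set -u
probe_bin() {   # private_local tier: is a local binary on PATH?
  command -v "$1" >/dev/null 2>&1 && echo "private_local candidate: $1"
}
probe_key() {   # hosted_oss tier: is a provider API key exported?
  [ -n "$(printenv "$1" 2>/dev/null || true)" ] && echo "hosted_oss candidate: $1"
}
export DEMO_CEREBRAS_API_KEY=dummy   # dummy value for a reproducible demo
probe_bin sh                         # sh is on PATH everywhere
probe_key DEMO_CEREBRAS_API_KEY
```

Because nothing here leaves the machine, the probe itself is safe to run at the strictest privacy tier; only the backend you then configure determines where prompts travel.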
## Tests
```bash
bash tests/test-bundle-integrity.sh     # bundle correctness
bash tests/test-plugin-shim.sh          # plugin ESM + bash hook validity
bash tests/test-install.sh              # installer non-destructive behavior
bash tests/test-backend-picker.sh       # detect/configure-backend behavior
bash tests/test-cli.sh                  # npx CLI wrapper
bash tests/test-cross-model-review.sh   # OSS-tier reviewer skill + script
bash tests/test-domain-templates.sh     # TESTING.md domain templates
bash tests/test-bundle-drift.sh         # bundle drift / mirror guards
bash tests/test-check-cli.sh            # check subcommand + staleness
bash tests/test-doc-templates.sh        # SDLC.md + ARCHITECTURE.md templates
bash tests/test-review-schemas.sh       # JSON Schemas + validator
```

Or `npm test` runs all eleven (270 tests).
## Known limitations
- No `UserPromptSubmit` analog in OpenCode. SDLC BASELINE moves to `AGENTS.md` (loaded once per session) instead of repeating per prompt.
- Phase A only. Backend matrix proof (Phase B) and hardware scout (Phase C) are deferred. The wizard installs and runs against any OpenCode backend; we just haven't yet measured SDLC-compliance scores across backends statistically.
- No upstream-sync workflow yet. Updates from the parent `claude-sdlc-wizard` are manual. Future releases will mirror the Codex sibling's `.github/workflows/upstream-sync.yml` pattern.
