
opencode-sdlc-wizard

v0.9.1

Published

SDLC enforcement for OpenCode CLI — privacy-first, any-backend portability with a four-tier backend picker plus an OSS-tier cross-model-review skill so the full SDLC loop can run with zero Anthropic+OpenAI lock-in. Ships JSON Schemas for review artifacts

Downloads

523

Readme

OpenCode SDLC Wizard

Status: v0.8.1 (2026-05-05). This release carries the codex round-1 fixes (Google tier mismatch, canonical env names, cost-ladder freshness, validator addProps) on top of v0.8.0's free-tier-first picker, cost ladder, five new providers, schemas, validator, full template set, and check subcommand. Install with npx opencode-sdlc-wizard init; check upstream with npx opencode-sdlc-wizard check. The full SDLC loop is any-backend on both coder and reviewer, so zero Anthropic+OpenAI lock-in is possible. The detector now picks up free-tier-friendly providers (Cerebras free, NVIDIA NIM credits, Groq free, Google AI Studio quota), and the new --free-tier-first flag biases recommendations toward $0/mo paths. See docs/cost-ladder.md for the $0 / $20 / $200 budget breakdown. Phase B (backend matrix proof) and Phase C (hardware scout) are deferred to follow-up releases. See HANDOFF.md for architecture decisions, PRIVACY.md for the tier model, and CHANGELOG.md for release notes.

SDLC enforcement for sst/opencode — the privacy-first, any-backend agent CLI. This wizard ports the same plan → TDD → self-review enforcement pattern from the Claude / Codex siblings into the OpenCode runtime, so users can get SDLC discipline against whatever model backend their privacy / compliance constraints allow — not just Anthropic.

Backends OpenCode already speaks (and which the wizard inherits):

  • Local: Ollama, LM Studio, llama.cpp, vLLM, MLX (Apple Silicon)
  • Enterprise: Azure OpenAI, AWS Bedrock, internal AI gateways
  • Hosted OSS: Together, Groq, OpenRouter, Cerebras, DeepSeek-direct, NVIDIA NIM
  • Standard: OpenAI, Anthropic, Google AI Studio (Gemini)

For a concrete cost-vs-capability map across these — including $0/mo, $20/mo, $200/mo budget paths and which model fits which job — see docs/cost-ladder.md.

XDLC Ecosystem (Sibling Projects)

This wizard is one of four sibling projects. Same enforcement philosophy, different agent / domain:

| Package | Agent / Domain | What It Does |
|---------|----------------|--------------|
| agentic-sdlc-wizard (repo) | Claude Code / SDLC | Plan → TDD → self-review for code, with hooks + skills + CI scoring |
| codex-sdlc-wizard (repo) | OpenAI Codex / SDLC | Same SDLC enforcement, ported to Codex CLI (writes .codex/ + AGENTS.md) |
| claude-gdlc-wizard (repo) | Claude Code / GDLC | Game Development Life Cycle — persona-driven playtest cycles, triangulated findings, ratchet-only-tightens |
| opencode-sdlc-wizard (this repo) | OpenCode / SDLC | Same SDLC enforcement, ported to OpenCode (writes .opencode/). Privacy-first, any-backend. |

All four are part of the broader XDLC ecosystem — generalized lifecycle enforcement across agents and domains.

Roadmap

Tracked as ROADMAP #9 in the parent repo: BaseInfinity/claude-sdlc-wizard/ROADMAP.md.

Three phases:

  • Phase A (current target): port hooks + skills + install.sh from Claude / Codex pattern. Ship v0.1.0.
  • Phase B: backend matrix proof — run E2E SDLC scenario across local (Ollama + Qwen-Coder), enterprise (Azure OpenAI), hosted OSS (Together/Groq), and Anthropic baselines. Document which backends hold SDLC compliance.
  • Phase C: hardware scout for the local-tier compute requirement (gaming laptop / Windows laptop / $200–400 rig / cloud GPU rental).

Capability floor

"Just works on every LLM" is the dream but not the spec. Small local models (7–13B) are expected to fail the full plan → TDD → self-review protocol — instruction-following, long-context reasoning, and tool-use are all load-bearing. The 30B+ code-tuned class (Qwen-Coder, DeepSeek-Coder) is the likely local sweet spot. A failed run on an undersized model is a capability result, not a port bug.

Install

From a target repo's root, the easiest path:

npx opencode-sdlc-wizard init

That's it. Equivalent to the longer manual form:

git clone https://github.com/BaseInfinity/opencode-sdlc-wizard /tmp/opencode-sdlc-wizard
bash /tmp/opencode-sdlc-wizard/install.sh

Both paths are supported. npx is preferred for first-time installs; git clone + install.sh is preferred when you want to inspect the bundle before merging it. Re-run with --force to overwrite customizations, --dry-run to preview without writing.

This non-destructively merges the wizard into your .opencode/:

  • .opencode/plugins/sdlc-wizard.js (the OpenCode plugin shim)
  • .opencode/hooks/*.sh (5 portable bash hooks)
  • .opencode/scripts/{detect,configure}-backend.sh (privacy-first picker)
  • .opencode/skills/{sdlc,setup-wizard,update-wizard,feedback}/SKILL.md
  • AGENTS.md and PRIVACY.md at repo root

Existing customizations are preserved. Re-run with --force to overwrite.
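After the merge, it can be worth confirming the key bundle paths actually landed. The check_bundle helper below is purely illustrative (it is not shipped by the wizard); it spot-checks a few of the paths from the list above:

```shell
# Illustrative post-install sanity check; check_bundle is NOT part of the
# wizard, just a sketch of verifying that key bundle paths landed.
check_bundle() {
  root=$1
  for f in \
    .opencode/plugins/sdlc-wizard.js \
    AGENTS.md \
    PRIVACY.md
  do
    [ -e "$root/$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "bundle ok"
}
```

Run it as check_bundle . from the target repo's root; it prints the first missing path and fails, or prints "bundle ok".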

Native npx opencode-sdlc-wizard init shipped in v0.3.0. The bash installer remains the inspection/scripting path.

Pick a backend (privacy-first)

# See what's reachable from this machine (privacy-first cascade)
bash .opencode/scripts/detect-backends.sh

# Or bias toward free-tier providers (NVIDIA NIM, Cerebras, Groq,
# Google AI Studio) before paid hosted/proprietary
bash .opencode/scripts/detect-backends.sh --free-tier-first

# Configure the highest-privacy tier you can use
bash .opencode/scripts/configure-backend.sh \
     --tier private_local --provider ollama \
     --model qwen3-coder:30b

# Or for a $0/mo free-tier setup (Cerebras free, sub-second inference):
bash .opencode/scripts/configure-backend.sh \
     --tier hosted_oss --provider cerebras \
     --model llama-3.3-70b

Four tiers, ordered by where your prompts travel:

| Tier | Travels to | Examples |
|------|------------|----------|
| private_local | Stays on your machine | Ollama, LM Studio, llama.cpp, vLLM |
| enterprise | Your tenant | Azure OpenAI, AWS Bedrock |
| hosted_oss | Third-party host | Together, Groq, OpenRouter |
| proprietary | Vendor (Anthropic/OpenAI) | Claude, GPT |
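The ordering above can be expressed as a simple ranking. The tier_rank helper below is illustrative only (it is not part of the wizard's scripts); it maps each tier name from the table to its privacy rank, 1 being the most private:

```shell
# Illustrative helper (NOT part of the wizard): rank tiers by how far
# prompts travel. 1 = most private, 4 = least.
tier_rank() {
  case $1 in
    private_local) echo 1 ;;
    enterprise)    echo 2 ;;
    hosted_oss)    echo 3 ;;
    proprietary)   echo 4 ;;
    *)             echo 99 ;;  # unknown tier name
  esac
}
```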

The detector probes PATH and env vars only, with no network calls. The configurator merges non-destructively into opencode.json and refuses to clobber an existing model pin without --force. See PRIVACY.md for the Ollama walkthrough and verification checklist.
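The probe style described above (binary on PATH, or API key in an env var, no network) can be sketched roughly as follows. probe_backend is a hypothetical function for illustration, not the wizard's actual implementation:

```shell
# Sketch of a PATH + env-var probe (no network calls), in the spirit of the
# detector described above. probe_backend is illustrative, NOT the wizard's API.
probe_backend() {
  bin=$1   # binary whose presence would indicate a local backend (e.g. ollama)
  key=$2   # env var that would carry a hosted provider's API key
  if command -v "$bin" >/dev/null 2>&1; then
    echo "local:$bin"
  elif eval "[ -n \"\${$key:-}\" ]"; then
    echo "hosted:$key"
  else
    echo "absent"
  fi
}
```

For example, probe_backend ollama CEREBRAS_API_KEY would report a local Ollama install first, fall back to a Cerebras key if set, and otherwise report absent.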

Tests

bash tests/test-bundle-integrity.sh   # bundle correctness
bash tests/test-plugin-shim.sh        # plugin ESM + bash hook validity
bash tests/test-install.sh            # installer non-destructive behavior
bash tests/test-backend-picker.sh     # detect/configure-backend behavior
bash tests/test-cli.sh                # npx CLI wrapper
bash tests/test-cross-model-review.sh # OSS-tier reviewer skill + script
bash tests/test-domain-templates.sh   # TESTING.md domain templates
bash tests/test-bundle-drift.sh       # bundle drift / mirror guards
bash tests/test-check-cli.sh          # check subcommand + staleness
bash tests/test-doc-templates.sh      # SDLC.md + ARCHITECTURE.md templates
bash tests/test-review-schemas.sh     # JSON Schemas + validator

Or npm test runs all eleven suites (270 tests).
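An aggregator like npm test's can be sketched as a loop over the suite scripts. This run_suite helper is hypothetical; the package's real npm test wiring may differ:

```shell
# Hypothetical aggregator sketch (the package's actual "npm test" wiring may
# differ): run every test-*.sh in a directory, stopping on the first failure.
run_suite() {
  dir=$1
  for t in "$dir"/test-*.sh; do
    [ -e "$t" ] || continue   # glob matched nothing: nothing to run
    bash "$t" || return 1
  done
}
```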

Known limitations

  • No UserPromptSubmit analog in OpenCode. SDLC BASELINE moves to AGENTS.md (loaded once per session) instead of repeating per prompt.
  • Phase A only. Backend matrix proof (Phase B) and hardware scout (Phase C) deferred. The wizard installs and runs against any OpenCode backend; we just haven't yet measured SDLC-compliance scores across backends statistically.
  • No upstream-sync workflow yet. Updates from the parent claude-sdlc-wizard are manual. Future releases will mirror the Codex sibling's .github/workflows/upstream-sync.yml pattern.

License

MIT