quality-governed

quality-governed is a small npm package that adds a simple quality-governed workflow to any software project.

It is built for:

  • product managers
  • QA and quality engineers
  • teams using AI-assisted delivery

It helps teams keep product planning, quality planning, implementation, testing, and bug review aligned around a shared set of documents.

What this package does

When you run qgd init, the package creates a ready-to-use workflow inside your current project.

That workflow gives your team:

  • a project-level product context
  • a Project Quality Canvas
  • feature SPEC-Lite files
  • feature Quality Overlay files
  • reusable AI agents and prompts

The goal is simple: move fast without losing quality direction.

Why this exists

Many teams now use AI to help with planning, coding, and testing. That is useful, but it also creates a common problem:

  • Product says what to build in one place
  • QA thinks about risk in another place
  • developers work from partial context
  • AI tools fill in missing details on their own

That usually leads to:

  • invented requirements
  • silent scope expansion
  • weak regression coverage
  • bugs being judged only by symptoms instead of business impact

quality-governed gives teams one lightweight structure so everyone, including AI agents, works from the same approved references.

What makes this workflow different

This package separates ownership clearly:

  • Product owns the problem framing.
  • Quality owns the strategic quality direction.
  • Implementation and testing must follow both.

This means:

  • the feature SPEC-Lite is not a test plan
  • the Quality Overlay is not a product requirement document
  • the Project Quality Canvas is the highest-level quality reference for the project

Who owns what

Use this default ownership model:

  • Product owns docs/product/project.spec-lite.md
  • Product owns feature SPEC-Lite files in docs/specs/
  • Quality owns docs/quality/project-quality-canvas.md
  • Quality owns feature overlay files in docs/quality/overlays/
  • Human reviewers approve important artifacts before they become the source of truth

The core workflow in one view

  1. Product defines the project context.
  2. Quality creates the Project Quality Canvas.
  3. Product writes a feature SPEC-Lite for a feature.
  4. Quality creates the feature Quality Overlay.
  5. Developers and AI implementation agents build only within those references.
  6. Testers and AI testing agents validate using those references.
  7. Bugs and reviews are evaluated against strategic quality, not only local code changes.

Installation

You have two simple ways to use the package.

Option 1: Use it once with npx

This is the easiest option for most people.

npx quality-governed init

Use this when:

  • you want to try the package quickly
  • you do not want a global install
  • you are setting it up in a single project

Option 2: Install globally

npm install -g quality-governed
qgd init

Use this when:

  • you will set up this workflow in multiple repositories
  • you want the short qgd command available on your machine

The CLI commands

qgd init
qgd help
qgd new-spec "Checkout reliability"
qgd new-overlay "Checkout reliability"
qgd sync-templates
qgd doctor

What each command does:

  • qgd init: creates the workflow structure in the current project
  • qgd help: shows command usage
  • qgd new-spec "<name>": creates a new feature SPEC-Lite file
  • qgd new-overlay "<name>": creates a new feature Quality Overlay file
  • qgd sync-templates: refreshes package-managed instruction files such as .github/agents/, .github/prompts/, .github/copilot-instructions.md, and README.qgd.md without touching product or quality docs
  • qgd doctor: checks whether key workflow files exist

The safest way to start

If you are not technical, follow these steps exactly:

  1. Open a terminal in your project folder.
  2. Run npx quality-governed init.
  3. Open the new docs/ and .github/ folders.
  4. Ask Product to fill in the project product documents.
  5. Ask Quality to draft the Project Quality Canvas.
  6. Before building a feature, create a feature SPEC-Lite.
  7. After that, create a feature Quality Overlay.
  8. Only then start implementation and testing.

What qgd init creates

docs/
  product/
    lean-canvas.md
    project.spec-lite.md

  quality/
    project-quality-canvas.md
    overlays/
      .gitkeep

  specs/
    .gitkeep

.github/
  copilot-instructions.md
  agents/
    product-owner.agent.md
    create-project-quality-canvas.agent.md
    update-project-quality-canvas.agent.md
    quality-planning.agent.md
    implementation.agent.md
    e2e-testing.agent.md
    evaluate-bug-against-canvas.agent.md
    review-change-against-quality.agent.md

  prompts/
    create-feature-spec-lite.prompt.md
    create-project-quality-canvas.prompt.md
    update-project-quality-canvas.prompt.md
    create-feature-quality-overlay.prompt.md
    generate-e2e-tests-from-quality.prompt.md
    evaluate-bug-against-canvas.prompt.md
    review-change-against-quality.prompt.md

README.qgd.md

Important behavior

qgd init is designed to be safe in real repositories.

It will:

  • create missing directories
  • create missing files
  • never overwrite an existing file
  • print whether each file was created or skipped
  • work safely if you run it again later

This matters because teams often start using a workflow gradually. You can run init again after some files already exist and it will not destroy your work.

The required reading order before work starts

Before implementation, test creation, code review, or bug review, read these in order:

  1. docs/quality/project-quality-canvas.md
  2. the relevant feature SPEC-Lite in docs/specs/
  3. the relevant feature Quality Overlay in docs/quality/overlays/

That order is intentional.

The Project Quality Canvas provides the strategic quality direction.

The feature SPEC-Lite explains the product problem and scope.

The feature Quality Overlay translates project-level quality direction into feature-level test and risk focus.

Step-by-step example for a non-technical team

Here is a concrete example using a fictional product team.

Example situation

Your team has a web app. Users are abandoning checkout because the payment step sometimes fails and the team cannot tell whether the problem is UI confusion, validation issues, or an integration problem.

The team wants to improve checkout reliability.

Step 1: Initialize the workflow

Run:

npx quality-governed init

Result:

  • the project gets the workflow folders
  • Product gets a place to describe the product
  • Quality gets a place to define the quality strategy
  • AI tools get reusable governance instructions

Step 2: Product fills in the project-level product files

Product opens:

  • docs/product/lean-canvas.md
  • docs/product/project.spec-lite.md

Product writes the high-level project context, for example:

  • who the users are
  • what business problem the product solves
  • what success looks like
  • what constraints are non-negotiable

At this stage, the team is not yet describing test scenarios. The focus is product intent.

Step 3: Quality drafts the Project Quality Canvas

Quality opens:

  • docs/quality/project-quality-canvas.md

Quality uses the product files to document:

  • the most important product areas
  • strategic risks
  • core quality scenarios
  • non-negotiable quality rules
  • what future changes are likely to matter
  • what testing implications follow from all of that

Example thinking:

  • checkout is revenue-critical
  • payment confirmation accuracy is non-negotiable
  • order duplication risk is severe
  • degraded third-party behavior must be tested, not ignored

Now the team has project-level quality direction.

Step 4: Product creates a feature SPEC-Lite

When the team starts the checkout improvement feature, run:

qgd new-spec "Checkout reliability"

This creates a file like:

docs/specs/checkout-reliability.spec-lite.md
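The mapping from a feature name to that file name can be sketched as a slug transform. The exact slug rules qgd uses are not documented here, so slugify and specPath below are hypothetical helpers that merely reproduce the example above.

```javascript
// Sketch of how a feature name might become a spec file path.
// The real qgd slug rules are an assumption; this only matches the example.
function slugify(name) {
  return name
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into dashes
    .replace(/^-+|-+$/g, "");    // trim leading and trailing dashes
}

function specPath(featureName) {
  return `docs/specs/${slugify(featureName)}.spec-lite.md`;
}
```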

Product fills in:

  • the problem
  • the feature scope
  • constraints
  • out-of-scope
  • success signal
  • kill condition

Example:

  • Problem: customers drop out when payment feedback is unclear
  • Scope: improve error messaging and retry clarity during checkout
  • Out-of-scope: redesigning the full checkout experience
  • Success signal: fewer abandoned payments and fewer support tickets

Step 5: Quality creates the feature Quality Overlay

Run:

qgd new-overlay "Checkout reliability"

This creates:

docs/quality/overlays/checkout-reliability.quality-overlay.md

Quality uses the approved Project Quality Canvas and the approved feature SPEC-Lite to define:

  • parent quality alignment
  • relevant risks
  • critical scenarios to validate
  • quality priorities
  • regression focus
  • watchouts

Example:

  • risk: duplicate charge under retry conditions
  • critical scenario: user retries after timeout but payment only succeeds once
  • regression focus: cart persistence, confirmation state, payment status messaging
  • watchout: a UI improvement must not hide backend failures

Step 6: Implementation starts

Before coding, the developer or AI implementation agent reads:

  1. docs/quality/project-quality-canvas.md
  2. docs/specs/checkout-reliability.spec-lite.md
  3. docs/quality/overlays/checkout-reliability.quality-overlay.md

This prevents common mistakes like:

  • adding extra scope that nobody approved
  • solving the wrong problem
  • optimizing the UI while ignoring critical quality risks

Step 7: Testing starts

Before writing tests, the tester or AI testing agent reads the same three files.

This is important because tests should not be based only on the code diff.

In the example, the test plan should cover:

  • user-visible payment failure feedback
  • retry behavior
  • duplicate submission risk
  • order confirmation correctness
  • regression around checkout state persistence

Step 8: Bugs are reviewed against quality context

Later, a bug appears:

"Users sometimes see a timeout message after payment, but the order still completes."

Without quality context, someone might call this a minor UI issue.

With the Project Quality Canvas and Quality Overlay, the team can see it may be high impact because:

  • checkout is a key feature
  • confirmation accuracy is non-negotiable
  • this bug could cause duplicate attempts, support cost, and trust damage

That is exactly the kind of governance this workflow is meant to preserve.

How to use the generated files in daily work

Product team

Use these files regularly:

  • docs/product/lean-canvas.md
  • docs/product/project.spec-lite.md
  • docs/specs/*.spec-lite.md

Product should use them to define:

  • the problem
  • the intended scope
  • constraints
  • what success means
  • what is explicitly out of scope

Quality team

Use these files regularly:

  • docs/quality/project-quality-canvas.md
  • docs/quality/overlays/*.quality-overlay.md

Quality should use them to define:

  • strategic risk
  • core scenarios
  • non-negotiable quality rules
  • regression focus
  • feature-specific watchouts

Developers

Developers should treat the quality files as required context, not optional notes.

That means:

  • do not implement from ticket text alone
  • do not use code diff alone as the basis for testing
  • do not assume local convenience is more important than approved quality direction

AI agents

The .github/agents/ files are the main governance layer.

Use them when you want an AI system to act in a clearly defined role such as:

  • turning raw product input into a feature SPEC-Lite
  • drafting the Project Quality Canvas
  • generating a feature Quality Overlay
  • implementing only within approved references
  • generating E2E scenarios
  • evaluating bugs using project-level quality context
  • reviewing changes against strategic quality expectations

The .github/prompts/ files are shorter reusable helpers for common actions.

The generated E2E testing agent is opinionated about browser-based testing:

  • if the host AI environment exposes Playwright MCP, the agent should use it first to inspect the running product
  • the agent should turn those observations plus the approved quality references into a test plan and Playwright-ready test cases
  • if environment access, credentials, or quality references are missing, the agent should report the blocker instead of guessing

Recommended team flow for every feature

Use this order each time:

  1. Product creates or updates the feature SPEC-Lite.
  2. Quality reviews it.
  3. Quality creates or updates the feature Quality Overlay.
  4. Product and Quality approve the references.
  5. Implementation starts.
  6. Testing starts.
  7. Review and bug evaluation refer back to the same documents.

If you skip steps 1 to 4, the rest of the workflow becomes much weaker.

Example commands you can copy

Initialize in the current project:

npx quality-governed init

Create a feature spec:

qgd new-spec "Checkout reliability"

Create a feature overlay:

qgd new-overlay "Checkout reliability"

Refresh the package-managed templates in an existing repository:

qgd sync-templates

Check that the core files exist:

qgd doctor

Show help:

qgd help

What qgd doctor is for

qgd doctor is a simple safety check.

It verifies that the key workflow files exist:

  • docs/product/lean-canvas.md
  • docs/product/project.spec-lite.md
  • docs/quality/project-quality-canvas.md
  • .github/copilot-instructions.md
  • README.qgd.md

Use it when:

  • you are not sure whether the workflow was initialized
  • someone deleted or moved files
  • you want a quick check in a fresh clone

How existing users get template updates

Installing a newer package version does not overwrite files that were already created by qgd init.

Use:

qgd sync-templates

This updates only the package-managed instruction files:

  • .github/agents/
  • .github/prompts/
  • .github/copilot-instructions.md
  • README.qgd.md

It does not overwrite product or quality artifacts such as:

  • docs/product/lean-canvas.md
  • docs/product/project.spec-lite.md
  • docs/quality/project-quality-canvas.md
  • docs/specs/*
  • docs/quality/overlays/*
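The overwrite boundary can be sketched as a simple path test: a file may be refreshed only if it is one of the listed files or sits under a listed directory. isManaged is a hypothetical helper for illustration, not the package's source.

```javascript
// Sketch of the sync-templates boundary: only package-managed paths may be
// overwritten. isManaged is a hypothetical helper, not qgd's actual code.
const MANAGED_PATHS = [
  ".github/agents/",
  ".github/prompts/",
  ".github/copilot-instructions.md",
  "README.qgd.md",
];

function isManaged(relPath) {
  // A path is managed if it equals a listed file or sits under a listed
  // directory prefix; everything under docs/ is left alone.
  return MANAGED_PATHS.some((p) =>
    p.endsWith("/") ? relPath.startsWith(p) : relPath === p
  );
}
```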

How to test this package locally

From this package repository:

node ./bin/qgd.js help
node ./bin/qgd.js init
node ./bin/qgd.js init
node ./bin/qgd.js new-spec "Sample feature"
node ./bin/qgd.js new-overlay "Sample feature"
node ./bin/qgd.js sync-templates
node ./bin/qgd.js doctor
npm test
env npm_config_cache=/tmp/qgd-npm-cache npm pack --dry-run

To test the packed tarball in another repository:

npx /path/to/quality-governed-0.1.1.tgz init

How to publish to npm

When you are ready to publish:

npm login
npm publish --access public

What this package intentionally does not do

This is a small v1 package. It does not:

  • inspect or parse your codebase
  • connect to remote APIs
  • implement approvals in software
  • store project state in a database
  • enforce a company-specific process
  • depend on heavy frameworks

That is intentional. The package is meant to stay small, readable, and safe to adopt in almost any project.

Summary

If you want one simple rule to remember, use this:

Product defines what problem matters.

Quality defines what quality must mean for that product.

Implementation and testing must follow both.

That is the purpose of quality-governed.