quality-governed (v0.1.1)
A small CLI that initializes a quality-governed product delivery workflow in any software project.
quality-governed is a small npm package that adds a simple quality-governed workflow to any software project.
It is built for:
- Product Managers
- QA and Quality Engineers
- teams using AI-assisted delivery
It helps teams keep product planning, quality planning, implementation, testing, and bug review aligned around a shared set of documents.
What this package does
When you run qgd init, the package creates a ready-to-use workflow inside your current project.
That workflow gives your team:
- a project-level product context
- a Project Quality Canvas
- feature SPEC-Lite files
- feature Quality Overlay files
- reusable AI agents and prompts
The goal is simple: move fast without losing quality direction.
Why this exists
Many teams now use AI to help with planning, coding, and testing. That is useful, but it also creates a common problem:
- Product says what to build in one place
- QA thinks about risk in another place
- Developers work from partial context
- AI tools fill in missing details on their own
That usually leads to:
- invented requirements
- silent scope expansion
- weak regression coverage
- bugs being judged only by symptoms instead of business impact
quality-governed gives teams one lightweight structure so everyone, including AI agents, works from the same approved references.
What makes this workflow different
This package separates ownership clearly:
- Product owns the problem framing.
- Quality owns the strategic quality direction.
- Implementation and testing must follow both.
This means:
- the feature SPEC-Lite is not a test plan
- the Quality Overlay is not a product requirement document
- the Project Quality Canvas is the highest-level quality reference for the project
Who owns what
Use this default ownership model:
- Product owns docs/product/project.spec-lite.md
- Product owns feature SPEC-Lite files in docs/specs/
- Quality owns docs/quality/project-quality-canvas.md
- Quality owns feature overlay files in docs/quality/overlays/
- Human reviewers approve important artifacts before they become the source of truth
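The default ownership model above can be expressed as a small lookup table. This is purely illustrative, not something the package ships; the `ownership` map and `ownerOf` helper are hypothetical names:

```javascript
// Illustrative ownership map for the default model (not part of the package).
const ownership = {
  "docs/product/project.spec-lite.md": "Product",
  "docs/specs/": "Product",
  "docs/quality/project-quality-canvas.md": "Quality",
  "docs/quality/overlays/": "Quality",
};

// Given a repo-relative path, find who owns it (longest matching prefix wins).
function ownerOf(relPath) {
  const match = Object.keys(ownership)
    .filter((prefix) => relPath === prefix || relPath.startsWith(prefix))
    .sort((a, b) => b.length - a.length)[0];
  return match ? ownership[match] : "unassigned";
}
```

For example, ownerOf("docs/specs/checkout-reliability.spec-lite.md") resolves to "Product", while paths outside the workflow resolve to "unassigned".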
The core workflow in one view
- Product defines the project context.
- Quality creates the Project Quality Canvas.
- Product writes a feature SPEC-Lite for a feature.
- Quality creates the feature Quality Overlay.
- Developers and AI implementation agents build only within those references.
- Testers and AI testing agents validate using those references.
- Bugs and reviews are evaluated against strategic quality, not only local code changes.
Installation
There are two simple ways to use the package.
Option 1: Use it once with npx
This is the easiest option for most people.
npx quality-governed init
Use this when:
- you want to try the package quickly
- you do not want a global install
- you are setting it up in a single project
Option 2: Install globally
npm install -g quality-governed
qgd init
Use this when:
- you will set up this workflow in multiple repositories
- you want the short qgd command available on your machine
The CLI commands
qgd init
qgd help
qgd new-spec "Checkout reliability"
qgd new-overlay "Checkout reliability"
qgd sync-templates
qgd doctor
What each command does:
- qgd init: creates the workflow structure in the current project
- qgd help: shows command usage
- qgd new-spec "<name>": creates a new feature SPEC-Lite file
- qgd new-overlay "<name>": creates a new feature Quality Overlay file
- qgd sync-templates: refreshes package-managed instruction files such as .github/agents/, .github/prompts/, .github/copilot-instructions.md, and README.qgd.md without touching product or quality docs
- qgd doctor: checks whether key workflow files exist
The safest way to start
If you are not technical, follow these steps exactly:
- Open a terminal in your project folder.
- Run npx quality-governed init.
- Open the new docs/ and .github/ folders.
- Ask Product to fill in the project product documents.
- Ask Quality to draft the Project Quality Canvas.
- Before building a feature, create a feature SPEC-Lite.
- After that, create a feature Quality Overlay.
- Only then start implementation and testing.
What qgd init creates
docs/
product/
lean-canvas.md
project.spec-lite.md
quality/
project-quality-canvas.md
overlays/
.gitkeep
specs/
.gitkeep
.github/
copilot-instructions.md
agents/
product-owner.agent.md
create-project-quality-canvas.agent.md
update-project-quality-canvas.agent.md
quality-planning.agent.md
implementation.agent.md
e2e-testing.agent.md
evaluate-bug-against-canvas.agent.md
review-change-against-quality.agent.md
prompts/
create-feature-spec-lite.prompt.md
create-project-quality-canvas.prompt.md
update-project-quality-canvas.prompt.md
create-feature-quality-overlay.prompt.md
generate-e2e-tests-from-quality.prompt.md
evaluate-bug-against-canvas.prompt.md
review-change-against-quality.prompt.md
README.qgd.md
Important behavior
qgd init is designed to be safe in real repositories.
It will:
- create missing directories
- create missing files
- never overwrite an existing file
- print whether each file was created or skipped
- work safely if you run it again later
This matters because teams often start using a workflow gradually. You can run init again after some files already exist and it will not destroy your work.
The required reading order before work starts
Before implementation, test creation, code review, or bug review, read these in order:
- docs/quality/project-quality-canvas.md
- the relevant feature SPEC-Lite in docs/specs/
- the relevant feature Quality Overlay in docs/quality/overlays/
That order is intentional.
The Project Quality Canvas provides the strategic quality direction.
The feature SPEC-Lite explains the product problem and scope.
The feature Quality Overlay translates project-level quality direction into feature-level test and risk focus.
Step-by-step example for a non-technical team
Here is a concrete example using a fictional product team.
Example situation
Your team has a web app. Users are abandoning checkout because the payment step sometimes fails and the team cannot tell whether the problem is UI confusion, validation issues, or an integration problem.
The team wants to improve checkout reliability.
Step 1: Initialize the workflow
Run:
npx quality-governed init
Result:
- the project gets the workflow folders
- Product gets a place to describe the product
- Quality gets a place to define the quality strategy
- AI tools get reusable governance instructions
Step 2: Product fills in the project-level product files
Product opens:
- docs/product/lean-canvas.md
- docs/product/project.spec-lite.md
Product writes the high-level project context, for example:
- who the users are
- what business problem the product solves
- what success looks like
- what constraints are non-negotiable
At this stage, the team is not yet describing test scenarios. The focus is product intent.
Step 3: Quality drafts the Project Quality Canvas
Quality opens:
docs/quality/project-quality-canvas.md
Quality uses the product files to document:
- the most important product areas
- strategic risks
- core quality scenarios
- non-negotiable quality rules
- what future changes are likely to matter
- what testing implications follow from all of that
Example thinking:
- checkout is revenue-critical
- payment confirmation accuracy is non-negotiable
- order duplication risk is severe
- degraded third-party behavior must be tested, not ignored
Now the team has project-level quality direction.
Step 4: Product creates a feature SPEC-Lite
When the team starts the checkout improvement feature, run:
qgd new-spec "Checkout reliability"
This creates a file like:
docs/specs/checkout-reliability.spec-lite.md
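The mapping from feature name to file name looks like a simple slug transformation. Here is a plausible sketch of that rule; the package's exact normalization may differ, and `specFileName` is a hypothetical name:

```javascript
// Turn a feature name into a SPEC-Lite file name, e.g.
// "Checkout reliability" -> "checkout-reliability.spec-lite.md".
// Hypothetical sketch; qgd's actual normalization rules may differ.
function specFileName(featureName) {
  const slug = featureName
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `${slug}.spec-lite.md`;
}
```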
Product fills in:
- the problem
- the feature scope
- constraints
- out-of-scope
- success signal
- kill condition
Example:
- Problem: customers drop out when payment feedback is unclear
- Scope: improve error messaging and retry clarity during checkout
- Out-of-scope: redesigning the full checkout experience
- Success signal: fewer abandoned payments and fewer support tickets
Step 5: Quality creates the feature Quality Overlay
Run:
qgd new-overlay "Checkout reliability"
This creates:
docs/quality/overlays/checkout-reliability.quality-overlay.md
Quality uses the approved Project Quality Canvas and the approved feature SPEC-Lite to define:
- parent quality alignment
- relevant risks
- critical scenarios to validate
- quality priorities
- regression focus
- watchouts
Example:
- risk: duplicate charge under retry conditions
- critical scenario: user retries after timeout but payment only succeeds once
- regression focus: cart persistence, confirmation state, payment status messaging
- watchout: a UI improvement must not hide backend failures
Step 6: Implementation starts
Before coding, the developer or AI implementation agent reads:
- docs/quality/project-quality-canvas.md
- docs/specs/checkout-reliability.spec-lite.md
- docs/quality/overlays/checkout-reliability.quality-overlay.md
This prevents common mistakes like:
- adding extra scope that nobody approved
- solving the wrong problem
- optimizing the UI while ignoring critical quality risks
Step 7: Testing starts
Before writing tests, the tester or AI testing agent reads the same three files.
This is important because tests should not be based only on the code diff.
In the example, the test plan should cover:
- user-visible payment failure feedback
- retry behavior
- duplicate submission risk
- order confirmation correctness
- regression around checkout state persistence
Step 8: Bugs are reviewed against quality context
Later, a bug appears:
"Users sometimes see a timeout message after payment, but the order still completes."
Without quality context, someone might call this a minor UI issue.
With the Project Quality Canvas and Quality Overlay, the team can see it may be high impact because:
- checkout is a key feature
- confirmation accuracy is non-negotiable
- this bug could cause duplicate attempts, support cost, and trust damage
That is exactly the kind of governance this workflow is meant to preserve.
How to use the generated files in daily work
Product team
Use these files regularly:
- docs/product/lean-canvas.md
- docs/product/project.spec-lite.md
- docs/specs/*.spec-lite.md
Product should use them to define:
- the problem
- the intended scope
- constraints
- what success means
- what is explicitly out of scope
Quality team
Use these files regularly:
- docs/quality/project-quality-canvas.md
- docs/quality/overlays/*.quality-overlay.md
Quality should use them to define:
- strategic risk
- core scenarios
- non-negotiable quality rules
- regression focus
- feature-specific watchouts
Developers
Developers should treat the quality files as required context, not optional notes.
That means:
- do not implement from ticket text alone
- do not use code diff alone as the basis for testing
- do not assume local convenience is more important than approved quality direction
AI agents
The .github/agents/ files are the main governance layer.
Use them when you want an AI system to act in a clearly defined role such as:
- turning raw product input into a feature SPEC-Lite
- drafting the Project Quality Canvas
- generating a feature Quality Overlay
- implementing only within approved references
- generating E2E scenarios
- evaluating bugs using project-level quality context
- reviewing changes against strategic quality expectations
The .github/prompts/ files are shorter reusable helpers for common actions.
The generated E2E testing agent is opinionated about browser-based testing:
- if the host AI environment exposes Playwright MCP, the agent should use it first to inspect the running product
- the agent should turn those observations plus the approved quality references into a test plan and Playwright-ready test cases
- if environment access, credentials, or quality references are missing, the agent should report the blocker instead of guessing
Recommended team flow for every feature
Use this order each time:
- Product creates or updates the feature SPEC-Lite.
- Quality reviews it.
- Quality creates or updates the feature Quality Overlay.
- Product and Quality approve the references.
- Implementation starts.
- Testing starts.
- Review and bug evaluation refer back to the same documents.
If you skip steps 1 to 4, the rest of the workflow becomes much weaker.
Example commands you can copy
Initialize in the current project:
npx quality-governed init
Create a feature spec:
qgd new-spec "Checkout reliability"
Create a feature overlay:
qgd new-overlay "Checkout reliability"
Refresh the package-managed templates in an existing repository:
qgd sync-templates
Check that the core files exist:
qgd doctor
Show help:
qgd help
What qgd doctor is for
qgd doctor is a simple safety check.
It verifies that the key workflow files exist:
- docs/product/lean-canvas.md
- docs/product/project.spec-lite.md
- docs/quality/project-quality-canvas.md
- .github/copilot-instructions.md
- README.qgd.md
Use it when:
- you are not sure whether the workflow was initialized
- someone deleted or moved files
- you want a quick check in a fresh clone
How existing users get template updates
Installing a newer package version does not overwrite files that were already created by qgd init.
Use:
qgd sync-templates
This updates only the package-managed instruction files:
- .github/agents/
- .github/prompts/
- .github/copilot-instructions.md
- README.qgd.md
It does not overwrite product or quality artifacts such as:
- docs/product/lean-canvas.md
- docs/product/project.spec-lite.md
- docs/quality/project-quality-canvas.md
- docs/specs/*
- docs/quality/overlays/*
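The update-only-package-managed-files rule amounts to a prefix allowlist. A hedged sketch of that check, assuming a `MANAGED` list and an `isPackageManaged` helper that are not part of the real package:

```javascript
// Paths that sync-templates is described as allowed to refresh.
// Illustrative allowlist; the package's real logic may differ.
const MANAGED = [
  ".github/agents/",
  ".github/prompts/",
  ".github/copilot-instructions.md",
  "README.qgd.md",
];

// Decide whether a repo-relative path is package-managed (safe to refresh).
function isPackageManaged(relPath) {
  return MANAGED.some(
    (entry) => relPath === entry || relPath.startsWith(entry)
  );
}
```

Product and quality artifacts under docs/ never match the allowlist, which is what keeps them untouched.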
How to test this package locally
From this package repository:
node ./bin/qgd.js help
node ./bin/qgd.js init
node ./bin/qgd.js init
node ./bin/qgd.js new-spec "Sample feature"
node ./bin/qgd.js new-overlay "Sample feature"
node ./bin/qgd.js sync-templates
node ./bin/qgd.js doctor
npm test
env npm_config_cache=/tmp/qgd-npm-cache npm pack --dry-run
To test the packed tarball in another repository:
npx /path/to/quality-governed-0.1.1.tgz init
How to publish to npm
When you are ready to publish:
npm login
npm publish --access public
What this package intentionally does not do
This is a small v1 package. It does not:
- inspect or parse your codebase
- connect to remote APIs
- implement approvals in software
- store project state in a database
- enforce a company-specific process
- depend on heavy frameworks
That is intentional. The package is meant to stay small, readable, and safe to adopt in almost any project.
Summary
If you want one simple rule to remember, use this:
Product defines what problem matters.
Quality defines what quality must mean for that product.
Implementation and testing must follow both.
That is the purpose of quality-governed.
