# @bstockwelldev/prompt-rubric
Static prompt artifact quality model (D1–D7), Zod validation, inventory / path maps, heuristic score suggestions from review text, Markdown rendering, and check for CI — no LLM calls in core flows. An optional judge command calls an OpenAI-compatible HTTP API when you opt in (API key + explicit run).
## Install

```sh
pnpm add -D @bstockwelldev/prompt-rubric
```

Requires Node.js ≥ 20.
## Dimensions (short)
| ID | JSON key | Name |
| -- | -------- | ---- |
| D1 | metadata | Artifact identity |
| D2 | variables_trust | Variables & trust |
| D3 | structure_delimiters | Structure / delimiters |
| D4 | output_contract | Output contract |
| D5 | consistency_grounding | Consistency / grounding |
| D6 | fixtures_tests | Fixtures / tests |
| D7 | safety_guardrails | Safety / guardrails |
Scale: 0 = fail, 1 = partial, 2 = pass; null = n/a (omitted from the mean).
Bands: poor < 1.0, fair ≥ 1.0 and < 1.5, good ≥ 1.5.
Full definitions: RUBRIC.md (canonical).
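The scale and band rules above can be sketched in plain JavaScript. This is an independent illustration of the documented thresholds, not the package's `computeMeanAndBand` implementation:

```javascript
// Compute the mean of dimension scores (0 | 1 | 2 | null) and map it to a band.
// null scores are "n/a" and are omitted from the mean, per the rubric rules.
function meanAndBand(scores) {
  const applicable = scores.filter((s) => s !== null);
  if (applicable.length === 0) return { mean: null, band: null };
  const mean = applicable.reduce((sum, s) => sum + s, 0) / applicable.length;
  const band = mean < 1.0 ? "poor" : mean < 1.5 ? "fair" : "good";
  return { mean, band };
}

// D1..D7 with one n/a dimension: mean = (2+1+2+1+2+2)/6 ≈ 1.67 → "good"
console.log(meanAndBand([2, 1, 2, 1, null, 2, 2]));
```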
## CLI

```sh
pnpm exec prompt-rubric --help
pnpm exec prompt-rubric inventory --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric path-map --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric score-suggest --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric score-write --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric render-md --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric check --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric reviews-sync --config ./prompt-rubric.config.mjs
pnpm exec prompt-rubric reviews-sync --config ./prompt-rubric.config.mjs --merge-static
pnpm exec prompt-rubric llm-batch-pack --config ./prompt-rubric.config.mjs --output ./pack.md
pnpm exec prompt-rubric judge --config ./prompt-rubric.config.mjs --dry-run --limit 2
```

- `reviews-sync` — aligns `llm-prompts-matrix-reviews.json` with the current inventory: keeps existing `findings`/`recommendations` and adds stub rows for new prompts. `--merge-static` appends synthetic static findings (heuristic file scan; option C, not a full policy linter). `--dry-run` prints the row count only.
- `llm-batch-pack` — emits one Markdown file for copy-paste into an external LLM (option D); no API keys. Merge model JSON back into the reviews via PR.
- `judge` — opt-in rubric review via an OpenAI-compatible `POST …/chat/completions` (`fetch` only; no SDK). Set `OPENAI_API_KEY` (or override the env var name via `judge.apiKeyEnv` in config). Env overrides: `OPENAI_BASE_URL`, `PROMPT_RUBRIC_JUDGE_MODEL`. Flags: `--dry-run` (no network), `--limit N`, `--repo SLUG`, `--output FILE` (JSON), and `--merge-reviews`, which merges model output into `findings` inside `<!-- llm-judge ISO -->…<!-- /llm-judge -->` markers (human text outside that block is preserved). Default rubric text: this package's RUBRIC.md, or `judge.rubricPath` relative to `cwd`. Costs tokens; output is non-deterministic — review in a PR before trusting.
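The `<!-- llm-judge -->` marker convention can be illustrated with a small sketch. This is not the package's implementation; only the marker format comes from the docs, with `ISO` standing in for a timestamp:

```javascript
// Replace (or append) the machine-written llm-judge block in a review's
// findings text, leaving any human-written text outside the markers intact.
function mergeJudgeBlock(findings, judgeText, iso = new Date().toISOString()) {
  const block = `<!-- llm-judge ${iso} -->\n${judgeText}\n<!-- /llm-judge -->`;
  const marker = /<!-- llm-judge [^>]* -->[\s\S]*?<!-- \/llm-judge -->/;
  return marker.test(findings)
    ? findings.replace(marker, block) // overwrite only the machine block
    : `${findings}\n\n${block}`;      // first run: append a new block
}

const merged = mergeJudgeBlock(
  "Human note: keep the system prompt terse.",
  "D4: output contract unspecified (score 1).",
  "2024-01-01T00:00:00Z"
);
console.log(merged);
```

Re-running the merge with fresh model output replaces only the marked block, so hand-written review text survives repeated `judge --merge-reviews` runs.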
## Config

Export a default object (or an async function) from `prompt-rubric.config.mjs`:
```js
import path from "node:path";
import { fileURLToPath } from "node:url";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

/** @type {import('@bstockwelldev/prompt-rubric').PromptRubricUserConfig} */
export default {
  cwd: __dirname,
  preset: "tabletop-like",
  secondaryRoots: {
    boardGameSim: path.resolve(__dirname, "..", "board-game-sim-ai"),
  },
  paths: {
    reviews: "docs/llm-prompts-matrix-reviews.json",
    scores: "docs/llm-prompts-quality-scores.json",
    scoresMarkdown: "docs/llm-prompts-quality-scores.md",
  },
  render: {
    hubLinks: {
      rubric: "./prompt-quality-rubric.md",
      matrix: "./llm-prompts-matrix.md",
      scoresJson: "./llm-prompts-quality-scores.json",
      reviewsJson: "./llm-prompts-matrix-reviews.json",
      promptManagement: "./guides/prompt-management.md",
    },
  },
};
```

Presets:

- `tabletop-like` — discovers `src/ai/prompts/*.prompt.ts`, `src/prompts/registry.json`, and smart-import `promptConfig.ts` question steps.
- `board-game-sim-like` — discovers `apps/web/features/board-game-sim/data/prompts/*.ts` exports.

Override repo slugs with `repoSlugs: { tabletop: 'tabletop-studio', boardGameSim: 'board-game-sim-ai' }`.
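The async-function form is useful when the config depends on I/O or the environment; a minimal sketch (field values are illustrative, not required):

```javascript
// prompt-rubric.config.mjs — async factory form (sketch)
export default async function () {
  return {
    cwd: process.cwd(),
    preset: "tabletop-like",
  };
}
```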
## Programmatic API

```js
import {
  qualityScoresDocumentSchema,
  computeMeanAndBand,
  DEFAULT_RUBRIC_VERSION,
  DIMENSION_KEYS,
} from "@bstockwelldev/prompt-rubric";
```

## Non-goals
- No required OpenAI/Anthropic SDK; `judge` uses `fetch` only.
- Heuristic `score-suggest` is non-authoritative; use `check` for schema + alignment.
## License

MIT — see LICENSE.
