# ai-publish

v3.1.1

AI-assisted release authoring tool that generates a changelog, release notes, and a next version number (evidence-backed from git diff).
Ready to let AI write your release notes and changelog, and compute your next semver bump?

ai-publish is built to do that without letting the model invent changes: the only authority for "what changed" is `git diff <base>..HEAD`. The system may still use additional bounded context (e.g. file snippets, searches, and optional commit-message metadata) to understand more.
## Primary workflow: prepublish → postpublish

Most users should use ai-publish as a two-step release flow:

1. `prepublish` prepares release outputs (and the next version) locally.
2. You build/package your artifacts.
3. `postpublish` publishes, then finalizes git state (commit + tag + push).
Install as a dev dependency (recommended):

```shell
npm install --save-dev ai-publish
```

## Required: configure an LLM provider

ai-publish requires an LLM provider for `prepublish`, `changelog`, and `release-notes`.

Before running the CLI, choose a provider (`openai` or `azure`) and set the required environment variables (see the "LLM providers" section below).

- OpenAI: set `OPENAI_API_KEY` and `OPENAI_MODEL`
- Azure OpenAI: set `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `AZURE_OPENAI_DEPLOYMENT`
In the repo you want to release:

```shell
npx ai-publish prepublish --llm <openai|azure>
# build/package step depends on your ecosystem
npx ai-publish postpublish
```

## Why there must be a pre + post publish
Publishing is the part most likely to fail or require interaction (credentials, OTP/2FA, network, registry errors). ai-publish splits the flow so your git history and tags stay correct:

- `prepublish` can safely generate outputs and compute `v<next>` without creating a "release commit" or tag.
- `postpublish` runs the actual publish step first; only after publish succeeds does it create the release commit and annotated `v<next>` tag and push them.

If publishing fails, you do not end up with a pushed release tag that doesn't correspond to a published artifact.
## Git + tag behavior (what happens when)

`prepublish`:

- Requires a clean worktree.
- Refuses if `HEAD` is already tagged with a `v<semver>` tag.
- Writes release outputs to disk:
  - changelog (default `CHANGELOG.md`, overridable via `--out`)
  - release notes at `release-notes/v<next>.md`
  - optional manifest version update (disabled via `--no-write`)
- Writes an intent file: `.ai-publish/prepublish.json`.
- Does not create a git commit.
- Does not create a git tag.
- Does not push anything.
`postpublish`:

- Requires `.ai-publish/prepublish.json` (i.e., you must run `prepublish` first).
- Runs the project-type publish step first.
- After publish succeeds, it:
  - creates a release commit containing only the prepared release paths
    - commit message: `chore(release): v<next>`
  - creates an annotated tag `v<next>` pointing at that commit
  - pushes the current branch and the tag to the remote (default `origin`)
- Refuses if your working tree has changes outside the release output paths recorded by `prepublish`.
## Recommended release flow

### npm

```shell
npx ai-publish prepublish --llm <openai|azure>
npm run build
npx ai-publish postpublish
```

### .NET

```shell
npx ai-publish prepublish --project-type dotnet --manifest path/to/MyProject.csproj --llm <openai|azure>
dotnet pack -c Release
npx ai-publish postpublish --project-type dotnet --manifest path/to/MyProject.csproj
```

### Rust

```shell
npx ai-publish prepublish --project-type rust --manifest Cargo.toml --llm <openai|azure>
cargo publish --dry-run
npx ai-publish postpublish --project-type rust --manifest Cargo.toml
```

### Python

```shell
npx ai-publish prepublish --project-type python --manifest pyproject.toml --llm <openai|azure>
python -m build
npx ai-publish postpublish --project-type python --manifest pyproject.toml
```

### Go

```shell
npx ai-publish prepublish --project-type go --manifest go.mod --llm <openai|azure>
# build/test as needed
npx ai-publish postpublish --project-type go --manifest go.mod
```

## One-off generation (without publishing)
If you only want to generate markdown (no publish step, no commit/tag/push), you can run the generators directly:
```shell
npx ai-publish changelog --llm openai
npx ai-publish release-notes --llm openai
```

- `changelog` writes `CHANGELOG.md` by default.
- `release-notes` writes to `release-notes/v<next>.md` by default when `--out` is omitted and you are not using an explicit `--base` (or `release-notes/<tag>.md` if `HEAD` is already tagged).
## Quickstart (from source)

If you're developing ai-publish itself:

```shell
npm install
npm run build
```

Then, from the target repo:

```shell
node /path/to/ai-publish/dist/cli.js changelog --llm openai
```

## Core invariants
- The sole authority for what changed is `git diff <base>..HEAD`.
- The diff is indexed and queryable.
- Binary diffs are metadata-only.
- The full diff is never returned by APIs; callers must request bounded hunks by ID.

These rules are the point of the tool: they make output auditable and make prompt-injection-style attacks much harder (because downstream analysis can only "see" bounded evidence).
## Diff index storage (.ai-publish)

ai-publish persists a bounded diff index under `.ai-publish/diff-index/<baseSha>..<headSha>/`.

- The manifest is metadata-only (`manifest.json`).
- Each hunk is stored as a separate `.patch` file (bounded and possibly truncated).

Important: these hunk files are still derived from your repo's diff and may contain sensitive information (secrets, credentials, proprietary code). The repo's `.gitignore` should exclude `.ai-publish/` (this repo does).
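With placeholder commit SHAs, the resulting on-disk layout looks roughly like this (the SHAs are illustrative):

```text
.ai-publish/
  diff-index/
    1a2b3c4..5d6e7f8/
      manifest.json        # metadata + hunk IDs only, never patch text
      hunks/
        <hunkId>.patch     # one bounded (possibly truncated) patch per hunk
```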
If you want the diff index stored elsewhere (for example in a temp directory, encrypted volume, or CI workspace scratch area), pass `--index-root-dir`:

```shell
npx ai-publish changelog --llm <openai|azure> --index-root-dir <path>
npx ai-publish release-notes --llm <openai|azure> --index-root-dir <path>
npx ai-publish prepublish --llm <openai|azure> --index-root-dir <path>
```

Practical defaults:

- CI runners: set `--index-root-dir` to a workspace scratch directory that is guaranteed writable.
- Local runs:
  - Linux/macOS: `--index-root-dir /tmp/ai-publish`
  - Windows (PowerShell): `--index-root-dir "$env:TEMP\ai-publish"`
## How it works (high level)

- `indexDiff()` runs `git diff <base>..HEAD` with rename detection and builds an index under `.ai-publish/diff-index/<baseSha>..<headSha>/`.
- Each diff hunk is stored as its own file in `hunks/<hunkId>.patch`.
- The index manifest (`manifest.json`) contains only metadata + hunk IDs (never full patch content).
- `getDiffHunks({ hunkIds })` returns only the requested hunks, enforcing a total byte limit.

For changes that have no textual hunks (e.g. rename-only), ai-publish creates a metadata-only `@@ meta @@` pseudo-hunk so downstream output can still attach explicit evidence. This also applies to binary diffs and other hunkless changes: evidence is represented as metadata only.
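The byte-budget rule behind bounded hunk retrieval can be sketched like this. This is a simplified illustration, not the actual implementation (which lives in `src/diff/getDiffHunks.ts`); the names and return shape here are assumptions:

```typescript
// Simplified sketch: serve only requested hunks, refusing once the total
// byte budget would be exceeded, and never serving unknown hunk IDs.
interface HunkStore {
  // maps hunkId -> patch text (ai-publish reads this from hunks/<hunkId>.patch)
  get(hunkId: string): string | undefined;
}

function getDiffHunksSketch(
  store: HunkStore,
  hunkIds: string[],
  totalByteLimit: number
): { hunks: Array<{ id: string; patch: string }>; refused: string[] } {
  const hunks: Array<{ id: string; patch: string }> = [];
  const refused: string[] = [];
  let used = 0;
  for (const id of hunkIds) {
    const patch = store.get(id);
    if (patch === undefined) {
      refused.push(id); // unknown IDs are never served
      continue;
    }
    const size = Buffer.byteLength(patch, "utf8");
    if (used + size > totalByteLimit) {
      refused.push(id); // budget exhausted: refuse rather than silently truncate
      continue;
    }
    used += size;
    hunks.push({ id, patch });
  }
  return { hunks, refused };
}
```

The point of the cap is that a caller (including the LLM's semantic pass) can never pull the whole diff through this API; it has to pick specific evidence and stay inside the budget.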
## How the LLM passes work (3-pass pipeline)

Both `changelog` and `release-notes` follow the same three-pass structure:

### Pass 1: Mechanical (metadata → notes)

The model is given only deterministic, metadata-only inputs:

- the diff summary (file list, change types, basic stats)
- evidence nodes (file-level nodes with hunk IDs)
- resolved repo instructions (context-only configuration)
- deterministic "mechanical facts" (counts + a per-file index, still no patch text)

It outputs a list of "mechanical notes": a compact intermediate representation of what changed.

### Pass 2: Semantic (tool-gated, budgeted retrieval)

The model may request bounded additional context to interpret impact, via a restricted tool surface:

- `getDiffHunks(hunkIds)` (only hunk IDs that exist in the evidence set are allowed)
- bounded repo context (HEAD-only): file snippets, "snippet around", file metadata
- bounded repo search: path search, file search, repo-wide text search, file listing

All tool outputs are budgeted globally (byte caps), and the pipeline refuses requests once a budget is exhausted.

Optional commit-message context for `base..HEAD` can be included, but it is explicitly treated as untrusted and never as evidence.

### Pass 3: Editorial (structured output + guardrails)
- For `changelog`, the model must output a structured changelog model (Keep a Changelog style) where every bullet references explicit evidence node IDs.
  - The pipeline repairs/dedupes bullets deterministically, conservatively fixes invalid/missing evidence references, and applies deterministic breaking-change heuristics.
  - Coverage guardrail: if any evidence node is not referenced by at least one bullet, the pipeline injects an auto-generated bullet (e.g. "Updated .") so the changelog covers the entire `base..HEAD` diff.
  - The model output is validated (no unknown evidence references), then rendered to markdown using the HEAD commit date as the release date.
- For `release-notes`, the model outputs human-facing markdown plus a list of evidence node IDs supporting that markdown.
  - The pipeline refuses "markdown with zero evidence" (it will not implicitly attach all evidence).
  - Rendering prefers a real `v<semver>` tag at HEAD when available, to avoid emitting "Unreleased" for already-tagged releases.
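The coverage guardrail described above can be sketched as follows. This is illustrative only: the types, function names, and the exact wording of the injected bullet are assumptions, not the real implementation:

```typescript
interface Bullet {
  text: string;
  evidence: string[]; // evidence node IDs this bullet references
}

// Inject a fallback bullet for every evidence node that no existing bullet
// references, so the rendered changelog covers the entire base..HEAD diff.
function applyCoverageGuardrail(
  bullets: Bullet[],
  evidenceNodeIds: string[],
  describe: (nodeId: string) => string // hypothetical: maps a node to e.g. its file path
): Bullet[] {
  const referenced = new Set(bullets.flatMap((b) => b.evidence));
  const injected = evidenceNodeIds
    .filter((id) => !referenced.has(id))
    .map((id) => ({ text: `Updated ${describe(id)}.`, evidence: [id] }));
  return [...bullets, ...injected];
}
```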
Code pointers:

- Pipelines: `src/pipeline/runChangelogPipeline.ts`, `src/pipeline/runReleaseNotesPipeline.ts`
- LLM contract: `src/llm/types.ts`
- Diff indexing + bounded hunk retrieval: `src/diff/indexDiff.ts`, `src/diff/getDiffHunks.ts`
- Evidence construction: `src/changelog/evidence.ts`
- Changelog validation + rendering: `src/changelog/validate.ts`, `src/changelog/renderKeepAChangelog.ts`
- Deterministic mechanical facts: `src/llm/deterministicFacts.ts`
## Version bump: deterministic recommendation + LLM justification

The next version recommendation is computed deterministically from the changelog model:

- `major` if there are any `breakingChanges`
- `minor` if there are any `added` entries
- `patch` if there are any `changed`/`fixed`/`removed` entries
- `none` if the diff is internal-only

Then ai-publish computes `nextVersion` using semver rules (including prerelease handling when the previous version is a prerelease). The LLM is only used to produce a human-readable justification, and it is required to repeat the same `nextVersion`; if it disagrees, the pipeline fails.
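The rules above can be sketched in a few lines. This is a simplified illustration under assumed type names; it omits the prerelease handling that the real `src/version/bump.ts` covers:

```typescript
type BumpType = "major" | "minor" | "patch" | "none";

interface ChangelogModel {
  breakingChanges: string[];
  added: string[];
  changed: string[];
  fixed: string[];
  removed: string[];
}

// Deterministic bump recommendation from the changelog model.
function recommendBump(m: ChangelogModel): BumpType {
  if (m.breakingChanges.length > 0) return "major";
  if (m.added.length > 0) return "minor";
  if (m.changed.length + m.fixed.length + m.removed.length > 0) return "patch";
  return "none";
}

// Plain semver increment (no prerelease logic).
function nextVersion(previous: string, bump: BumpType): string {
  const [major, minor, patch] = previous.split(".").map(Number);
  switch (bump) {
    case "major":
      return `${major + 1}.0.0`;
    case "minor":
      return `${major}.${minor + 1}.0`;
    case "patch":
      return `${major}.${minor}.${patch + 1}`;
    case "none":
      return previous; // bumpType=none: no release is prepared
  }
}
```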
Code pointers:

- Version bump pipeline: `src/pipeline/runVersionBumpPipeline.ts` (and `src/pipeline/runPrepublishPipeline.ts`)
- Bump type + semver calculation: `src/version/bump.ts`
- Tag-based base resolution: `src/version/resolveVersionBase.ts`
## Versioning (git tags)

ai-publish treats git tags of the form `v<semver>` as the source of truth for release versions.

- For `changelog` and `release-notes`: if `--base` is omitted, the diff base defaults to the most recent reachable `v<semver>` tag commit (otherwise the empty tree).
- For `prepublish` and version bumping: if no version tags exist, ai-publish infers `previousVersion` from the selected manifest (or you can set it explicitly via `--previous-version`), and it may infer a base commit from manifest history when possible.
  - If your repo has no tags and the manifest is already bumped to the next version, use `--previous-version-from-manifest-history` to infer the previous distinct version from the manifest's git history.
- `prepublish` computes a predicted `v<next>` and prepares release outputs locally.
- `postpublish` creates a local release commit and an annotated tag `v<next>` pointing at that commit after publish succeeds, then pushes the branch + tag.
- Manifests (e.g. `package.json`, `.csproj`) are updated to match `v<next>` (unless `--no-write`).
## CLI

LLM mode is required for `changelog`, `release-notes`, and `prepublish`: you must pass `--llm openai` or `--llm azure`. `postpublish` does not use the LLM and does not accept `--llm`. Providers are documented in the "LLM providers" section below.

```shell
ai-publish changelog [--base <commit>] [--out <path>] [--index-root-dir <path>] --llm <openai|azure> [--commit-context <none|snippet|full>] [--commit-context-bytes <n>] [--commit-context-commits <n>] [--debug]
ai-publish release-notes [--base <commit>] [--previous-version <semver>] [--out <path>] [--index-root-dir <path>] --llm <openai|azure> [--commit-context <none|snippet|full>] [--commit-context-bytes <n>] [--commit-context-commits <n>] [--debug]
ai-publish prepublish [--base <commit>] [--previous-version <semver>] [--previous-version-from-manifest-history] [--project-type <npm|dotnet|rust|python|go>] [--manifest <path>] [--package <path>] [--no-write] [--out <path>] [--index-root-dir <path>] --llm <openai|azure> [--debug]
ai-publish postpublish [--project-type <npm|dotnet|rust|python|go>] [--manifest <path>] [--publish-command <cmd>] [--skip-publish] [--debug]
ai-publish --help
```

Postpublish publish control:

- `--publish-command <cmd>`: run your own publish step before commit/tag/push.
- `--skip-publish`: skip the built-in publish step entirely.
## Outputs and defaults

- `changelog`
  - Default output path: `CHANGELOG.md`
  - Writes the changelog markdown, then prints a JSON summary (base resolution, tags, etc.).
  - If the output file already exists, prepends the newly generated version entry at the top (full history).
  - Special case: `## [Unreleased]` is replaced (upsert) rather than duplicated.
  - Legacy `# Changelog (<base>..<head>)` headers are migrated to a `## [<version>]` section when possible.
- `release-notes`
  - If `--out` is provided, writes exactly there.
  - If `--out` is not provided:
    - If `HEAD` is already tagged `v<semver>`, writes `release-notes/<tag>.md`.
    - Otherwise (most common, when `--base` is omitted), computes the next version tag and writes `release-notes/v<next>.md`.
    - If you pass an explicit `--base` and `HEAD` is not tagged, the default output remains `RELEASE_NOTES.md`.
  - Always prints a JSON summary.
- `prepublish`
  - Refuses to run if the git worktree is dirty.
  - Refuses to run if `HEAD` is already tagged with a version tag.
  - If no version tags exist, infers `previousVersion` from the selected manifest (or use `--previous-version`).
    - For `--project-type go` without tags, you must pass `--previous-version`.
  - Writes:
    - changelog (default `CHANGELOG.md`, overridable via `--out`)
    - release notes under `release-notes/v<next>.md`
    - optionally updates the selected manifest version (disabled via `--no-write`)
  - Does not create a commit or tag (those are created by `postpublish` after publish succeeds).
  - Prints a JSON result to stdout (it does not print the markdown).
  - `--package <path>` is a backwards-compatible alias for npm manifests; it implies `--project-type npm`.
Changelog behavior:

- If the changelog output file already exists, prepublish prepends the newly generated version entry at the top (full history).
- Legacy `# Changelog (<base>..<head>)` headers are migrated to a `## [<version>]` section when possible.
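For illustration, the prepend behavior produces a file shaped roughly like this (versions, dates, and bullet text are placeholders; the exact headings come from the Keep a Changelog rendering):

```markdown
# Changelog

## [1.5.0] - 2025-06-01

### Added

- (newly generated entry, prepended at the top)

## [1.4.2] - 2025-05-10

### Fixed

- (existing history, preserved below)
```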
- `postpublish`
  - Requires `.ai-publish/prepublish.json` (i.e., you must run `prepublish` first).
  - Requires being on a branch (not detached `HEAD`).
  - Runs a project-type-specific publish step.
  - After publish succeeds, creates a release commit + annotated `v<next>` tag, then pushes the branch + tag.
  - Prints a JSON result to stdout.
  - Note: `--llm` is not accepted for postpublish.
## Logging and tracing (pipelines)

ai-publish prints machine-readable JSON to stdout for several commands. To keep stdout parseable, all logs are written to stderr.

Environment variables:

- `AI_PUBLISH_LOG_LEVEL`: `silent` | `info` | `debug` | `trace` (default: `info` for CLI runs, `silent` for programmatic usage)
- `AI_PUBLISH_TRACE_TOOLS=1`: logs which bounded semantic tools were called, along with request counts and budget usage (no full diff/snippet dumping)
- `AI_PUBLISH_TRACE_LLM=1`: logs LLM request/response metadata (provider + label + sizes)
- `AI_PUBLISH_TRACE_LLM_OUTPUT`: prints raw structured LLM outputs (truncated) to stderr (enabled by default for CLI runs; set to `0` to disable)
## postpublish publish steps by project type

- `npm`: runs `npm publish`
- `dotnet`: pushes already-built packages from `bin/Release` using `dotnet nuget push`.
  - It only pushes the `.nupkg` matching the `predictedTag` from `prepublish` (to avoid accidentally re-publishing old packages left in the build output).
  - By default, it does not pass `--source`, so `dotnet nuget push` uses your `nuget.config` (e.g. `defaultPushSource`).
  - To override, set `AI_PUBLISH_NUGET_SOURCE` (or `NUGET_SOURCE`).
  - Configure auth with `AI_PUBLISH_NUGET_API_KEY` (or `NUGET_API_KEY`).
  - For Azure DevOps Artifacts, set `AI_PUBLISH_NUGET_SOURCE` to your feed's v3 URL and use a PAT as the API key.
- `rust`: runs `cargo publish`
- `go`: no publish command (the "publish" is the pushed tag)
- `python`: runs `python -m build`, then `python -m twine upload dist/*`
## Optional repo instructions (improves accuracy)

ai-publish can scan the target repo (the repo you are releasing) for hierarchical instruction files and feed them into changelog, release-notes, and prepublish generation as context-only configuration.

Supported instruction filenames:

- `AGENTS.md`
- `copilot-instructions.md`
- `.github/copilot-instructions.md`

Resolution is hierarchical (repo root → directories containing changed files); when multiple instruction files define the same directive key, the nearest file wins.

One practical use: helping ai-publish identify what constitutes public API in repos that don't follow the default heuristics (monorepos, unusual layouts, non-TypeScript projects).

Add one of these directives to an instruction file:

- `ai-publish.publicPathPrefixes: src/public, include, api`
- `ai-publish.publicFilePaths: src/entrypoint.ts`
- `ai-publish.internalPathPrefixes: generated, vendor`

These directives influence surface classification (public-api vs internal) and therefore breaking-change heuristics and prioritization, but they do not change the core invariant: only `git diff <base>..HEAD` is evidence of what changed.
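For example, an `AGENTS.md` at the repo root might carry these directives (the directive keys are the ones listed above; the paths, and the assumption that the directives can sit as plain lines in the file, are illustrative):

```markdown
<!-- AGENTS.md (repo root) -->

ai-publish.publicPathPrefixes: src/public, include, api
ai-publish.internalPathPrefixes: generated, vendor
```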
## Programmatic usage (TS/JS)

The same functionality is available as a library API with CLI-equivalent parameters.

### Custom LLM clients

For programmatic use, you may optionally provide your own `llmClient` implementation (alternate providers, wrappers/instrumentation, caching, or network-free tests). When `llmClient` is provided, it is used instead of constructing the default client from environment variables.

```typescript
import { generateChangelog, generateReleaseNotes } from "ai-publish";

await generateChangelog({
  llm: "openai",
  // llmClient: myCustomClient,
  // base: "<sha>",
  // outPath: "CHANGELOG.md",
  // indexRootDir: "/tmp/ai-publish",
  // cwd: process.cwd(),
});

await generateReleaseNotes({
  llm: "openai",
  // llmClient: myCustomClient,
  // base: "<sha>",
  // outPath: "RELEASE_NOTES.md",
  // indexRootDir: "/tmp/ai-publish",
  // cwd: process.cwd(),
});
```

## LLM providers
### OpenAI

Set environment variables:

- `OPENAI_API_KEY`
- `OPENAI_MODEL` (a chat model that supports JSON-schema structured outputs)
- `OPENAI_BASE_URL` (optional; default `https://api.openai.com/v1`)
- `OPENAI_TIMEOUT_MS` (optional)

Note: OpenAI mode uses Structured Outputs (JSON schema). Your selected model must support `response_format: { type: "json_schema", ... }` for Chat Completions.
### Azure OpenAI

Set environment variables:

- `AZURE_OPENAI_ENDPOINT` (e.g. `https://<resource-name>.openai.azure.com`)
- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_DEPLOYMENT` (your chat model deployment name)
- `AZURE_OPENAI_API_VERSION` (optional; default `2024-08-01-preview`)
- `AZURE_OPENAI_TIMEOUT_MS` (optional)

Note: LLM mode uses Structured Outputs (JSON schema) and requires Azure OpenAI API version `2024-08-01-preview` or later.
## Testing

- `npm test` runs network-free unit + integration tests.
- End-to-end changelog and release-notes generation are covered by integration tests that create temporary git repo fixtures and use a local stub LLM client, so outputs are stable without network calls.
### Local semantic acceptance (optional)

Additional integration tests can ask Azure OpenAI to judge whether the generated changelog/release notes accurately reflect the evidence.

- Opt-in and skipped by default (so CI remains deterministic and network-free).
- Local-only: skipped when `CI` is set.
- Run with `npm run test:llm-eval` (requires the Azure env vars listed above).
- Internally gated by `AI_PUBLISH_LLM_EVAL=1` (the script sets it for you).

The evaluator uses structured JSON output with this schema:

```
{ "accepted": boolean, "reason": string | null }
```
### Local Azure generation (optional)

An additional integration test can ask Azure OpenAI to generate changelog/release-notes output end-to-end.

- Opt-in and skipped by default (so CI remains deterministic and network-free).
- Local-only: skipped when `CI` is set.
- Run with `npm run test:llm-generate` (requires the Azure env vars listed above).
- Internally gated by `AI_PUBLISH_LLM_GENERATE=1` (the script sets it for you).
### When to run LLM tests

If you change any of the following, run both `npm run test:llm-eval` and `npm run test:llm-generate` in addition to `npm test`:

- `src/llm/*` (Azure/OpenAI clients)
- LLM pipeline orchestration in `src/pipeline/*`
- Output schemas/contracts used by the LLM passes
## Troubleshooting

- `Missing required flag: --llm`
  - `changelog`, `release-notes`, and `prepublish` require `--llm openai` or `--llm azure`.
- `HEAD is already tagged ... Refusing to prepublish twice.`
  - `prepublish` is intentionally one-shot per version. Move `HEAD` forward or delete the tag if you're intentionally retrying.
- `No user-facing changes detected (bumpType=none). Refusing to prepare a release.`
  - ai-publish refuses to cut a release if the changelog model has no user-facing changes.
- `Missing .ai-publish/prepublish.json. Run prepublish first.`
  - `postpublish` requires the intent file written by `prepublish`.
- `.NET postpublish requires --manifest <path/to.csproj>`
  - Provide `--manifest` for the `dotnet` project type.
- `Missing NuGet API key...`
  - Set `AI_PUBLISH_NUGET_API_KEY` (or `NUGET_API_KEY`) before running `dotnet` postpublish.
- `Semantic pass request: expected JSON but got: ...`
  - This usually means the LLM provider returned extra text, multiple JSON objects, or truncated output due to output token limits.
  - ai-publish runs the semantic "tool request" phase in small batches across multiple rounds; if you still see this intermittently, enable request/response tracing to diagnose provider behavior: `AI_PUBLISH_TRACE_LLM=1`, `AI_PUBLISH_LOG_LEVEL=debug`.
  - On Azure, ensure `AZURE_OPENAI_API_VERSION` is `2024-08-01-preview` or later (Structured Outputs).
