@waxmard/git-ai v6.0.0
LLM-powered git workflow tools — generate commit messages and PR titles using Claude, Gemini, or Codex
# git-ai

LLM-powered git workflow tools. Generate commit messages and PR titles using explicit auth methods and model IDs from the CLI, Lazygit, and other git environments that expose normal Git state.
## Install

```shell
npm install -g @waxmard/git-ai
```

Or clone and symlink for local development:

```shell
make install    # symlinks to ~/.local/bin and ~/.local/lib; edits are live
make uninstall
```

## Prerequisites
At least one auth method must be available:
| Auth Method | Runtime | Auth |
|-------------|---------|------|
| vertex-gemini | curl + python3 + gcloud | Google ADC / Vertex credentials |
| vertex-anthropic | curl + python3 + gcloud | Google ADC / Vertex credentials |
| gemini-api | Gemini CLI | GEMINI_API_KEY or system keychain |
| claude-code | Claude Code CLI | Claude Code CLI session |
| anthropic-api | curl + python3 | ANTHROPIC_API_KEY |
| codex | Codex CLI | Codex CLI session |
| openai-api | curl + python3 | OPENAI_API_KEY |

Vertex AI support is limited to the Gemini (`vertex-gemini`) and Anthropic (`vertex-anthropic`) model families. Other publishers available on Vertex (Meta Llama, Mistral, etc.) are not yet supported.

`anthropic-api` and `openai-api` require curl and python3, both standard on macOS and most Linux systems.
### Gemini API auth

git-ai tries these in order until one succeeds:

1. `GEMINI_API_KEY` environment variable
2. System keychain — store the key as `gemini-api-key`:
   - macOS: `security add-generic-password -s gemini-api-key -a "$USER" -w YOUR_KEY`
   - GNOME / libsecret: `secret-tool store --label="Gemini API Key" service gemini-api-key`
   - pass: `pass insert gemini-api-key`
   - KDE Wallet: `kwallet-query kdewallet -w gemini-api-key`
3. Google Application Default Credentials (ADC): `gcloud auth application-default login`, or set `GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json`
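The fallback chain above can be pictured as a try-in-order resolver. The sketch below is a minimal illustration of that order, not git-ai's actual implementation — the keychain and ADC resolvers are stand-in placeholders:

```python
import os
from typing import Callable, Optional

def resolve_gemini_key(resolvers: list[Callable[[], Optional[str]]]) -> Optional[str]:
    """Return the first credential a resolver yields, or None if all fail."""
    for resolve in resolvers:
        key = resolve()
        if key:
            return key
    return None

# Stand-in resolvers mirroring the documented order:
# env var -> system keychain -> Application Default Credentials.
resolvers = [
    lambda: os.environ.get("GEMINI_API_KEY"),
    lambda: None,  # keychain lookup would go here (security / secret-tool / pass / kwallet)
    lambda: None,  # ADC would go here (gcloud application-default credentials)
]

key = resolve_gemini_key(resolvers)
```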
## Commands

Both `git-ai` and `aigit` work identically — use whichever you prefer.
### commit

Generate a commit message from staged changes.

```shell
git-ai commit [auth-method] [model-id]
```

- Reads `git diff --staged` and produces a Conventional Commits message
- Includes a description body for non-trivial changes
- No default auth method on a fresh repo; choose one explicitly
- All auth methods default to a lightweight model when `model-id` is omitted
- Pass `last` as the provider to reuse the previously generated message
### pr

Generate a PR title and body from the current branch.

```shell
git-ai pr [auth-method] [model-id] [--base <branch>] [--fresh] [--from-sha <commit>]
git-ai mr [...]   # alias for pr
```

- Reads the commit log and diff against the base branch
- Produces a Conventional Commits title + markdown body with a `### Test Plan` section
- Auto-detects the base branch from the remote default (falls back to `main`)
- Use `--base` to override (e.g. `--base dev`)
- Saves the generated output per current-branch/base-branch pair under `.git/pr-cache/`; subsequent runs with the same pair refine the previous result automatically
- Use `--fresh` to ignore the saved output and regenerate from scratch
- Use `--from-sha` to override the saved HEAD and regenerate only from commits after a specific prior generated commit
- No default auth method on a fresh repo; choose one explicitly
- All auth methods default to a stronger model when `model-id` is omitted
### options

List every auth-method / model combo as a flat pipe-delimited list, LRU-sorted. This is the primary input for the fzf-based Lazygit integration, and it is also useful for custom pickers.

```shell
git-ai options [commit|pr]
```

- Emits one `provider:model|<label>` line per selectable combo
- For `commit`, also emits `last|reuse saved message` when a saved message exists
- Most-recent picks (from `.git/{tool}-choice-history`) float to the top; remaining combos follow in default order
- `git-ai commit <provider:model>` and `git-ai pr <provider:model>` accept the emitted value directly
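A custom picker can consume that output by splitting each line on the first `|` to separate the selectable value from its display label. A minimal sketch (the sample lines here are illustrative, not real git-ai output):

```python
def parse_options_line(line: str) -> tuple[str, str]:
    """Split 'provider:model|<label>' into (value, label).

    The value ('provider:model', or 'last') is what you pass back to
    `git-ai commit` / `git-ai pr`; the label is what you display.
    """
    value, _, label = line.partition("|")
    return value, label

# Illustrative lines in the documented format
sample = [
    "claude-code:claude-haiku-4-5|Claude Code - Haiku",
    "last|reuse saved message",
]
choices = [parse_options_line(line) for line in sample]
```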
### providers / models

List available auth methods and models, ordered by last-used. Kept for scripting and as a fallback when `options` isn't a fit.

`last` is only a commit provider option; PR refinement reuses cached prior output automatically.

```shell
git-ai providers [commit|pr]
git-ai models <auth-method> [commit|pr]
```

## Python library
git-ai is also distributed as a Python package (`waxmard-git-ai`) so other tools can reuse the same commit-message and MR-description prompt assembly without shelling out.

```shell
pip install waxmard-git-ai
# or: uv add waxmard-git-ai
```

The Python package is provider-agnostic and ships with zero runtime dependencies: it owns prompt assembly, diff-stat derivation, fence-stripping, git helpers, and PR-cache management. It never calls an LLM. Consumers wire their own model call between `build_*_prompt` and `parse_*_response`. Bring your own Claude / Gemini / OpenAI / ADK / anything — sync or async.
Commit message (data-mode):

```python
import git_ai

system, user = git_ai.build_commit_prompt(diff_text)
raw = my_llm(system, user)  # your call: SDK, agent framework, REST, etc.
commit_msg = git_ai.parse_commit_response(raw)
```

MR/PR description (data-mode — no local checkout, e.g. fetched from the GitHub/GitLab API):

```python
import git_ai

log = git_ai.format_commit_log((c.title, c.message) for c in mr_commits)
system, user = git_ai.build_mr_prompt(
    diff=diff_text,
    commit_log=log,
    existing_pr=current_pr_body or None,
)
raw = my_llm(system, user)
pr_text = git_ai.parse_mr_response(raw)

# Optional: render a compact ~ / + / - delta against the prior PR
delta = git_ai.render_pr_diff(current_pr_body, pr_text, color=False) or None
```

`diff_stat` and `release_context` are optional — when omitted, the diff-stat is derived from the diff and a generic "no release tags found" context is used. Model selection, retries, auth, and error handling are the caller's responsibility (inside `my_llm`).
Repo-mode (reads staged diff / base..HEAD from a local checkout):

```python
import git_ai

# Commit message from staged changes (auto-loads .git-ai-ignore)
diff = git_ai.get_staged_diff(".")
system, user = git_ai.build_commit_prompt(
    diff, release_context=git_ai.get_release_context("."),
)
commit_msg = git_ai.parse_commit_response(my_llm(system, user))

# PR description with incremental cache reuse
ctx = git_ai.prepare_repo_pr_context(".", base_branch="main")
if ctx.no_changes:
    pr_text = ctx.existing_pr  # HEAD unchanged, reuse cached PR
else:
    system, user = git_ai.build_mr_prompt(
        diff=ctx.diff,
        commit_log=ctx.commit_log,
        diff_stat=ctx.diff_stat,
        release_context=ctx.release_context,
        existing_pr=ctx.existing_pr,
    )
    pr_text = git_ai.parse_mr_response(my_llm(system, user))
    if ctx.current_branch:
        git_ai.save_cached_pr(
            git_ai.get_git_dir("."),
            ctx.current_branch,
            "main",
            pr_text,
            ctx.head_sha,
        )
```

`prepare_repo_pr_context` reuses `.git/pr-cache/` automatically, sets `no_changes=True` when HEAD matches the cached SHA (so callers can skip the LLM entirely), and narrows the diff/commit_log to commits after the last generated HEAD when possible. Pass `fresh=True` to bypass the cache for one call, or `previous_head_sha=` to override the cached incremental base explicitly.
Data-mode is stateless. To get the same efficiency in remote consumers, persist the prior PR text and last generated head SHA yourself, fetch the incremental diff/log since that SHA from your SCM, and pass them to `build_mr_prompt(diff=..., commit_log=..., existing_pr=...)`.
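One way to structure that persistence in a remote consumer is a small record keyed by branch. This is a hypothetical sketch, not part of the git-ai API — the `fetch_diff_since` and `build_and_call` callables are stand-ins for your own SCM/API fetch and your `build_mr_prompt` + model call + `parse_mr_response` pipeline:

```python
from dataclasses import dataclass

@dataclass
class PrRecord:
    """What a remote consumer persists between runs."""
    pr_text: str
    head_sha: str

store: dict[str, PrRecord] = {}  # stand-in for your real database

def generate_pr(branch: str, head_sha: str, fetch_diff_since, build_and_call) -> str:
    """Reuse the prior record when HEAD is unchanged; otherwise
    regenerate incrementally from the last generated SHA."""
    prior = store.get(branch)
    if prior and prior.head_sha == head_sha:
        return prior.pr_text  # nothing new: skip the LLM entirely
    since = prior.head_sha if prior else None
    diff, log = fetch_diff_since(since)  # your SCM API call
    pr_text = build_and_call(diff, log, prior.pr_text if prior else None)
    store[branch] = PrRecord(pr_text, head_sha)
    return pr_text
```

In a real consumer, `build_and_call` would wrap `git_ai.build_mr_prompt(...)`, your model call, and `git_ai.parse_mr_response(...)`, mirroring the data-mode example above.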
Async / agent-framework example — the prompt builders are pure, so anything goes inside the LLM call. Pass `system` and `user` to whatever your SDK expects (Anthropic `system=` + `messages=[{"role": "user", ...}]`, OpenAI/Gemini message lists, ADK agent instruction + input, etc.):

```python
import git_ai
from anthropic import AsyncAnthropic

client = AsyncAnthropic()

async def commit_msg(diff: str) -> str:
    system, user = git_ai.build_commit_prompt(diff)
    resp = await client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": user}],
    )
    return git_ai.parse_commit_response(resp.content[0].text)
```

## Excluding noisy files (.git-ai-ignore)
Lockfiles and other generated artifacts can dominate a diff and push it past the LLM provider's input cap. git-ai always excludes the following filenames from `git diff --staged` (commit) and `git diff base...HEAD` (pr):

```
package-lock.json  yarn.lock      pnpm-lock.yaml  npm-shrinkwrap.json
Gemfile.lock       Cargo.lock     go.sum          poetry.lock
uv.lock            composer.lock  Pipfile.lock    pubspec.lock
mix.lock           flake.lock
```

Drop a `.git-ai-ignore` file at the repo root to add more patterns (one per line; `#` comments and blank lines are ignored). Patterns are Git pathspec glob fragments that git-ai prefixes with `**/`, so `generated/**/*.ts` matches TypeScript files under any `generated/` directory; leading `/` is not .gitignore root syntax. Lines starting with `!` re-include a pattern, useful when you actually want to review a built-in default:

```
build/dist.js
generated/**/*.ts
# Re-include this lockfile when you want to review it
!package-lock.json
```

If the post-exclude diff is still over `GIT_AI_MAX_DIFF_BYTES` (default 900000; set 0 to disable), git-ai aborts with a "Largest changed files" hint pointing at what to ignore or unstage.

In the Python library, `get_staged_diff`, `get_diff`, and `get_diff_stat` auto-load `.git-ai-ignore` and apply built-in lockfile defaults when `exclude_patterns` is omitted. Pass `exclude_patterns=[]` to opt out of all filtering.
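The `**/` prefixing can be illustrated with a small translator from `.git-ai-ignore` lines into git `:(exclude)` pathspec arguments — a sketch of the documented behavior, not git-ai's actual code:

```python
def ignore_lines_to_pathspecs(lines: list[str]) -> tuple[list[str], list[str]]:
    """Return (excludes, reincludes) derived from .git-ai-ignore-style lines.

    Comments and blanks are skipped; '!' lines name patterns to
    re-include rather than exclude. Each pattern is prefixed with '**/'
    so it matches at any depth, per the documented semantics.
    """
    excludes, reincludes = [], []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("!"):
            reincludes.append(f"**/{line[1:]}")
        else:
            excludes.append(f":(exclude)**/{line}")
    return excludes, reincludes
```

An exclude produced this way would reach git as a trailing pathspec, e.g. `git diff --staged -- . ':(exclude)**/generated/**/*.ts'`.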
## Narrowing the picker list

By default `git-ai options` enumerates every supported provider/model combo. Most users only have access to a couple. To restrict the picker to just the providers and models you actually use, drop a config file at `$XDG_CONFIG_HOME/git-ai/options.conf` (usually `~/.config/git-ai/options.conf`):

```
[claude-code]
claude-haiku-4-5-20251001
claude-sonnet-4-6

[codex]
gpt-5.4-mini

# Empty sections hide these providers entirely
[vertex-gemini]
[vertex-anthropic]
```

- `[provider]` headers must be one of: `vertex-gemini`, `vertex-anthropic`, `gemini-api`, `claude-code`, `anthropic-api`, `codex`, `openai-api`. Unknown headers are silently dropped.
- Model IDs under a header are passed through to the provider verbatim, so you can list future model IDs (e.g. a newly released `claude-sonnet-5-0`) without waiting for a git-ai release.
- Delete the file to restore the full shipped catalog.
- See `examples/options.conf` for a starter.
## Terminal picker

Running `git-ai commit` or `git-ai pr` without a provider argument launches an inline fzf picker over the same provider/model combos Lazygit uses. History entries float to the top. Pass `provider` or `provider:model` to skip the picker. Flags still parse, so `git-ai pr --base staging` opens the picker and then runs against the chosen base.

Set `GIT_AI_NO_FZF=1` (or pipe stdout) to disable the picker for scripting. If fzf isn't installed, the tools fall back to the last saved choice.
## Lazygit integration

Requires fzf on your PATH. Add the following under `customCommands:` in `~/.config/lazygit/config.yml`:

```yaml
customCommands:
  - key: "<c-g>"
    description: "AI commit message (git-ai + fzf)"
    context: "files"
    command: |
      choice=$(git-ai options commit | fzf --delimiter='|' --with-nth=2 --no-sort --tiebreak=index --prompt='git-ai> ') || exit 0
      git commit -m "$(git-ai commit "${choice%%|*}")" --edit
    output: terminal
```

Pressing `<c-g>` in the files panel opens an fzf picker showing every auth+model combo (plus "reuse saved message" when available). Typeahead narrows instantly; Enter commits with the generated message. Selections float to the top of the list on subsequent invocations.
## Compatibility

git-ai does not depend on a specific terminal UI. It works in the CLI, in Lazygit, and in similar git environments as long as Git exposes the required repository state:

- `git-ai commit` needs staged changes (`git diff --staged`)
- `git-ai pr` needs commits and diff data relative to a base branch
