@mantisware/commit-ai
v1.0.10
Create amazing commits in just seconds. Say farewell to boring commits with AI! 🤯🔥
Install CommitAI as a CLI Tool
CommitAI automates meaningful commit messages from the CLI via the cmt command. In seconds, your staged changes are committed with an AI-generated message.
Installation
Install CommitAI globally for use in any repository:
pnpm add -g @mantisware/commit-ai

Obtain an API key from OpenAI or another supported LLM provider. Ensure your OpenAI account has an active payment method for API access.
Configure CommitAI with your API key:
cmt config set CMT_API_KEY=<your_api_key>

The API key is stored securely in ~/.commit-ai.
Usage
To generate a commit message for staged changes, run:
git add <files...>
cmt

Running git add is optional; cmt will automatically stage changes for you.
Running Locally with Ollama
You can also run CommitAI with a local model through Ollama:
- Install and start Ollama.
- Execute ollama run mistral (only once, to pull the model).
- In your project directory, configure CommitAI:

git add <files...>
cmt config set CMT_AI_PROVIDER='ollama' CMT_MODEL='llama3:8b'

By default, the model used is mistral.
If Ollama runs on another machine or within Docker with GPU support, update the API endpoint:
cmt config set CMT_API_URL='http://192.168.1.10:11434/api/chat'

Replace 192.168.1.10 with the appropriate endpoint.
Running with DeepSeek Locally with LM Studio
You can also run CommitAI with a local model through LM Studio:
- Install and start LM Studio.
- Add the DeepSeek Coder model in LM Studio. Current models: deepseek-coder-v2-lite-instruct, or for macOS, deepseek-coder-v2-lite-instruct-mlx.
- In your ~/.commit-ai, configure CommitAI:

cmt config set CMT_MODEL='deepseek-coder-v2-lite-instruct-mlx' CMT_API_URL='http://127.0.0.1:1234' CMT_AI_PROVIDER='deepseek'

Replace http://127.0.0.1:1234 with the appropriate endpoint provided by LM Studio.
Configuration Options
Local Repository Configuration
Add CommitAI configurations to a .env file in your repository:
CMT_AI_PROVIDER=<openai (default), anthropic, azure, ollama, gemini, groq, mistral, flowise, mlx, deepseek>
CMT_API_KEY=<your OpenAI API token> # or another LLM provider API key
CMT_API_URL=<optional proxy path to OpenAI API>
CMT_TOKENS_MAX_INPUT=40960 # Maximum input tokens (optional, provider/model specific)
CMT_TOKENS_MAX_OUTPUT=4096 # Maximum output tokens (optional, provider/model specific)
CMT_DESCRIPTION=false # Append a brief description of changes (default: false)
CMT_EMOJI=false # Enable GitMoji support (default: false)
CMT_MODEL='gpt-4o-mini' # AI model (default: 'gpt-4o-mini' for openai)
CMT_LANGUAGE='en' # Language preference (default: 'en')
CMT_MESSAGE_TEMPLATE_PLACEHOLDER='$msg' # Message template placeholder
CMT_PROMPT_MODULE='conventional-commit' # Use 'conventional-commit' or '@commitlint'
CMT_ONE_LINE_COMMIT=false # Single-line commit messages
CMT_WHY=false # Focus description on WHY changes were made (vs WHAT changes are)
CMT_SML=false # Generate condensed single-line messages per file with filename, line numbers, and brief description
CMT_DEBUG=false # Enable debug logging for troubleshooting
CMT_MAX_FILES=50 # Maximum number of files allowed in a single commit (optional)
CMT_MAX_DIFF_BYTES=102400 # Maximum diff size in bytes (100 KB, optional)
CMT_REVIEW_MIN_SCORE=70 # Minimum code quality score (0-100) required when using --review flag (optional)

Global Configuration
Global settings are stored in ~/.commit-ai and configured with:
cmt config set CMT_MODEL=gpt-4o

Local settings take precedence over global configurations.
Enable Full GitMoji Support
By default, CommitAI limits GitMoji to 10 emojis (🐛✨📝🚀✅♻️⬆️🔧🌐💡) to optimize API usage. To enable full GitMoji support:
cmt --fgm

Ensure CMT_EMOJI is set to true.
Skip Commit Confirmation
To commit changes without requiring manual confirmation:
cmt --yes

Advanced CLI Options
Dry Run Mode - Generate a commit message without actually committing:

cmt --dry-run

Edit Before Committing - Open the generated message in your $EDITOR before committing:

cmt --edit # or -e

Skip Push Prompts - Commit without being prompted to push:

cmt --no-push

Stage All & Commit - Non-interactively stage all files and commit:

cmt --stage-all # or -a

These flags can be combined:

cmt --stage-all --edit --no-push

Single-line Multi-file Log (SML Mode)
For large commits where you want a quick overview, enable SML mode to generate condensed per-file messages:
cmt config set CMT_SML=true
Example output format:
src/commands/config.ts:L29-L32 - Added CMT_SML configuration option
src/prompts.ts:L122-L125 - Implemented SML instruction generator
README.md:L105 - Documented SML feature

Each line shows:
- Filename with relative path
- Line numbers or ranges where changes occurred
- Brief description of what changed
This is particularly useful for:
- Code reviews of large changesets
- Quick scanning of multi-file commits
- Understanding the scope of changes at a glance
Commit Size Guardrails
Prevent accidentally committing too many files or too much code at once by setting limits:
Limit Maximum Files - Reject commits with more than N files:
cmt config set CMT_MAX_FILES=50

Limit Maximum Diff Size - Reject commits when the diff exceeds N bytes:

cmt config set CMT_MAX_DIFF_BYTES=102400 # 100 KB

When a limit is exceeded, CommitAI will display a clear error with actionable suggestions:
- Split changes into smaller, focused commits
- Unstage some files
- Adjust the configured limits
These guardrails help maintain code review quality and encourage atomic commits.
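For intuition, the same pre-flight checks can be approximated with plain git; a rough sketch of the idea (CommitAI's real checks live in its own config — the throwaway repo and two staged files here are purely illustrative):

```shell
# Illustrative pre-flight check mirroring CMT_MAX_FILES / CMT_MAX_DIFF_BYTES.
# Set up a throwaway repo so the snippet is self-contained.
repo=$(mktemp -d)
cd "$repo" && git init -q .
printf 'a\n' > a.txt
printf 'b\n' > b.txt
git add a.txt b.txt

max_files=50
max_bytes=102400

# Count staged files and measure the staged diff size.
files=$(git diff --cached --name-only | wc -l)
bytes=$(git diff --cached | wc -c)

if [ "$files" -gt "$max_files" ] || [ "$bytes" -gt "$max_bytes" ]; then
  echo "commit too large: $files files, $bytes bytes"
else
  echo "within limits: $files files, $bytes bytes"
fi
```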
AI-Powered Code Review
CommitAI includes a comprehensive code review feature that analyzes your staged changes for security vulnerabilities, performance issues, code quality, and best practices.
Running a Code Review
Analyze your staged changes before committing:
# Stage your changes
git add <files>
# Run code review
cmt review

Review Categories
The AI reviewer analyzes code across multiple dimensions:
- Security: SQL injection, XSS, authentication issues, exposed secrets
- Performance: Inefficient algorithms, memory leaks, bottlenecks
- Best Practices: Design patterns, language conventions, industry standards
- Code Quality: Readability, maintainability, naming conventions
- Bugs & Edge Cases: Potential bugs, race conditions, null pointers
- Style: Formatting consistency, code organization
Review Output
Each review provides:
- Summary: Brief overview of code quality
- Overall Score: 0-100 quality score
- Recommendation:
  - APPROVED (80-100): Ready to commit
  - REVIEW SUGGESTED (50-79): Address findings
  - BLOCKED (0-49): Fix critical issues
- Detailed Findings: Categorized issues with severity levels, descriptions, and suggestions
Example Output
┌──────────────────────────────────────────────────────────────────────────┐
│ Code Review Results                                                      │
├──────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│ The code introduces a new authentication endpoint with good structure    │
│ but has a critical security vulnerability related to password handling   │
│ and lacks input validation.                                              │
│                                                                          │
├──────────────────────────────────────────────────────────────────────────┤
│ Overall Quality Score: 65/100                                            │
│ Recommendation: ! REVIEW SUGGESTED - Address findings                    │
├──────────────────────────────────────────────────────────────────────────┤
│ Findings (3)                                                             │
├──────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│ ❌ SECURITY - Plain text password storage                                │
│ 📍 src/auth/login.ts:L45                                                 │
│ Passwords are being stored in plain text without hashing. This is a      │
│ critical security vulnerability that exposes user credentials.           │
│ 💡 Suggestion:                                                           │
│ Use bcrypt or argon2 to hash passwords before storage                    │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘

JSON Output
For integration with CI/CD pipelines:
cmt review --json > review-results.json

Exit Codes
- 0: Review passed (approved or review suggested)
- 1: Critical issues found (blocked) or error
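These exit codes make the command easy to gate in CI; a minimal sketch of the branching, with a stub standing in for cmt review since the real command needs staged changes and provider credentials:

```shell
# Stand-in for `cmt review`; swap in the real command inside CI.
# `true` exits 0 (review passed); replace with `false` to simulate a blocked review.
run_review() { true; }

if run_review; then
  status=passed
else
  status=blocked
fi
echo "review: $status"
```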
Automatic Review Before Commit
Use the --review (or -r) flag to automatically run code review before committing:
# Stage files and commit with automatic review
git add <files>
cmt --review
# Or use the short flag
cmt -r

The review will run automatically, and you'll be prompted to continue or abort based on the results:
- APPROVED: Automatically proceeds with commit
- REVIEW SUGGESTED: Prompts you to continue or abort
- BLOCKED: Prompts you (defaulting to abort) due to critical issues
Quality Score Threshold
Set a minimum quality score that code must achieve before committing:
# Require minimum score of 70
cmt config set CMT_REVIEW_MIN_SCORE=70

When set, commits with scores below the threshold will automatically be blocked:
$ cmt --review
❌ Code quality score (65) is below the minimum threshold (70).
Please improve the code or adjust the threshold: cmt config set CMT_REVIEW_MIN_SCORE <number>

This is useful for:
- Enforcing code quality standards across teams
- Preventing commits with critical security or performance issues
- Maintaining consistent quality in CI/CD pipelines
Code Standards Configuration
Configure project-specific code standards to get more targeted review feedback:
# Import from popular style guides
cmt standards import
# Available style guides:
# - React + TypeScript (Airbnb)
# - Angular + TypeScript
# - Vue 3 + TypeScript
# - Node.js + Express
# - Python (PEP 8)
# - Java (Google Style)
# - Go (Golang)
# - Rust
# - TypeScript (Strict)
# - C# (.NET)
# View current standards
cmt standards view
# Create custom standards interactively
cmt standards set

How it works:
- Standards are stored in a .commit-ai-standards file in your repository root
- When you run cmt review or cmt --review, the AI uses these standards for analysis
- Review findings will specifically call out violations of your configured standards
- You'll be prompted to configure standards on first review (you can proceed without them)
Example workflow:
# First time setup
cmt standards import # Choose React + TypeScript
git add .commit-ai-standards
git commit -m "Add code review standards"
# Now reviews use your standards
cmt review

Excluding Files from Review
Create a .commit-ai-review-ignore file in your repository root to exclude specific files or patterns from code review:
# .commit-ai-review-ignore
*.test.ts
*.spec.js
test/**
docs/**
*.md
generated/**
*.lock

The syntax is the same as .gitignore. Files matching these patterns will be excluded from AI analysis but still included in commits.
Use cases:
- Exclude test files from review to focus on production code
- Skip generated code or vendor files
- Ignore documentation files to reduce AI token usage
- Exclude files that don't need quality checks
Note: This only affects code review (cmt review and cmt --review). For excluding files from commit message generation, use .commit-aiignore instead.
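A quick way to sanity-check which files a pattern would exclude is plain shell globbing; a rough approximation (note that shell `case` globs are simpler than full .gitignore semantics such as `**`):

```shell
# Approximate check of which filenames an ignore pattern would exclude.
# Shell `case` globs do not implement full .gitignore `**` semantics.
matches() {
  case "$1" in
    *.test.ts|*.spec.js) echo excluded ;;
    *) echo reviewed ;;
  esac
}

a=$(matches "auth.test.ts")
b=$(matches "src/login.ts")
echo "$a $b"
```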
Review Caching
CommitAI automatically caches review results to avoid re-analyzing unchanged code:
# Reviews are cached automatically (default TTL: 24 hours)
cmt review # First run - performs AI analysis
cmt review # Second run - uses cached result if diff unchanged
# Force fresh review (skip cache)
cmt review --no-cache
# View cache statistics
cmt review cache-stats
# Clear cache manually
cmt review clear-cache

Cache behavior:
- Results cached based on diff content hash + code standards hash
- Default TTL: 24 hours (configurable)
- Cache stored in ~/.commit-ai-cache/
- Automatically cleans expired entries
- Separate cache entries for different code standards
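Conceptually, the cache key is a hash over the diff content combined with a hash of the active standards; a hypothetical sketch of that idea in shell (CommitAI's actual key derivation may differ):

```shell
# Illustrative cache-key derivation: hash(diff) combined with hash(standards).
# The sample contents below are placeholders, not real CommitAI data.
diff_content='diff --git a/x b/x'
standards_content='prefer const; avoid any'

diff_hash=$(printf '%s' "$diff_content" | sha256sum | cut -d' ' -f1)
std_hash=$(printf '%s' "$standards_content" | sha256sum | cut -d' ' -f1)

# Different diff or different standards -> different cache entry.
cache_key="${diff_hash}-${std_hash}"
echo "cache key: $cache_key"
```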
Configuration:
# Set cache TTL in hours (max 168 hours / 7 days)
cmt config set CMT_REVIEW_CACHE_TTL=48
# Disable caching completely
cmt config set CMT_REVIEW_CACHE_DISABLED=true

When cache is used:
- Same diff content (no code changes)
- Same code standards configuration
- Cache entry not expired
When cache is skipped:
- Code changes detected (diff hash changes)
- Code standards modified
- Cache expired or disabled
- --no-cache flag used
Workflow Integration
# Review before every commit
git add <files>
cmt review && cmt
# Automatic review with commit
cmt --review
# With quality threshold enforced
cmt config set CMT_REVIEW_MIN_SCORE=70
cmt --review
# Or use in a pre-commit hook
cmt review || exit 1

Generate PR Descriptions & Changelogs
CommitAI can generate pull request descriptions and changelogs from your git diffs.
PR Description Generation
Generate a comprehensive PR description comparing your current branch with a base branch:
# Compare with default base branch (main/master)
cmt pr
# Compare with specific branch
cmt pr develop
# Save to file
cmt pr develop --output pr-description.md

Generated PR descriptions include:
- Concise title (max 72 characters)
- Summary of changes
- Categorized changes (Features, Bug Fixes, Refactoring, etc.)
- Technical details and implementation notes
- Testing notes
- Breaking changes (if applicable)
The output is formatted in markdown and ready to paste into GitHub/GitLab/Bitbucket.
Changelog Generation
Generate changelog entries following the Keep a Changelog format:
# Generate changelog for version (compare base branch to HEAD)
cmt changelog v1.2.0
# Specify from and to refs
cmt changelog v1.2.0 v1.1.0 HEAD
# Save to CHANGELOG.md (default)
cmt changelog v1.2.0 --output CHANGELOG.md
# Append to existing changelog
cmt changelog v1.2.0 --append

Generated changelogs include:
- Version number and date
- Changes grouped by type (Added, Changed, Fixed, Deprecated, Removed, Security)
- Present tense, imperative mood
- Specific, actionable descriptions
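For reference, the Keep a Changelog format groups entries under standard headings; an illustrative (invented) entry in that shape:

```markdown
## [1.2.0] - 2024-06-01

### Added
- Support for local Ollama models

### Fixed
- Crash when no files are staged
```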
Example Workflow
# 1. Create feature branch and make changes
git checkout -b feature/new-dashboard
# ... make changes ...
# 2. Generate commit messages as you work
git add <files>
cmt
# 3. When ready for PR, generate description
cmt pr main --output pr-description.md
# 4. Create PR with generated description
gh pr create --title "Add new dashboard" --body-file pr-description.md
# 5. When releasing, generate changelog
cmt changelog v2.0.0 v1.9.0 HEAD

Provider-Specific Configuration
CommitAI supports multiple AI providers. Below are detailed setup instructions for each provider.
OpenAI (Default)
Required Environment Variables:
CMT_AI_PROVIDER=openai
CMT_API_KEY=sk-... # Get from https://platform.openai.com/api-keys

Recommended Models:
- gpt-4o-mini (default, fastest, cost-effective)
- gpt-4o (most capable)
- gpt-3.5-turbo (budget option)
Token Limits: Configure based on your chosen model (see OpenAI pricing)
Anthropic Claude
Required Environment Variables:
CMT_AI_PROVIDER=anthropic
CMT_API_KEY=sk-ant-... # Get from https://console.anthropic.com/
CMT_MODEL=claude-3-5-sonnet-20240620

Available Models:
- claude-3-5-sonnet-20240620 (recommended, balanced performance)
- claude-3-opus-20240229 (most capable)
- claude-3-haiku-20240307 (fastest, budget-friendly)
Token Limits: Claude models support 200K tokens input by default
Google Gemini
Required Environment Variables:
CMT_AI_PROVIDER=gemini
CMT_API_KEY=AIza... # Get from https://makersuite.google.com/app/apikey
CMT_MODEL=gemini-1.5-flash

Available Models:
- gemini-1.5-flash (recommended, fast and cost-effective)
- gemini-1.5-pro (most capable)
- gemini-1.0-pro (stable)
Token Limits: Gemini 1.5 models support up to 1M tokens input
Azure OpenAI
Required Environment Variables:
CMT_AI_PROVIDER=azure
CMT_API_KEY=your-azure-key
CMT_API_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-15-preview
CMT_MODEL=your-deployment-name

Setup: Requires an Azure OpenAI service deployment. See the Azure OpenAI docs.
Groq
Required Environment Variables:
CMT_AI_PROVIDER=groq
CMT_API_KEY=gsk_... # Get from https://console.groq.com/keys
CMT_MODEL=llama3-70b-8192

Available Models:
- llama3-70b-8192 (recommended, no daily token limit)
- llama-3.1-70b-versatile (latest)
- llama3-8b-8192 (fastest)
- gemma2-9b-it (Google's Gemma)
Note: Groq provides extremely fast inference with generous rate limits
Mistral AI
Required Environment Variables:
CMT_AI_PROVIDER=mistral
CMT_API_KEY=... # Get from https://console.mistral.ai/
CMT_MODEL=ministral-8b-latest

Recommended Models:
- ministral-8b-latest (fast, cost-effective)
- mistral-large-latest (most capable)
- codestral-latest (optimized for code)
DeepSeek
Required Environment Variables:
CMT_AI_PROVIDER=deepseek
CMT_API_KEY=... # Get from https://platform.deepseek.com/
CMT_MODEL=deepseek-chat

Available Models:
- deepseek-chat (general purpose)
- deepseek-coder (optimized for code)
- deepseek-reasoner (enhanced reasoning)
Ollama (Local)
Setup:
- Install Ollama from https://ollama.ai/
- Pull a model: ollama pull llama3:8b
- Configure CommitAI:
CMT_AI_PROVIDER=ollama
CMT_MODEL=llama3:8b
CMT_API_URL=http://localhost:11434/api/chat # Optional, default

Popular Models:
- llama3:8b (recommended, fast)
- mistral (balanced)
- codellama:7b (code-focused)
Remote Ollama: Set CMT_API_URL to your remote Ollama endpoint
MLX (Apple Silicon Local)
Setup:
- Install MLX LM from https://github.com/ml-explore/mlx-examples
- Start the server
- Configure CommitAI:
CMT_AI_PROVIDER=mlx
CMT_API_URL=http://localhost:8080
CMT_MODEL=your-mlx-model

Note: Optimized for Apple Silicon (M1/M2/M3)
Flowise
Setup: For custom Flowise deployments:
CMT_AI_PROVIDER=flowise
CMT_API_URL=http://localhost:3000/api/v1/prediction/your-chatflow-id
CMT_API_KEY=your-flowise-api-key # If authentication enabled

Test Provider
For development and testing:
CMT_AI_PROVIDER=test
CMT_TEST_MOCK_TYPE=commit-message # or 'commit-message-description'

Note: Returns mock responses without calling any AI API
Ignore Files from AI Processing
Prevent CommitAI from processing certain files by creating a .commit-aiignore file:
path/to/large-asset.zip
**/*.jpg

By default, CommitAI ignores files like *-lock.* and *.lock.
Set Up CommitAI as a Git Hook
CommitAI can integrate as a Git prepare-commit-msg hook for seamless commit message generation within your IDE.
To enable:
cmt hook set

To disable:

cmt hook unset

To use the hook:

git add <files...>
git commit

Use CommitAI in GitHub Actions (BETA) 🔥
CommitAI can enhance commit messages automatically when pushing to a remote repository.
Safety Rails
The GitHub Action includes safety rails to prevent accidental force pushes to protected branches:
- enable_force_push: Must be explicitly set to true to enable force pushing (default: false)
- allowed_branches: Comma-separated list of branches to allow (default: all branches)
- require_confirmation: Issues warnings when force pushing to protected branches (default: true)
Protected branches (main, master, production, prod) require explicit opt-in for force pushing.
Basic Setup (Safe Mode - No Force Push)
Create .github/workflows/commit-ai.yml:
name: 'CommitAI Action'
on:
  push:
    branches: [develop, feature/*] # Only run on non-protected branches
jobs:
  commit-ai:
    runs-on: ubuntu-latest
    permissions: write-all
    steps:
      - name: Set Up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - uses: actions/checkout@v3
      - uses: MantisWare/[email protected]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # enable_force_push: false (default - rebases locally but doesn't push)
          allowed_branches: 'develop,feature/*'
        env:
          CMT_API_KEY: ${{ secrets.CMT_API_KEY }}
          CMT_MODEL: gpt-4o-mini
          CMT_LANGUAGE: en

Advanced Setup (With Force Push)
⚠️ WARNING: Force pushing rewrites Git history. Only use on non-protected branches or with team agreement.
name: 'CommitAI Action'
on:
  push:
    branches: [develop] # Specific branch only
jobs:
  commit-ai:
    runs-on: ubuntu-latest
    permissions: write-all
    steps:
      - name: Set Up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - uses: actions/checkout@v3
      - uses: MantisWare/[email protected]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          enable_force_push: true # Explicitly enable force push
          allowed_branches: 'develop' # Only allow on develop branch
          require_confirmation: true # Warn on protected branches
        env:
          CMT_API_KEY: ${{ secrets.CMT_API_KEY }}
          CMT_MODEL: gpt-4o-mini
          CMT_LANGUAGE: en

Important: Ensure the OpenAI API key is stored as a GitHub secret (CMT_API_KEY).
Payment Information
CommitAI uses OpenAI API, and you are responsible for associated costs.
By default, it uses gpt-4o-mini, which should not exceed $0.10 per workday for typical usage. Upgrading to gpt-4o improves quality but increases cost.
