
@mantisware/commit-ai

v1.0.10

Published

Create amazing commits in just seconds. Say farewell to boring commits with AI! 🤯🔥

Readme


Install CommitAI as a CLI Tool

CommitAI generates meaningful commit messages from the command line via the cmt command. In a couple of seconds, your staged changes are committed with an AI-generated message.

Installation

  1. Install CommitAI globally for use in any repository:

    pnpm add -g @mantisware/commit-ai
  2. Obtain an API key from OpenAI or another supported LLM provider. Ensure your OpenAI account has an active payment method for API access.

  3. Configure CommitAI with your API key:

    cmt config set CMT_API_KEY=<your_api_key>

    The API key is stored securely in ~/.commit-ai.

Usage

To generate a commit message for staged changes, run:

git add <files...>
cmt

Running git add first is optional; cmt will stage changes for you automatically.

Running Locally with Ollama

You can also run CommitAI with a local model through Ollama:

  • Install and start Ollama.
  • Pull a model once, e.g. ollama pull llama3:8b.
  • In your project directory, configure CommitAI:
cmt config set CMT_AI_PROVIDER='ollama' CMT_MODEL='llama3:8b'

By default, the model used is mistral.

If Ollama runs on another machine or within Docker with GPU support, update the API endpoint:

cmt config set CMT_API_URL='http://192.168.1.10:11434/api/chat'

Replace 192.168.1.10 with the appropriate endpoint.

Running DeepSeek Locally with LM Studio

You can also run CommitAI with a local model through LM Studio:

  • Install and start LM Studio.
  • Download a DeepSeek Coder model in LM Studio, e.g. deepseek-coder-v2-lite-instruct (or deepseek-coder-v2-lite-instruct-mlx on macOS).
  • Configure CommitAI (settings are stored in ~/.commit-ai):
cmt config set CMT_MODEL='deepseek-coder-v2-lite-instruct-mlx' CMT_API_URL='http://127.0.0.1:1234' CMT_AI_PROVIDER='deepseek'

Replace http://127.0.0.1:1234 with the appropriate endpoint provided by LM Studio.

Configuration Options

Local Repository Configuration

Add CommitAI configurations to a .env file in your repository:

CMT_AI_PROVIDER=<openai (default), anthropic, azure, gemini, groq, mistral, deepseek, ollama, mlx, flowise, test>
CMT_API_KEY=<your OpenAI API token> # or another LLM provider API key
CMT_API_URL=<optional proxy path to OpenAI API>
CMT_TOKENS_MAX_INPUT=40960  # Maximum input tokens (optional, provider/model specific)
CMT_TOKENS_MAX_OUTPUT=4096  # Maximum output tokens (optional, provider/model specific)
CMT_DESCRIPTION=false  # Append a brief description of changes (default: false)
CMT_EMOJI=false  # Enable GitMoji support (default: false)
CMT_MODEL='gpt-4o-mini'  # AI model (default: 'gpt-4o-mini' for openai)
CMT_LANGUAGE='en'  # Language preference (default: 'en')
CMT_MESSAGE_TEMPLATE_PLACEHOLDER='$msg'  # Message template placeholder
CMT_PROMPT_MODULE='conventional-commit'  # Use 'conventional-commit' or '@commitlint'
CMT_ONE_LINE_COMMIT=false  # Single-line commit messages
CMT_WHY=false  # Focus description on WHY changes were made (vs WHAT changes are)
CMT_SML=false  # Generate condensed single-line messages per file with filename, line numbers, and brief description
CMT_DEBUG=false  # Enable debug logging for troubleshooting
CMT_MAX_FILES=50  # Maximum number of files allowed in a single commit (optional)
CMT_MAX_DIFF_BYTES=102400  # Maximum diff size in bytes (100 KB, optional)
CMT_REVIEW_MIN_SCORE=70  # Minimum code quality score (0-100) required when using --review flag (optional)

Global Configuration

Global settings are stored in ~/.commit-ai and configured with:

cmt config set CMT_MODEL=gpt-4o

Local settings take precedence over global configurations.
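
Since local settings override global ones, a repository can pin its own model while the global default applies everywhere else. A purely illustrative example (the values here are invented):

```shell
# Global default, set once (stored in ~/.commit-ai):
#   cmt config set CMT_MODEL=gpt-4o
#
# Repository override (.env in the repo root):
#   CMT_MODEL=gpt-4o-mini
#
# Inside this repository, cmt resolves CMT_MODEL to gpt-4o-mini;
# in any other repository it falls back to the global gpt-4o.
```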

Enable Full GitMoji Support

By default, CommitAI limits GitMoji to 10 emojis (🐛✨📝🚀✅♻️⬆️🔧🌐💡) to optimize API usage. To enable full GitMoji support:

cmt --fgm

Ensure CMT_EMOJI is set to true.

Skip Commit Confirmation

To commit changes without requiring manual confirmation:

cmt --yes

Advanced CLI Options

Dry Run Mode - Generate commit message without actually committing:

cmt --dry-run

Edit Before Committing - Open generated message in your $EDITOR before committing:

cmt --edit  # or -e

Skip Push Prompts - Commit without being prompted to push:

cmt --no-push

Stage All & Commit - Non-interactively stage all files and commit:

cmt --stage-all  # or -a

These flags can be combined:

cmt --stage-all --edit --no-push

Single-line Multi-file Log (SML Mode)

For large commits where you want a quick overview, enable SML mode to generate condensed per-file messages:

cmt config set CMT_SML=true

Example output format:

src/commands/config.ts:L29-L32 - Added CMT_SML configuration option
src/prompts.ts:L122-L125 - Implemented SML instruction generator
README.md:L105 - Documented SML feature

Each line shows:

  • Filename with relative path
  • Line numbers or ranges where changes occurred
  • Brief description of what changed

This is particularly useful for:

  • Code reviews of large changesets
  • Quick scanning of multi-file commits
  • Understanding the scope of changes at a glance

Commit Size Guardrails

Prevent accidentally committing too many files or too much code at once by setting limits:

Limit Maximum Files - Reject commits with more than N files:

cmt config set CMT_MAX_FILES=50

Limit Maximum Diff Size - Reject commits when diff exceeds N bytes:

cmt config set CMT_MAX_DIFF_BYTES=102400  # 100 KB

When a limit is exceeded, CommitAI will display a clear error with actionable suggestions:

  • Split changes into smaller, focused commits
  • Unstage some files
  • Adjust the configured limits

These guardrails help maintain code review quality and encourage atomic commits.
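
Conceptually, the guardrail check amounts to comparing the staged file count and diff size against the configured limits. A minimal sketch of that logic in plain shell (an illustration only, not CommitAI's actual implementation; the function name and variables are made up):

```shell
#!/bin/sh
# Illustrative approximation of CommitAI's size guardrails.
# check_commit_size FILES BYTES [MAX_FILES] [MAX_BYTES]
# Prints "ok" or "limit exceeded" and sets the exit status accordingly.
check_commit_size() {
  files=$1
  bytes=$2
  max_files=${3:-50}       # mirrors the CMT_MAX_FILES example default
  max_bytes=${4:-102400}   # mirrors the CMT_MAX_DIFF_BYTES example (100 KB)
  if [ "$files" -gt "$max_files" ] || [ "$bytes" -gt "$max_bytes" ]; then
    echo "limit exceeded"
    return 1
  fi
  echo "ok"
}

# In a real hook you would derive the inputs from git, e.g.:
#   files=$(git diff --cached --name-only | wc -l)
#   bytes=$(git diff --cached | wc -c)
check_commit_size 12 4096
```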

AI-Powered Code Review

CommitAI includes a comprehensive code review feature that analyzes your staged changes for security vulnerabilities, performance issues, code quality, and best practices.

Running a Code Review

Analyze your staged changes before committing:

# Stage your changes
git add <files>

# Run code review
cmt review

Review Categories

The AI reviewer analyzes code across multiple dimensions:

  • Security: SQL injection, XSS, authentication issues, exposed secrets
  • Performance: Inefficient algorithms, memory leaks, bottlenecks
  • Best Practices: Design patterns, language conventions, industry standards
  • Code Quality: Readability, maintainability, naming conventions
  • Bugs & Edge Cases: Potential bugs, race conditions, null pointers
  • Style: Formatting consistency, code organization

Review Output

Each review provides:

  • Summary: Brief overview of code quality
  • Overall Score: 0-100 quality score
  • Recommendation:
    • APPROVED (80-100): Ready to commit
    • REVIEW SUGGESTED (50-79): Address findings
    • BLOCKED (0-49): Fix critical issues
  • Detailed Findings: Categorized issues with severity levels, descriptions, and suggestions

Example Output

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Code Review Results                                                                                β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                                                    β”‚
β”‚  The code introduces a new authentication endpoint with good structure but has a critical         β”‚
β”‚  security vulnerability related to password handling and lacks input validation.                  β”‚
β”‚                                                                                                    β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Overall Quality Score: 65/100                                                                     β”‚
β”‚  Recommendation: ! REVIEW SUGGESTED - Address findings                                            β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Findings (3)                                                                                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                                                    β”‚
β”‚  βœ– SECURITY - Plain text password storage                                                         β”‚
β”‚    πŸ“ src/auth/login.ts:L45                                                                        β”‚
β”‚    Passwords are being stored in plain text without hashing. This is a critical security          β”‚
β”‚    vulnerability that exposes user credentials.                                                   β”‚
β”‚    πŸ’‘ Suggestion:                                                                                  β”‚
β”‚    Use bcrypt or argon2 to hash passwords before storage                                          β”‚
β”‚                                                                                                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

JSON Output

For integration with CI/CD pipelines:

cmt review --json > review-results.json

Exit Codes

  • 0: Review passed (APPROVED or REVIEW SUGGESTED)
  • 1: Critical issues found (BLOCKED) or an error occurred

Automatic Review Before Commit

Use the --review (or -r) flag to automatically run code review before committing:

# Stage files and commit with automatic review
git add <files>
cmt --review

# Or use the short flag
cmt -r

The review will run automatically, and you'll be prompted to continue or abort based on the results:

  • APPROVED: Automatically proceeds with commit
  • REVIEW SUGGESTED: Prompts you to continue or abort
  • BLOCKED: Prompts you (defaulting to abort) due to critical issues

Quality Score Threshold

Set a minimum quality score that code must achieve before committing:

# Require minimum score of 70
cmt config set CMT_REVIEW_MIN_SCORE=70

When set, commits with scores below the threshold will automatically be blocked:

$ cmt --review
✖ Code quality score (65) is below the minimum threshold (70).
Please improve the code or adjust the threshold: cmt config set CMT_REVIEW_MIN_SCORE <number>

This is useful for:

  • Enforcing code quality standards across teams
  • Preventing commits with critical security or performance issues
  • Maintaining consistent quality in CI/CD pipelines

Code Standards Configuration

Configure project-specific code standards to get more targeted review feedback:

# Import from popular style guides
cmt standards import

# Available style guides:
# - React + TypeScript (Airbnb)
# - Angular + TypeScript
# - Vue 3 + TypeScript
# - Node.js + Express
# - Python (PEP 8)
# - Java (Google Style)
# - Go (Golang)
# - Rust
# - TypeScript (Strict)
# - C# (.NET)

# View current standards
cmt standards view

# Create custom standards interactively
cmt standards set

How it works:

  1. Standards are stored in .commit-ai-standards file in your repository root
  2. When you run cmt review or cmt --review, the AI uses these standards for analysis
  3. Review findings will specifically call out violations of your configured standards
  4. You'll be prompted to configure standards on first review (can proceed without them)

Example workflow:

# First time setup
cmt standards import  # Choose React + TypeScript
git add .commit-ai-standards
git commit -m "Add code review standards"

# Now reviews use your standards
cmt review

Excluding Files from Review

Create a .commit-ai-review-ignore file in your repository root to exclude specific files or patterns from code review:

# .commit-ai-review-ignore
*.test.ts
*.spec.js
test/**
docs/**
*.md
generated/**
*.lock

The syntax is the same as .gitignore. Files matching these patterns will be excluded from AI analysis but still included in commits.

Use cases:

  • Exclude test files from review to focus on production code
  • Skip generated code or vendor files
  • Ignore documentation files to reduce AI token usage
  • Exclude files that don't need quality checks

Note: This only affects code review (cmt review and cmt --review). For excluding files from commit message generation, use .commit-aiignore instead.
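
As a rough illustration of how gitignore-style patterns exclude files, here is a simplified shell-glob matcher (this is not CommitAI's actual code and does not implement full .gitignore semantics; the is_ignored helper is invented for the example):

```shell
#!/bin/sh
# Simplified gitignore-style matching using shell globs.
# is_ignored FILE PATTERN...  -- exit 0 if FILE matches any pattern.
is_ignored() {
  file=$1
  shift
  for pat in "$@"; do
    # Unquoted $pat makes the case statement treat it as a glob pattern.
    case "$file" in
      $pat) return 0 ;;
    esac
  done
  return 1
}

if is_ignored "app.test.ts" "*.test.ts" "docs/*"; then
  echo "excluded from review"
else
  echo "reviewed"
fi
```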

Review Caching

CommitAI automatically caches review results to avoid re-analyzing unchanged code:

# Reviews are cached automatically (default TTL: 24 hours)
cmt review  # First run - performs AI analysis
cmt review  # Second run - uses cached result if diff unchanged

# Force fresh review (skip cache)
cmt review --no-cache

# View cache statistics
cmt review cache-stats

# Clear cache manually
cmt review clear-cache

Cache behavior:

  • Results cached based on diff content hash + code standards hash
  • Default TTL: 24 hours (configurable)
  • Cache stored in ~/.commit-ai-cache/
  • Automatically cleans expired entries
  • Separate cache entries for different code standards
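
The caching idea can be sketched as hashing the staged diff together with the code standards and using the digest as the cache key, so an identical diff under identical standards maps to the same entry. An illustrative sketch (not CommitAI's actual code; cache_key is invented, and sha256sum is assumed available as on GNU coreutils):

```shell
#!/bin/sh
# Illustrative cache-key derivation: same diff + same standards => same key.
cache_key() {
  printf '%s\n---\n%s' "$1" "$2" | sha256sum | awk '{print $1}'
}

k1=$(cache_key "diff A" "standards v1")
k2=$(cache_key "diff A" "standards v1")
k3=$(cache_key "diff B" "standards v1")

[ "$k1" = "$k2" ] && echo "cache hit"    # unchanged diff reuses the cached review
[ "$k1" = "$k3" ] || echo "cache miss"   # any change produces a new key
```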

Configuration:

# Set cache TTL in hours (max 168 hours / 7 days)
cmt config set CMT_REVIEW_CACHE_TTL=48

# Disable caching completely
cmt config set CMT_REVIEW_CACHE_DISABLED=true

When cache is used:

  • Same diff content (no code changes)
  • Same code standards configuration
  • Cache entry not expired

When cache is skipped:

  • Code changes detected (diff hash changes)
  • Code standards modified
  • Cache expired or disabled
  • --no-cache flag used

Workflow Integration

# Review before every commit
git add <files>
cmt review && cmt

# Automatic review with commit
cmt --review

# With quality threshold enforced
cmt config set CMT_REVIEW_MIN_SCORE=70
cmt --review

# Or use in a pre-commit hook
cmt review || exit 1

Generate PR Descriptions & Changelogs

CommitAI can generate pull request descriptions and changelogs from your git diffs.

PR Description Generation

Generate a comprehensive PR description comparing your current branch with a base branch:

# Compare with default base branch (main/master)
cmt pr

# Compare with specific branch
cmt pr develop

# Save to file
cmt pr develop --output pr-description.md

Generated PR descriptions include:

  • Concise title (max 72 characters)
  • Summary of changes
  • Categorized changes (Features, Bug Fixes, Refactoring, etc.)
  • Technical details and implementation notes
  • Testing notes
  • Breaking changes (if applicable)

The output is formatted in markdown and ready to paste into GitHub/GitLab/Bitbucket.
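
The generated description is roughly shaped like the following markdown (a fabricated illustration of the structure, not actual tool output):

```markdown
# Add user dashboard with activity feed

## Summary
Introduces a dashboard page showing recent account activity.

## Changes
### Features
- Add dashboard route and activity feed component

### Bug Fixes
- Correct date formatting in activity timestamps

## Technical Details
- New dashboard route; feed data fetched via the existing API client

## Testing Notes
- Unit tests added for the feed component
```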

Changelog Generation

Generate changelog entries following the Keep a Changelog format:

# Generate changelog for version (compare base branch to HEAD)
cmt changelog v1.2.0

# Specify from and to refs
cmt changelog v1.2.0 v1.1.0 HEAD

# Save to CHANGELOG.md (default)
cmt changelog v1.2.0 --output CHANGELOG.md

# Append to existing changelog
cmt changelog v1.2.0 --append

Generated changelogs include:

  • Version number and date
  • Changes grouped by type (Added, Changed, Fixed, Deprecated, Removed, Security)
  • Present tense, imperative mood
  • Specific, actionable descriptions
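
For reference, Keep a Changelog entries have this shape (an illustrative fragment; the version, date, and items are invented):

```markdown
## [1.2.0] - 2025-01-15

### Added
- Dashboard page with activity feed

### Fixed
- Date formatting in activity timestamps

### Security
- Upgrade a vulnerable dependency (example entry)
```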

Example Workflow

# 1. Create feature branch and make changes
git checkout -b feature/new-dashboard

# ... make changes ...

# 2. Generate commit messages as you work
git add <files>
cmt

# 3. When ready for PR, generate description
cmt pr main --output pr-description.md

# 4. Create PR with generated description
gh pr create --title "Add new dashboard" --body-file pr-description.md

# 5. When releasing, generate changelog
cmt changelog v2.0.0 v1.9.0 HEAD

Provider-Specific Configuration

CommitAI supports multiple AI providers. Below are detailed setup instructions for each provider.

OpenAI (Default)

Required Environment Variables:

CMT_AI_PROVIDER=openai
CMT_API_KEY=sk-...  # Get from https://platform.openai.com/api-keys

Recommended Models:

  • gpt-4o-mini (default, fastest, cost-effective)
  • gpt-4o (most capable)
  • gpt-3.5-turbo (budget option)

Token Limits: Configure based on your chosen model (see OpenAI pricing)


Anthropic Claude

Required Environment Variables:

CMT_AI_PROVIDER=anthropic
CMT_API_KEY=sk-ant-...  # Get from https://console.anthropic.com/
CMT_MODEL=claude-3-5-sonnet-20240620

Available Models:

  • claude-3-5-sonnet-20240620 (recommended, balanced performance)
  • claude-3-opus-20240229 (most capable)
  • claude-3-haiku-20240307 (fastest, budget-friendly)

Token Limits: Claude models support 200K tokens input by default


Google Gemini

Required Environment Variables:

CMT_AI_PROVIDER=gemini
CMT_API_KEY=AIza...  # Get from https://makersuite.google.com/app/apikey
CMT_MODEL=gemini-1.5-flash

Available Models:

  • gemini-1.5-flash (recommended, fast and cost-effective)
  • gemini-1.5-pro (most capable)
  • gemini-1.0-pro (stable)

Token Limits: Gemini 1.5 models support up to 1M tokens input


Azure OpenAI

Required Environment Variables:

CMT_AI_PROVIDER=azure
CMT_API_KEY=your-azure-key
CMT_API_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-15-preview
CMT_MODEL=your-deployment-name

Setup: Requires an Azure OpenAI service deployment. See Azure OpenAI docs


Groq

Required Environment Variables:

CMT_AI_PROVIDER=groq
CMT_API_KEY=gsk_...  # Get from https://console.groq.com/keys
CMT_MODEL=llama3-70b-8192

Available Models:

  • llama3-70b-8192 (recommended, no daily token limit)
  • llama-3.1-70b-versatile (latest)
  • llama3-8b-8192 (fastest)
  • gemma2-9b-it (Google's Gemma)

Note: Groq provides extremely fast inference with generous rate limits


Mistral AI

Required Environment Variables:

CMT_AI_PROVIDER=mistral
CMT_API_KEY=...  # Get from https://console.mistral.ai/
CMT_MODEL=ministral-8b-latest

Recommended Models:

  • ministral-8b-latest (fast, cost-effective)
  • mistral-large-latest (most capable)
  • codestral-latest (optimized for code)

DeepSeek

Required Environment Variables:

CMT_AI_PROVIDER=deepseek
CMT_API_KEY=...  # Get from https://platform.deepseek.com/
CMT_MODEL=deepseek-chat

Available Models:

  • deepseek-chat (general purpose)
  • deepseek-coder (optimized for code)
  • deepseek-reasoner (enhanced reasoning)

Ollama (Local)

Setup:

  1. Install Ollama from https://ollama.ai/
  2. Pull a model: ollama pull llama3:8b
  3. Configure CommitAI:
CMT_AI_PROVIDER=ollama
CMT_MODEL=llama3:8b
CMT_API_URL=http://localhost:11434/api/chat  # Optional, default

Popular Models:

  • llama3:8b (recommended, fast)
  • mistral (balanced)
  • codellama:7b (code-focused)

Remote Ollama: Set CMT_API_URL to your remote Ollama endpoint


MLX (Apple Silicon Local)

Setup:

  1. Install MLX LM from https://github.com/ml-explore/mlx-examples
  2. Start the server
  3. Configure CommitAI:
CMT_AI_PROVIDER=mlx
CMT_API_URL=http://localhost:8080
CMT_MODEL=your-mlx-model

Note: Optimized for Apple Silicon (M1/M2/M3)


Flowise

Setup: For custom Flowise deployments:

CMT_AI_PROVIDER=flowise
CMT_API_URL=http://localhost:3000/api/v1/prediction/your-chatflow-id
CMT_API_KEY=your-flowise-api-key  # If authentication enabled

Test Provider

For development and testing:

CMT_AI_PROVIDER=test
CMT_TEST_MOCK_TYPE=commit-message  # or 'commit-message-description'

Note: Returns mock responses without calling any AI API

Ignore Files from AI Processing

Prevent CommitAI from processing certain files by creating a .commit-aiignore file:

path/to/large-asset.zip
**/*.jpg

By default, CommitAI ignores files like *-lock.* and *.lock.

Set Up CommitAI as a Git Hook

CommitAI can integrate as a Git prepare-commit-msg hook for seamless commit message generation within your IDE.

To enable:

cmt hook set

To disable:

cmt hook unset

To use the hook:

git add <files...>
git commit

Use CommitAI in GitHub Actions (BETA) 🔥

CommitAI can enhance commit messages automatically when pushing to a remote repository.

Safety Rails

The GitHub Action includes safety rails to prevent accidental force pushes to protected branches:

  • enable_force_push: Must be explicitly set to true to enable force pushing (default: false)
  • allowed_branches: Comma-separated list of branches to allow (default: all branches)
  • require_confirmation: Issues warnings when force pushing to protected branches (default: true)

Protected branches (main, master, production, prod) require explicit opt-in for force pushing.

Basic Setup (Safe Mode - No Force Push)

Create .github/workflows/commit-ai.yml:

name: 'CommitAI Action'

on:
  push:
    branches: [develop, feature/*]  # Only run on non-protected branches

jobs:
  commit-ai:
    runs-on: ubuntu-latest
    permissions: write-all
    steps:
      - name: Set Up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - uses: actions/checkout@v3
      - uses: MantisWare/[email protected]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # enable_force_push: false (default - rebases locally but doesn't push)
          allowed_branches: 'develop,feature/*'
        env:
          CMT_API_KEY: ${{ secrets.CMT_API_KEY }}
          CMT_MODEL: gpt-4o-mini
          CMT_LANGUAGE: en

Advanced Setup (With Force Push)

⚠️ WARNING: Force pushing rewrites Git history. Only use on non-protected branches or with team agreement.

name: 'CommitAI Action'

on:
  push:
    branches: [develop]  # Specific branch only

jobs:
  commit-ai:
    runs-on: ubuntu-latest
    permissions: write-all
    steps:
      - name: Set Up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - uses: actions/checkout@v3
      - uses: MantisWare/[email protected]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          enable_force_push: true  # Explicitly enable force push
          allowed_branches: 'develop'  # Only allow on develop branch
          require_confirmation: true  # Warn on protected branches
        env:
          CMT_API_KEY: ${{ secrets.CMT_API_KEY }}
          CMT_MODEL: gpt-4o-mini
          CMT_LANGUAGE: en

Important: Ensure the OpenAI API key is stored as a GitHub secret (CMT_API_KEY).

Payment Information

CommitAI uses the OpenAI API by default, and you are responsible for the associated costs. The default model, gpt-4o-mini, is inexpensive; typical usage should not exceed $0.10 per workday. Upgrading to gpt-4o improves quality but increases cost.