bragdoc CLI
Automatically track, score, and summarize your professional achievements from Git commits.
The bragdoc CLI intelligently analyzes your Git repositories to extract and document your professional contributions. It automatically identifies meaningful work from commit messages, code changes, and project history, then scores and summarizes your achievements to build a comprehensive professional brag document.
✨ Key Features
- 🤖 Intelligent Achievement Extraction: Automatically identifies and scores meaningful work from Git commits
- 🔌 Multi-Provider LLM Support: Choose from OpenAI, Anthropic, Google, DeepSeek, Ollama, or any OpenAI-compatible API
- 📅 Scheduled Automation: Set up automatic extractions on any schedule (hourly, daily, custom)
- 🔍 Multi-Project Support: Track achievements across unlimited projects simultaneously
- ⚡ Smart Caching: Processes only new commits, avoiding duplicate work
- 🌍 Cross-Platform Scheduling: Native system integration (cron, Task Scheduler, systemd, LaunchAgent)
- 📊 Achievement Scoring: AI-powered analysis ranks the impact and importance of your work
- 📝 Professional Summaries: Generates polished descriptions of your contributions
- 🔒 Privacy Options: Run locally with Ollama - no data leaves your machine
Perfect for developers who want to maintain an up-to-date record of their professional accomplishments without manual effort.
Installation
npm install -g @bragdoc/cli
Optional: GitHub CLI
For enhanced GitHub integration (PRs, issues, remote commits), install the GitHub CLI:
# macOS
brew install gh
# Windows
winget install --id GitHub.cli
# Linux (Debian/Ubuntu)
sudo apt install gh
Then authenticate:
gh auth login
Without gh, the CLI uses the Git connector for local repository extraction only.
Quick Start
- Authenticate with bragdoc:
bragdoc login
- Initialize your project:
bragdoc init
- Extract achievements from commits:
bragdoc extract
Commands
Authentication (auth)
Manage your bragdoc authentication.
# Login to bragdoc
bragdoc auth login # aliased as `login`
# Check authentication status
bragdoc auth status
# Logout from bragdoc
bragdoc auth logout # aliased as `logout`
Project Management
Initialize and manage projects that bragdoc will track.
# Initialize a project (syncs with web app if authenticated)
cd /path/to/repo
bragdoc init
# You'll be prompted to:
# 1. Choose extraction schedule (no/hourly/daily/custom)
# 2. Confirm automatic system installation (crontab/Task Scheduler)
# Or use the projects command (init is an alias for projects add)
bragdoc projects add [path]
# List configured projects (shows schedules)
bragdoc projects list
# Update project settings
bragdoc projects update [path] --name "New Name" --max-commits 200
# Update project schedule (automatically updates system scheduling)
bragdoc projects update [path] --schedule
# Remove a project
bragdoc projects remove [path]
# Enable/disable project tracking
bragdoc projects enable [path]
bragdoc projects disable [path]
GitHub Source Configuration
When you have the GitHub CLI (gh) installed and authenticated, bragdoc init offers the option to use the GitHub connector for richer extraction:
cd /path/to/repo
bragdoc init
# Select "GitHub" when prompted for extraction method
# Configure which data types to extract:
# - Commits (default: yes)
# - Merged PRs (default: yes)
# - Closed issues (default: no)
# - File-level stats (default: yes)
GitHub vs Git Connector:
| Feature | Git Connector | GitHub Connector |
|---------|---------------|------------------|
| Commits | Yes | Yes |
| Pull Requests | No | Yes |
| Issues | No | Yes |
| Full diffs | Yes | No |
| Local clone required | Yes | No |
| Offline support | Yes | No |
Achievement Extraction (extract)
Extract achievements from configured sources (Git or GitHub).
# Extract from current project
bragdoc extract
# Extract from specific branch
bragdoc extract --branch main
# Limit number of commits
bragdoc extract --max-commits 50
# Dry run to preview what would be extracted
bragdoc extract --dry-run
WIP Extraction (wip)
Extract uncommitted work-in-progress from your current project. This command analyzes git status and diffs to generate a summary of changes, but does not upload to the API.
# Extract WIP from current directory
bragdoc wip
# Extract with verbose logging
bragdoc wip --log
Note: This command is useful for testing WIP extraction locally. For automated standup WIP extraction, use bragdoc standup wip.
Standup WIP Automation (standup)
Automatically extract achievements and work-in-progress summaries before your daily standup meetings. The CLI can extract from multiple projects and submit to your standup in one command.
Setup
First, create a standup in the web app at https://app.bragdoc.ai/standups (takes <30 seconds). Then enroll your projects:
# From within a project directory - enroll single project
cd /path/to/project
bragdoc standup enable
# From anywhere - enroll multiple projects
bragdoc standup enable
# You'll see a checkbox list to select multiple projects
When you enable a standup, the CLI will:
- Fetch your standups from the web app
- Let you select which standup to configure
- Automatically set up system scheduling (cron/Task Scheduler)
- Extract achievements and WIP 10 minutes before your standup time
Commands
# Enable standup WIP extraction
bragdoc standup enable
# Check standup configuration
bragdoc standup status
# Manually extract and submit WIP for all enrolled projects
bragdoc standup wip
# Manually extract for specific standup (if you have multiple)
bragdoc standup wip --id <standupId>
# Disable standup for current project
cd /path/to/project
bragdoc standup disable
How It Works
Automatic Mode (Scheduled):
- 10 minutes before your standup time, the CLI automatically:
- Extracts new achievements from git commits (all enrolled projects)
- Extracts work-in-progress summaries from uncommitted changes (all enrolled projects)
- Submits combined WIP to your standup in the web app
Manual Mode:
- Run bragdoc standup wip anytime to extract and submit immediately
- Useful for testing or ad-hoc updates
Multi-Project Support:
- Enroll multiple projects in a single standup
- WIP extraction runs concurrently across all projects
- Combined summary includes all projects with clear headers
Example Workflow
# 1. Set up your first project
cd ~/work/frontend-app
bragdoc init --name "Frontend App"
bragdoc standup enable
# Select your standup from the list
# 2. Add more projects to the same standup
cd ~/work/backend-api
bragdoc init --name "Backend API"
bragdoc standup enable
# Select the same standup
# 3. Check configuration
bragdoc standup status
# Shows:
# - Standup name and schedule
# - Number of enrolled projects
# - List of project names
# 4. Test manual extraction
bragdoc standup wip
# Extracts from both projects and submits to web app
LLM Configuration (llm)
Manage your LLM provider configuration for achievement extraction.
# Show current LLM configuration
bragdoc llm show
# Configure or reconfigure LLM provider
bragdoc llm set
The llm set command provides an interactive wizard that guides you through:
- Selecting your LLM provider (OpenAI, Anthropic, Google, DeepSeek, Ollama, OpenAI-compatible)
- Entering your API key (if required)
- Choosing your model (with sensible defaults)
- Configuring base URL (for Ollama and OpenAI-compatible providers)
When to use:
- After initial installation to set up your preferred LLM provider
- To switch between providers (e.g., from cloud to local Ollama)
- To update API keys or model versions
- To check which provider is currently configured
Note: You don't need to run this command manually if you're going through the normal flow - bragdoc init, bragdoc extract, and other commands will prompt you to configure the LLM provider if needed.
Monitoring Your Schedules
Check your automatic extractions using platform-specific tools:
Linux/macOS
# View your scheduled extractions
crontab -l
# Check cron service is running
ps aux | grep cron
Windows
# View your scheduled tasks
schtasks /query /tn BragDoc*
# Open Task Scheduler GUI for visual management
taskschd.msc
Cache Management (cache)
Manage the local commit cache to optimize performance.
# List cached commits
bragdoc cache list
bragdoc cache list --stats
# Clear cache
bragdoc cache clear # Clear current project's cache
bragdoc cache clear --all # Clear all cached data
bragdoc cache clear --project name # Clear specific project's cache
Data Management (data)
Manage the local cache of companies, projects, and standups data. The CLI automatically caches this data to reduce API calls and improve performance.
# Fetch all data from API (force refresh)
bragdoc data fetch
# Clear all cached data
bragdoc data clear
Cache Timeout: By default, cached data is refreshed every 5 minutes. You can configure this in your config file using the dataCacheTimeout setting (in minutes).
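For example, to refresh the cached data every 30 minutes instead of 5, you might add a setting along these lines to ~/.bragdoc/config.yml (the placement of dataCacheTimeout under settings is an assumption; only the setting name comes from this README):
settings:
  dataCacheTimeout: 30  # assumed location; value is the refresh interval in minutes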
The data cache is stored in ~/.bragdoc/cache/ and includes:
- companies.yml - Your companies
- projects.yml - Your projects
- standups.yml - Your standups
- meta.yml - Cache metadata and timestamps
Extraction Detail Levels
BragDoc CLI supports configurable extraction detail levels to control how much data is collected from git commits. More detailed extraction provides the LLM with richer context for better achievement extraction, but uses more LLM tokens and takes longer to process.
Detail Levels
- minimal: Commit messages only (fastest, least context)
- standard: Messages + file statistics (recommended default)
- detailed: Messages + stats + limited code diffs
- comprehensive: Messages + stats + extensive code diffs (slowest, most context)
CLI Options
# Use a preset detail level
bragdoc extract --detail-level detailed
# Fine-grained control
bragdoc extract --include-stats # Add file statistics only
bragdoc extract --include-stats --include-diff # Add both stats and diffs
Configuration File
Set defaults in ~/.bragdoc/config.yml:
# Global default for all projects
settings:
  defaultExtraction:
    detailLevel: standard
# Project-specific configuration
projects:
  - path: /home/user/my-project
    extraction:
      detailLevel: detailed
      # Or fine-grained control:
      includeStats: true
      includeDiff: true
      maxDiffLinesPerCommit: 800
      excludeDiffPatterns:
        - "*.lock"
        - "dist/**"
Performance Considerations
- minimal: Fastest, best for large commit batches
- standard: Good balance of speed and context (recommended)
- detailed: Slower, use for smaller batches or important projects
- comprehensive: Slowest, only for critical extractions or small batches
Diff extraction adds significant LLM context. Consider reducing --batch-size when using detailed or comprehensive levels.
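For example, a smaller batch paired with a higher detail level might look like this (the --batch-size flag is referenced above; the value 5 is purely illustrative):
bragdoc extract --detail-level detailed --batch-size 5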
Integration Tests
The CLI includes integration tests that verify extraction functionality across all detail levels.
Running Integration Tests
# From CLI directory
pnpm test:integration
# From project root
pnpm test:integration
Updating Snapshots
When extraction output changes intentionally (e.g., formatting improvements), update snapshots:
UPDATE_SNAPSHOTS=1 ./tests/integration/cli-extraction.sh
Review the updated snapshots before committing to ensure changes are correct.
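One way to review the regenerated snapshots before committing is a plain git diff over the integration test directory (path assumed from the script location above):
git diff tests/integration/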
Non-Interactive Mode
The bragdoc init command supports non-interactive mode for testing and automation:
bragdoc init \
--name "My Project" \
--detail-level standard \
--no-schedule \
--skip-llm-config \
--skip-api-sync
Flags:
- --detail-level <level>: Set extraction detail level (minimal, standard, detailed, comprehensive)
- --no-schedule: Skip automatic extraction schedule setup
- --skip-llm-config: Skip LLM configuration (uses environment variables)
- --skip-api-sync: Skip API sync (creates local-only project)
See tests/integration/README.md for detailed documentation on the integration test system.
Configuration
The CLI stores configuration in ~/.bragdoc/config.yml:
- Authentication tokens
- Project settings and schedules
- LLM provider configuration
- Commit cache locations
- API configuration
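Pulling together keys documented elsewhere in this README, a config file might look roughly like this (layout is illustrative only; authentication tokens and API settings are written by the CLI and omitted here):
settings:
  defaultExtraction:
    detailLevel: standard
llm:
  provider: ollama
  ollama:
    model: llama3.2
projects:
  - path: /home/user/my-project
    extraction:
      detailLevel: detailed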
LLM Provider Configuration
The CLI uses AI to analyze your commits and extract achievements. It supports multiple LLM providers, giving you flexibility to choose based on cost, performance, or privacy requirements.
Supported Providers
The CLI supports the following LLM providers:
OpenAI (GPT-4, GPT-4o)
- Requires: API key from https://platform.openai.com/api-keys
- Default model: gpt-4o
Anthropic (Claude)
- Requires: API key from https://console.anthropic.com/settings/keys
- Default model: claude-3-5-sonnet-20241022
Google (Gemini)
- Requires: API key from https://aistudio.google.com/app/apikey
- Default model: gemini-1.5-pro
DeepSeek
- Requires: API key from https://platform.deepseek.com/api_keys
- Default model: deepseek-chat
Ollama (Local LLMs)
- Requires: Ollama installed locally (https://ollama.com)
- Models: llama3.2, qwen2.5-coder, etc.
- No API key needed - runs entirely on your machine
OpenAI-Compatible (LM Studio, LocalAI, etc.)
- Requires: Base URL and model name
- Example: LM Studio at http://localhost:1234/v1
- Optional API key depending on your setup
How Configuration Works
You can configure your LLM provider in two ways:
1. Interactive Setup (Recommended)
Run bragdoc llm set to launch an interactive wizard that guides you through provider selection and configuration.
The CLI will also automatically prompt you to configure an LLM provider when you:
- Run bragdoc init to add a new project
- Run bragdoc extract without an LLM configured
- Run bragdoc projects update --schedule to set up automatic extraction
- Run bragdoc standup enable to enable standup WIP extraction
You'll be guided through an interactive setup that asks for:
- Which provider you want to use
- API key (if required)
- Model name (with sensible defaults)
- Base URL (for Ollama and OpenAI-compatible providers)
The configuration is saved to ~/.bragdoc/config.yml with secure file permissions (0600).
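To confirm the permissions on your machine:
ls -l ~/.bragdoc/config.yml
# 0600 shows up as -rw------- (owner read/write only)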
2. Manual Configuration
You can manually edit ~/.bragdoc/config.yml to configure your LLM provider. Here are examples for each provider:
OpenAI:
llm:
  provider: openai
  openai:
    apiKey: sk-your-api-key-here
    model: gpt-4o
    baseURL: https://api.openai.com/v1 # optional
Anthropic:
llm:
  provider: anthropic
  anthropic:
    apiKey: sk-ant-your-api-key-here
    model: claude-3-5-sonnet-20241022
Google:
llm:
  provider: google
  google:
    apiKey: your-google-api-key-here
    model: gemini-1.5-pro
DeepSeek:
llm:
  provider: deepseek
  deepseek:
    apiKey: your-deepseek-api-key-here
    model: deepseek-chat
    baseURL: https://api.deepseek.com/v1 # optional
Ollama:
llm:
  provider: ollama
  ollama:
    model: llama3.2
    baseURL: http://localhost:11434/api # optional, defaults to this
OpenAI-Compatible:
llm:
  provider: openai-compatible
  openaiCompatible:
    baseURL: http://localhost:1234/v1
    model: your-model-name
    apiKey: optional-api-key # only if your server requires it
Automated Workflow Example
Here's how to set up fully automated achievement tracking:
# 1. Install and authenticate
npm install -g @bragdoc/cli
bragdoc login
# 2. Initialize your projects with scheduling
cd ~/work/frontend-app
bragdoc init --name "Frontend App"
# Choose "Daily" → Enter "18:00" → Automatically installs to crontab
cd ~/work/backend-api
bragdoc init --name "Backend API"
# Choose "Hourly" → Enter "0" → Automatically updates crontab
cd ~/work/mobile-app
bragdoc init --name "Mobile App"
# Choose "Daily" → Enter "09:00" → Automatically updates crontab
# 3. Your achievements are now automatically extracted:
# - Frontend App: Daily at 6:00 PM
# - Backend API: Every hour on the hour
# - Mobile App: Daily at 9:00 AM
Best Practices
Automatic Scheduling:
- Set daily extractions for active projects
- Use hourly for rapidly evolving projects
- Schedule during off-hours to avoid interruption
Project Organization:
- Add projects you actively contribute to
- Use meaningful project names
- Set appropriate max-commit limits (100-500)
Schedule Management:
- Use system-level scheduling for reliability
- Check system logs if extractions fail
- Re-run installation commands to update schedules
Cache Management:
- The cache prevents re-processing of commits
- Clear cache if you need to re-process commits
- Use cache list --stats to monitor cache size
Error Handling
The CLI provides detailed error messages and logging:
- Authentication errors
- Project validation issues
- API communication problems
- Cache-related errors
Environment Variables
- BRAGDOC_API_URL: Override the API endpoint
- BRAGDOC_DEBUG: Enable debug logging
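For example, a one-off extraction with debug logging enabled might look like this (using 1 as the value for BRAGDOC_DEBUG is an assumption about how the flag is toggled):
BRAGDOC_DEBUG=1 bragdoc extract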
Troubleshooting
Authentication Issues
- Ensure you're logged in: bragdoc auth status
- Try logging out and back in
- Check your internet connection
Project Issues
- Verify project path exists
- Ensure project has a remote URL
- Check project permissions
Extraction Issues
- Verify project is enabled: bragdoc projects list
- Check max-commits setting
- Try clearing the cache
Scheduling Issues
- Linux/macOS: Check crontab with crontab -l
- Windows: Check tasks with schtasks /query /tn BragDoc*
- Verify system scheduling permissions
- Check extraction logs in system scheduler
- Check that system scheduling was properly set up automatically
System Integration Issues
- Windows: Run Command Prompt as Administrator for task creation
- macOS: Check LaunchAgent with launchctl list | grep bragdoc
- Linux: Verify systemd user services are enabled
GitHub Connector Issues
"GitHub CLI (gh) is not installed" error:
- Install gh from https://cli.github.com/
- Verify installation: gh --version
"GitHub CLI is not authenticated" error:
- Run gh auth login to authenticate
- Follow the prompts to complete the OAuth flow
- Verify with gh auth status
"Cannot access repository" error:
- Verify the repository format is correct: owner/repo
- Check you have access to the repository: gh repo view owner/repo
- For private repos, ensure your gh token has appropriate scopes
Rate limiting errors:
- GitHub API has rate limits; wait and try again
- Consider reducing extraction frequency
- Check your rate limit: gh api rate_limit
LLM Configuration Issues
"LLM provider is not configured" error:
- Run bragdoc llm set to configure your LLM provider
- Or run bragdoc init to trigger the interactive LLM setup
- Or manually edit ~/.bragdoc/config.yml to add your LLM configuration
- Verify your API key is correct and has sufficient credits/quota
Ollama "Not Found" (404) errors:
- Ensure Ollama is running: ollama serve
- Verify the baseURL includes /api: http://localhost:11434/api
- Check that the model is pulled: ollama pull llama3.2
Scheduled extractions not using configured LLM:
- Ensure LLM config is properly set in ~/.bragdoc/config.yml
- Run bragdoc extract manually first to verify LLM configuration works
API rate limiting or quota errors:
- Check your API provider's dashboard for usage limits
- Consider switching to a different provider or local Ollama
- Reduce extraction frequency in your schedule
Contributing
We welcome contributions! Please see our Contributing Guide for details.
License
MIT License - see LICENSE for details.
