codeguard-testgen v1.0.21
CodeGuard
AI-powered code review and unit test generator for TypeScript/JavaScript projects.
CodeGuard combines Abstract Syntax Tree analysis with large language models (Claude, OpenAI, or Gemini) to automatically review code quality and generate comprehensive test suites — either on demand or as part of your CI/CD pipeline.
Table of Contents
- Documentation
- Installation
- Quick Start
- Commands
- Features
- How It Works
- Configuration Reference
- Output Files
- Programmatic API
- CI/CD Integration
- Supported AI Providers
- Troubleshooting
- Requirements
- Contributing
- License
Documentation
| Document | Contents |
|----------|---------|
| This file | Quick-start, commands, and feature overview |
| docs/CONFIGURATION.md | Complete codeguard.json reference |
| docs/API.md | Exported functions, types, and interfaces |
| docs/ARCHITECTURE.md | System design, module map, data-flow diagrams |
| docs/CONTRIBUTING.md | Development setup and contribution guide |
Installation
Global (recommended):

```bash
# npm
npm install -g codeguard-testgen

# yarn
yarn global add codeguard-testgen
```

Local (project dev dependency):

```bash
# npm
npm install --save-dev codeguard-testgen

# yarn
yarn add --dev codeguard-testgen
```

Requirements: Node.js >= 16, plus @babel/parser and @babel/traverse in the target project:

```bash
# npm
npm install --save-dev @babel/parser @babel/traverse

# yarn
yarn add --dev @babel/parser @babel/traverse
```

Quick Start
1. Create codeguard.json in your project root
```json
{
  "aiProvider": "claude",
  "apiKeys": {
    "claude": "sk-ant-..."
  },
  "testEnv": "vitest",
  "testDir": "src/tests"
}
```

See docs/CONFIGURATION.md for all options.

2. Stage your changes and run

```bash
git add src/services/user.service.ts
testgen auto
```

CodeGuard reviews the changed code and generates tests for the modified functions. Results are saved to reviews/code_review.md and your testDir.
Commands
| Command | Description |
|---------|-------------|
| testgen auto | Code review and test generation for staged changes |
| testgen review | Code review only |
| testgen test | Test generation only |
| testgen doc | API documentation generation (in development) |
| testgen | Interactive mode — choose mode and files manually |
Both testgen and codeguard are registered as CLI entry points.
Features
Code Review (testgen review)
- Multi-step review pipeline: code quality, potential bugs, performance, security
- Each step is driven by a customisable markdown ruleset file in codeguard-ruleset/
- Steps run in parallel by default (configurable via reviewExecutionMode)
- Output: reviews/code_review.md with severity-tagged findings
- Built-in rulesets cover:
- Code Quality — naming, complexity, dead code, duplication
- Potential Bugs — null/undefined access, off-by-one errors, async issues, race conditions, test spy/mock ordering anti-patterns
- Performance — algorithmic complexity, unnecessary re-renders, N+1 queries
- Security — injection, secrets in code, insecure dependencies
Test Generation (testgen test)
- Detects changed files and modified exported functions via git diff
- Uses AST analysis to understand function signatures, types, and dependencies
- Generates tests with proper mocks, edge cases, and error scenarios
- Supports Vitest and Jest
- Supports npm and yarn as the package manager for running tests
- Iterative AI loop automatically fixes import errors, missing mocks, and type issues
- TypeScript type-checking (tsc --noEmit) runs after each write to catch type errors early
- Preserves existing tests for unchanged functions
Unified Auto Mode (testgen auto)
Runs review and test generation sequentially in one command. Suitable for pre-commit hooks and CI/CD pipelines.
Interactive Mode (testgen)
Guides you through selecting files and functions manually. Supports:
- File-wise: generate tests for all exported functions in a file
- Folder-wise: batch process every file in a directory
- Function-wise: target specific functions with optional periodic validation
Codebase Indexing
On the first interactive run you will be offered the option to build a codebase index (stored in .codeguard-cache/index.json). The index caches AST analysis results, providing roughly 100× speedup on subsequent runs.
How It Works
Test Generation

```
1. git diff → identify changed source files
2. AI analysis → identify which exported functions changed
3. For each changed function:
   a. Build prompt: function AST + imports + type definitions
   b. AI conversation loop (max 30 iterations)
      AI calls tools: get_function_ast, get_imports_ast, find_file,
      upsert_function_tests, run_tests, run_tsc,
      search_replace_block, insert_at_position,
      search_codebase, get_file_preamble
   c. Write test file → run tsc → run tests → parse failures
   d. If tests fail: AI fixes and loops again
4. Report results
```

Code Review
```
1. git diff → identify changed source files
2. Load enabled review steps from codeguard.json
3. For each step (in parallel or sequential):
   a. Read ruleset markdown from codeguard-ruleset/
   b. AI conversation loop with AST tools
   c. Write findings to temporary markdown file
4. Merge all step outputs → reviews/code_review.md
```

AI Tool System
CodeGuard does not allow the AI to read or write files directly. All file access is mediated through typed tools that the host process executes, providing auditability and safe error handling.
| Tool | Purpose |
|------|---------|
| read_file | Read full file content (with size guard) |
| read_file_lines | Read a specific line range of a file |
| analyze_file_ast | Parse file structure — functions, classes, exports |
| get_function_ast | Full AST detail for a single function |
| get_imports_ast | Extract all import statements |
| get_type_definitions | Extract TypeScript interfaces and type aliases |
| get_file_preamble | Extract imports, vi.mock calls, and setup blocks |
| get_class_methods | Extract all methods from a class |
| upsert_function_tests | Write or update a function's describe block |
| search_replace_block | Fuzzy-match search-and-replace inside a file |
| insert_at_position | Insert content at beginning, end, after imports, etc. |
| run_tests | Run Vitest/Jest for a test file |
| run_tsc | TypeScript type-check a test file (tsc --noEmit) |
| find_file | Locate a file by name anywhere in the repo |
| list_directory | List files in a directory |
| calculate_relative_path | Compute correct relative import paths |
| resolve_import_path | Resolve a relative import to an absolute path |
| search_codebase | Grep the entire codebase for a pattern |
| write_review | Write review findings to a markdown file |
See docs/API.md — Tool Definitions for full schemas.
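The mediation described above boils down to a registry of typed handlers that the host process executes on the model's behalf. A minimal sketch of that pattern, assuming hypothetical names (`toolRegistry`, `executeToolCall` are illustrative, not codeguard-testgen's actual internals):

```typescript
// Illustrative sketch of host-mediated tool execution. The AI only names a
// tool and its arguments; the host performs the actual file access.
type ToolHandler = (args: Record<string, string>) => string;

const toolRegistry: Record<string, ToolHandler> = {
  // Stub handlers standing in for real file-system access.
  read_file: (args) => `contents of ${args.path}`,
  find_file: (args) => `src/${args.name}`,
};

function executeToolCall(name: string, args: Record<string, string>): string {
  const handler = toolRegistry[name];
  if (!handler) {
    // Unknown tools are rejected rather than improvised — this is the audit point.
    throw new Error(`Unknown tool: ${name}`);
  }
  return handler(args);
}

console.log(executeToolCall("find_file", { name: "user.service.ts" }));
```

Because every call passes through one dispatch function, the host can log, size-limit, or deny each access uniformly.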
Configuration Reference
Create a codeguard.json file in the root of the project you want to analyse.
Minimal example

```json
{
  "aiProvider": "claude",
  "apiKeys": {
    "claude": "sk-ant-..."
  }
}
```

Full example

```json
{
  "aiProvider": "claude",
  "apiKeys": {
    "claude": "sk-ant-...",
    "openai": "sk-...",
    "gemini": "AI..."
  },
  "models": {
    "claude": "claude-sonnet-4-5-20250929"
  },
  "testEnv": "vitest",
  "packageManager": "yarn",
  "testDir": "src/tests",
  "sourceRoot": "src",
  "reviewExecutionMode": "parallel",
  "reviewSteps": [
    { "id": "code-quality", "name": "Code Quality", "category": "quality", "type": "ai", "enabled": true, "ruleset": "code-quality.md" },
    { "id": "potential-bugs", "name": "Potential Bugs", "category": "bugs", "type": "ai", "enabled": true, "ruleset": "potential-bugs.md" },
    { "id": "performance", "name": "Performance Issues", "category": "performance", "type": "ai", "enabled": true, "ruleset": "performance.md" },
    { "id": "security", "name": "Security Vulnerabilities", "category": "security", "type": "ai", "enabled": true, "ruleset": "security.md" }
  ]
}
```

All options
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| aiProvider | string | — | Required. "claude", "openai", or "gemini" |
| apiKeys | object | — | Required. API keys for the chosen provider |
| models | object | see below | Override default model per provider |
| testEnv | string | "jest" | "vitest" or "jest" |
| packageManager | string | "npm" | "npm" or "yarn" — used when running tests |
| testDir | string | "src/tests" | Output directory for generated test files |
| sourceRoot | string | "src" | Source root; used to mirror directory structure in testDir |
| extensions | string[] | [".ts",".tsx",".js",".jsx"] | File extensions to process |
| excludeDirs | string[] | ["node_modules","dist",...] | Directories to skip |
| reviewExecutionMode | string | "parallel" | "parallel" or "sequential" |
| reviewSteps | ReviewStep[] | 4 built-in steps | Array of review step objects |
| validationInterval | number | undefined | undefined = no periodic validation, 0 = validate at end only, N = validate every N functions |
Full reference: docs/CONFIGURATION.md
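The three-way validationInterval rule in the table above can be sketched as follows (illustrative only — `shouldValidate` is a hypothetical helper, not part of the package's API):

```typescript
// Hypothetical helper mirroring the validationInterval semantics:
// undefined = no periodic validation, 0 = validate at end only,
// N = validate every N functions (and at the end of the batch).
function shouldValidate(
  processed: number, // functions processed so far (1-based)
  total: number,     // total functions in the batch
  interval?: number
): boolean {
  if (interval === undefined) return false;
  if (interval === 0) return processed === total;
  return processed % interval === 0 || processed === total;
}

console.log(shouldValidate(2, 5, 2)); // periodic check fires
console.log(shouldValidate(3, 5, 0)); // end-only mode, not done yet
```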
Output Files
| Path | Contents |
|------|---------|
| reviews/code_review.md | Merged review report from all enabled steps |
| <testDir>/**/*.test.ts | Generated/updated test files |
| .codeguard-cache/index.json | Optional codebase index (auto-managed) |
Programmatic API
The package exports its core functions for use as a library:
```ts
import {
  generateTests,
  generateTestsForFolder,
  generateTestsForFunction,
  generateTestsForFunctions,
  analyzeFileAST,
  getFunctionAST,
  getImportsAST,
  getTypeDefinitions,
  getClassMethods,
  executeTool,
  CodebaseIndexer,
  TOOLS,
} from 'codeguard-testgen';

// Generate tests for a whole file
await generateTests('src/services/user.service.ts');

// Generate tests for specific functions only
await generateTestsForFunctions(
  'src/api/auth.ts',
  ['login', 'logout', 'validateToken']
);

// Analyse a file's AST structure
const result = analyzeFileAST('src/utils/helpers.ts');
console.log(`Found ${result.analysis.functions.length} functions`);
```

See docs/API.md for full type definitions and function signatures.
CI/CD Integration
GitHub Actions — auto mode
```yaml
name: AI Code Review & Tests
on:
  pull_request:
    branches: [main]
jobs:
  codeguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      - run: npm install
      - run: npm install -g codeguard-testgen
      - name: Review and generate tests
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: testgen auto
      - uses: actions/upload-artifact@v3
        with:
          name: code-review
          path: reviews/
```

Pre-commit hook

```bash
#!/bin/bash
# .git/hooks/pre-commit
testgen auto
git add src/tests/ reviews/
```

Supported AI Providers
| Provider | Config value | Default model |
|----------|-------------|---------------|
| Anthropic Claude | "claude" | claude-sonnet-4-5-20250929 |
| OpenAI | "openai" | gpt-5-mini |
| Google Gemini | "gemini" | gemini-2.0-flash-lite |
All providers share the same retry logic: exponential backoff with jitter, retrying on 429/5xx responses.
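A minimal sketch of that retry policy — exponential backoff with full jitter, retrying only on 429 and 5xx (function names here are illustrative, not the package's source):

```typescript
// Exponential backoff with full jitter: delay grows as base * 2^attempt,
// capped, then a uniform random fraction of that window is slept.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const window = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * window; // uniform in [0, window)
}

// Retry on rate limits (429) and server errors (5xx); fail fast otherwise.
function isRetryable(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600);
}

console.log(isRetryable(429), isRetryable(503), isRetryable(404));
```

Full jitter spreads concurrent clients across the window, which avoids the synchronized retry bursts a fixed delay would cause.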
Troubleshooting
| Error | Cause | Fix |
|-------|-------|-----|
| codeguard.json not found | Missing config file | Create it in the project root |
| API key not configured | Missing key | Add to codeguard.json or set env var |
| Not a git repository | auto/test/review require git | Run git init |
| No changes detected | No staged/unstaged source changes | Check git status and git diff |
| No exported functions changed | Only internal functions modified | Ensure the changed function is exported |
| Missing required packages | Babel not installed | npm install --save-dev @babel/parser @babel/traverse |
More troubleshooting: docs/CONFIGURATION.md#troubleshooting
Requirements
- Node.js >= 16.0.0
- Git (for the auto, review, and test commands)
- @babel/parser and @babel/traverse in the target project
- Vitest or Jest (for running generated tests)
Contributing
See docs/CONTRIBUTING.md.
Issues
github.com/Ekansh-Gahlot/codeguard-testgen/issues
License
MIT — see LICENSE.
