CodeGuard - AI Code Review & Test Generator
AI-powered code review and unit test generator with AST analysis for TypeScript/JavaScript projects. Automatically reviews code quality and generates comprehensive Jest tests using Claude, OpenAI, or Gemini.
⚡ NEW: Unified Auto Mode - Automatically review code quality AND generate tests for staged functions!
testgen auto     # Reviews code + generates tests
testgen review   # Only code review
testgen test     # Only test generation
testgen doc      # Generate API documentation
Features
Code Review
- 🔍 AI Code Review: Comprehensive analysis of code quality, bugs, performance, and security
- 👨‍💻 Senior Developer Perspective: AI reviews code like an experienced developer
- 📊 Structured Reports: Markdown reviews with severity levels (Critical, High, Medium, Low)
- 🎯 Context-Aware: Uses AST analysis to understand full code context before reviewing
- 📁 Review History: All reviews saved to the reviews/ directory for reference
Test Generation
- 🤖 AI-Powered: Uses Claude, OpenAI GPT, or Google Gemini to generate intelligent tests
- 🔍 AST Analysis: Deep code analysis using Babel parser for accurate test generation
- 📦 Codebase Indexing: Optional caching for 100x faster analysis on large projects
- 🎯 Multiple Modes: File-wise, folder-wise, function-wise, or auto test generation
- ✅ Smart Validation: Detects incomplete tests, missing assertions, and legitimate failures
- 🔄 Iterative Fixing: Automatically fixes import errors, missing mocks, and test issues
- 📋 TypeScript Support: Full support for TypeScript types, interfaces, and decorators
Unified Workflow
- ⚡ Auto Mode: Reviews code quality + generates tests for changed functions
- 🔄 Git Integration: Detects changes via git diff (staged and unstaged)
- 🚀 CI/CD Ready: Non-interactive modes perfect for automation
- 📚 Documentation Mode: AI-powered OpenAPI/Swagger documentation generation
Installation
Global Installation (Recommended)
npm install -g codeguard-testgen
Local Installation
npm install --save-dev codeguard-testgen
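With a local install the CLI is not on your PATH, so run it through npx or an npm script instead (standard npm behavior; this assumes the testgen bin name used throughout this README):

```bash
npx testgen auto
```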
Configuration
Create a codeguard.json file in your project root:
{
"aiProvider": "claude",
"apiKeys": {
"claude": "sk-ant-api03-...",
"openai": "sk-...",
"gemini": "..."
},
"models": {
"claude": "claude-sonnet-4-20250514",
"openai": "gpt-4o-mini",
"gemini": "gemini-2.0-flash-exp"
},
"testEnv": "vitest/jest",
"testDir": "src/tests",
"excludeDirs": ["node_modules", "dist", "build", ".git", "coverage"],
"validationInterval": 5,
"reviewExecutionMode": "parallel",
"reviewSteps": [
{
"id": "code-quality",
"name": "Code Quality",
"category": "quality",
"type": "ai",
"enabled": true,
"ruleset": "code-quality.md"
},
{
"id": "security",
"name": "Security",
"category": "security",
"type": "ai",
"enabled": true,
"ruleset": "security.md"
}
]
}
Configuration Options
| Option | Required | Description |
|--------|----------|-------------|
| aiProvider | Yes | AI provider to use: claude, openai, or gemini |
| apiKeys | Yes | API keys for the AI providers |
| models | No | Custom model names (uses defaults if not specified) |
| testDir | No | Directory for test files (default: src/tests) |
| extensions | No | File extensions to process (default: .ts, .tsx, .js, .jsx) |
| excludeDirs | No | Directories to exclude from scanning |
| validationInterval | No | Validation frequency in function-wise mode: undefined = no validation, 0 = only at end, N = validate every N functions |
| docsDir | No | Directory for generated documentation (default: docs) |
| docFormat | No | Documentation format: json or yaml (default: json) |
| docTitle | No | API documentation title (default: from package.json name) |
| docVersion | No | API version (default: from package.json version) |
| includeGenericFunctions | No | Include non-API functions in documentation (default: true) |
| repoDoc | No | Document entire repository (true) or only staged changes (false, default) |
| reviewSteps | No | Array of review steps with custom rulesets (see below) |
| reviewExecutionMode | No | How to execute review steps: parallel or sequential (default: parallel) |
Configurable Review Steps
Configure custom review steps with rulesets defined in markdown files. Each ruleset is stored in the codeguard-ruleset/ folder at your project root.
{
"reviewExecutionMode": "parallel",
"reviewSteps": [
{
"id": "code-quality",
"name": "Code Quality",
"category": "quality",
"type": "ai",
"enabled": true,
"ruleset": "code-quality.md"
},
{
"id": "security",
"name": "Security Vulnerabilities",
"category": "security",
"type": "ai",
"enabled": true,
"ruleset": "security.md"
}
]
}
Review Step Options:
| Option | Required | Description |
|--------|----------|-------------|
| id | Yes | Unique identifier for the step |
| name | Yes | Display name for the review step |
| category | Yes | Category of the review (e.g., quality, security, performance) |
| type | Yes | Type of review (currently only ai supported) |
| enabled | Yes | Whether this step is active (true or false) |
| ruleset | Yes | Filename of markdown ruleset in codeguard-ruleset/ folder |
Ruleset Files:
Rulesets are markdown files stored in codeguard-ruleset/ at your project root:
your-project/
├── codeguard.json
├── codeguard-ruleset/
│ ├── code-quality.md
│ ├── security.md
│ ├── performance.md
│ └── custom-rules.md
└── src/
Each ruleset file can contain:
- Detailed review criteria
- Specific rules and guidelines
- Examples and code snippets
- Severity guidelines
- OWASP references (for security)
- Best practices documentation
Example Ruleset File (codeguard-ruleset/code-quality.md):
# Code Quality Review Ruleset
## Review Criteria
### 1. Naming Conventions
- Functions: Use clear, descriptive names
- Variables: Use meaningful names
- Boolean variables: Prefix with is, has, should
### 2. Code Complexity
- Functions should be concise (< 50 lines)
- Cyclomatic complexity should be low (< 10)
- Avoid deeply nested conditionals
...
Execution Modes:
- parallel (default): All enabled review steps run simultaneously for faster completion
- sequential: Steps run one after another in the order defined
Default Review Steps:
If you don't specify reviewSteps in your config, CodeGuard uses these default steps:
- ✅ Code Quality (code-quality.md) - Naming, complexity, readability, best practices
- ✅ Potential Bugs (potential-bugs.md) - Logic errors, edge cases, type issues, async problems
- ✅ Performance (performance.md) - Algorithm efficiency, unnecessary computations, memory leaks
- ✅ Security (security.md) - Input validation, injection risks, OWASP vulnerabilities
Included Ruleset Files:
CodeGuard comes with comprehensive default rulesets in codeguard-ruleset/:
- code-quality.md - 8 categories including naming, complexity, patterns, error handling
- potential-bugs.md - 8 categories covering logic errors, edge cases, async issues
- performance.md - 8 categories for algorithms, caching, data structures, optimizations
- security.md - OWASP Top 10 coverage with specific checks and references
You can customize these files or create your own rulesets for project-specific requirements.
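For example, a project-specific step can pair its own ruleset file with an extra reviewSteps entry. The api-conventions id and custom-rules.md filename below are hypothetical, not shipped defaults; the entry simply follows the schema documented above:

```json
{
  "reviewSteps": [
    {
      "id": "api-conventions",
      "name": "API Conventions",
      "category": "quality",
      "type": "ai",
      "enabled": true,
      "ruleset": "custom-rules.md"
    }
  ]
}
```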
Output Format:
Reviews are organized by step in the final markdown file:
# Code Review
## Summary
[Overall assessment]
## Files Changed
[List of files]
## Code Quality
[Findings from code quality step]
## Security Vulnerabilities
[Findings from security step]
## Performance Issues
[Findings from performance step]
## Conclusion
[Final assessment]
See codeguard.example.json for a complete configuration example with additional review steps like Accessibility and Documentation Quality.
Validation Interval
The validationInterval option controls when the full test suite is validated during function-wise test generation:
- undefined (default): No periodic validation - fastest, tests each function independently
- 0: Validate only at the end after all functions are processed
- N (number): Validate every N functions to catch integration issues early
Example use cases:
{
  "validationInterval": 5   // Validate after every 5 functions
}
{
  "validationInterval": 0   // Validate only once at the end
}
Omit validationInterval entirely for the fastest run with no validation checkpoints.
Recommendation: Use 5 or 10 for large files with many functions to catch integration issues early. Omit the option for fastest processing.
Getting API Keys
- Claude (Anthropic): https://console.anthropic.com/
- OpenAI: https://platform.openai.com/api-keys
- Gemini (Google): https://makersuite.google.com/app/apikey
Quick Reference
| Command | Description | Use Case |
|---------|-------------|----------|
| testgen auto | Review code quality + generate tests | Complete workflow, CI/CD |
| testgen review | Only review code changes | Code review, quality checks |
| testgen test | Only generate tests for changes | Testing workflow |
| testgen | Interactive mode - choose mode manually | Exploratory testing |
| Mode 1: File-wise | Generate tests for entire file | New files, comprehensive coverage |
| Mode 2: Folder-wise | Generate tests for all files in folder | Batch processing |
| Mode 3: Function-wise | Generate tests for specific functions | Incremental testing |
Usage
Auto Mode - Complete Workflow (Recommended)
Automatically review code quality and generate tests for changed functions:
testgen auto
What it does:
- Reviews changed code for quality, bugs, performance, and security issues
- Generates comprehensive tests for modified functions
- Saves review to reviews/{filename}.review.md
- Creates or updates test files
Example output:
🔍 Scanning git changes for review...
📝 Found changes in 1 file(s) to review
🔄 Reviewing: src/services/user.service.ts
📦 Changed functions: createUser, updateUser
✅ Review completed
📁 Reviews saved to: reviews/ directory
============================================================
🔍 Scanning git changes for testing...
📝 Found changes in 1 file(s)
🔄 Processing: src/services/user.service.ts
📦 Changed functions: createUser, updateUser
✅ Tests generated successfully
Review Only Mode
Get AI code review without generating tests:
testgen review
What gets reviewed:
- 🎯 Code Quality: Naming, complexity, readability, best practices
- 🐛 Potential Bugs: Logic errors, edge cases, type mismatches, async issues
- ⚡ Performance: Inefficient algorithms, memory leaks, unnecessary computations
- 🔒 Security: Input validation, injection risks, authentication issues
Review output (reviews/{filename}.review.md):
# Code Review: user.service.ts
## Summary
Overall code quality is good with some areas for improvement...
## Findings
### 🔴 Critical Issues
#### [Security] Missing Input Validation
**Function**: `createUser`
**Issue**: Email parameter not validated before database insertion...
**Recommended Fix**:
```typescript
if (!email || !email.includes('@')) {
  throw new Error('Invalid email');
}
```
### 🟡 Medium Priority Issues
#### [Performance] Inefficient Loop
...
## ✅ Positive Aspects
- Well-structured error handling
- Clear function naming
## 💡 General Recommendations
1. Add input validation for all public functions
2. Consider adding JSDoc comments
Test Generation Only Mode
Generate tests without code review:
testgen test
How it works:
- Reads both git diff --staged and git diff to find all changes
- Identifies which files have been modified
- Uses AI to detect which exported functions have changes
- Automatically generates or updates tests for those functions
- No user interaction required - perfect for automation!
Example workflows:
Complete workflow (review + test):
# Make changes to your code
vim src/services/user.service.ts
# Stage your changes
git add src/services/user.service.ts
# Review code quality and generate tests
testgen auto
Review only:
# Get code review for staged changes
testgen review
# Check the review
cat reviews/user.service.review.md
Test generation only:
# Generate tests without review
testgen test
Documentation generation:
# Generate API documentation
testgen doc
Output:
🧪 AI-Powered Unit Test Generator with AST Analysis
🤖 Auto Mode: Detecting changes via git diff
✅ Using OPENAI (gpt-4o-mini) with AST-powered analysis
🔍 Scanning git changes...
📝 Found changes in 2 file(s)
🔄 Processing: src/services/user.service.ts
📦 Changed functions: createUser, updateUser
✅ Tests generated successfully
============================================================
📊 Auto-Generation Summary
============================================================
✅ Successfully processed: 1 file(s)
📝 Functions tested: 2
============================================================
Benefits:
- 🔍 Quality Assurance: Catch issues before they reach production
- ⚡ Fast: Only processes changed files
- 🎯 Targeted: Reviews and tests only modified functions
- 🔄 CI/CD Ready: Non-interactive, perfect for automation
- 🛡️ Safe: Preserves existing tests for unchanged functions
- 📊 Trackable: All reviews saved for historical reference
What files are processed:
- ✅ Source files with supported extensions (.ts, .tsx, .js, .jsx)
- ✅ Files with exported functions
- ❌ Test files (.test., .spec., __tests__/, /tests/)
- ❌ Files in node_modules, dist, build, etc.
- ❌ Non-source files (configs, markdown, etc.)
Interactive Mode
Simply run the command and follow the prompts:
testgen
or
codeguard
You'll be guided through:
- Selecting test generation mode (file/folder/function-wise)
- Choosing files or functions to test
- Optional codebase indexing for faster processing
Test Generation Modes
1. File-wise Mode
Generate tests for a single file:
- Select from a list of source files
- Generates comprehensive tests for all exported functions
- Creates test file with proper structure and mocks
2. Folder-wise Mode
Generate tests for all files in a directory:
- Select a folder from your project
- Processes all matching files recursively
- Batch generates tests with progress tracking
3. Function-wise Mode
Generate tests for specific functions:
- Select a file
- Choose which functions to test
- Preserves existing tests for other functions
- Ideal for incremental test development
4. Auto Mode (Unified)
Review code quality and generate tests automatically:
- Analyzes git diff (staged and unstaged changes)
- AI reviews code for quality, bugs, performance, security
- Generates comprehensive review markdown files
- Creates tests for changed exported functions
- Non-interactive - perfect for CI/CD pipelines
- Use: testgen auto
5. Review Mode
AI-powered code review only:
- Comprehensive analysis by senior-level AI reviewer
- Reviews code quality, potential bugs, performance issues, security vulnerabilities
- Uses AST tools to understand full context
- Generates structured markdown reports
- Use: testgen review
6. Test Mode
Test generation only:
- Generates tests for changed functions
- Skips code review process
- Faster when you only need tests
- Use: testgen test
7. Documentation Mode
AI-powered API documentation generation:
- Default: Documents only staged/changed functions (like review/test modes)
- Full Repo: Set "repoDoc": true to document the entire codebase
- Analyzes codebase using AST tools
- Auto-detects API endpoints (Express, NestJS, Fastify, Koa)
- Generates comprehensive OpenAPI 3.0 specification
- Documents both API routes and generic functions
- Smart merge with existing documentation
- Supports JSON and YAML formats
- Use: testgen doc
Two modes:
1. Changed Files Only (Default) - "repoDoc": false or omitted
- Works like testgen review and testgen test
- Only documents staged/changed functions
- Fast and targeted
- Perfect for incremental updates
- Requires git repository
2. Full Repository - "repoDoc": true
- Documents entire codebase
- Comprehensive documentation generation
- Useful for initial documentation or major updates
- No git requirement
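As a sketch, the documentation-related options from the configuration table above can be combined like this (all values are illustrative, not required):

```json
{
  "docsDir": "docs",
  "docFormat": "json",
  "docTitle": "My API",
  "docVersion": "1.0.0",
  "includeGenericFunctions": true,
  "repoDoc": false
}
```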
What it documents:
- ✅ API Endpoints: All REST API routes with methods, paths, parameters
- ✅ Request/Response Schemas: Inferred from TypeScript types
- ✅ Authentication: Detects and documents auth requirements
- ✅ Error Responses: Documents error cases and status codes
- ✅ Generic Functions: Optional documentation for utility functions
- ✅ Usage Examples: AI-generated examples for each endpoint
Supported Frameworks:
- Express: app.get(), router.post(), route methods
- NestJS: @Controller(), @Get(), @Post() decorators
- Fastify: fastify.route(), route configurations
- Koa: router.get(), middleware patterns
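For instance, a route declared in the usual Express style is the kind of code the endpoint detection described above is meant to pick up (a generic illustration, not code shipped with CodeGuard):

```typescript
import express, { Request, Response } from 'express';

const app = express();

// A plain REST endpoint; documentation mode would report this as GET /users
app.get('/users', (_req: Request, res: Response) => {
  res.json([{ id: '1', name: 'Ada', email: '[email protected]' }]);
});

app.listen(3000);
```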
Example usage:
# Document only changed/staged functions (default)
testgen doc
# Output:
# 📚 Documentation Mode: Generating API documentation
#
# 🔍 Scanning git changes for documentation...
#
# 📝 Found changes in 2 file(s)
#
# 🤖 Generating OpenAPI specification...
#
# ✅ Documentation generated successfully
#
# ============================================================
# 📊 Documentation Summary
# ============================================================
# ✅ API Endpoints documented: 5
# ✅ Generic functions documented: 8
# 📁 Output: docs/openapi.json
# ============================================================
# For full repository documentation, set in codeguard.json:
# {
# "repoDoc": true
# }
Generated OpenAPI spec:
{
"openapi": "3.0.0",
"info": {
"title": "My API",
"version": "1.0.0"
},
"paths": {
"/users": {
"get": {
"summary": "Get all users",
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": { "$ref": "#/components/schemas/User" }
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"User": {
"type": "object",
"properties": {
"id": { "type": "string" },
"name": { "type": "string" },
"email": { "type": "string" }
}
}
}
}
}
Smart merging: When existing documentation is found, CodeGuard intelligently merges:
- ✅ Preserves manually edited descriptions and summaries
- ✅ Updates schemas with latest types from code
- ✅ Adds new endpoints without removing manual changes
- ✅ Maintains custom examples and documentation
- ✅ Tracks generation metadata and timestamps
How It Works
Code Review Process
- Git Diff Analysis: Detects changed files and functions
- AST Analysis: Deep parse of code structure using Babel
- Context Understanding: AI uses tools to analyze:
- Function implementations
- Dependencies and imports
- Type definitions
- Related code context
- Multi-Aspect Review: Analyzes for:
- Code quality and best practices
- Potential bugs and edge cases
- Performance bottlenecks
- Security vulnerabilities
- Structured Report: Generates markdown with:
- Severity-based findings
- Code snippets and fixes
- Positive observations
- Actionable recommendations
Test Generation Process
- AST Analysis: Parses your code using Babel to understand structure
- Dependency Resolution: Analyzes imports and calculates correct paths
- AI Generation: Uses AI to generate comprehensive test cases
- Validation: Checks for completeness, assertions, and coverage
- Execution: Runs tests with Jest to verify correctness
- Iterative Fixing: Automatically fixes common issues like:
- Import path errors
- Missing mocks
- Database initialization errors
- Type mismatches
- Failure Detection: Distinguishes between test bugs and source code bugs
Documentation Generation Process
- File Scanning: Recursively scans all source files in the project
- AST Analysis: Parses each file using Babel to understand structure
- Endpoint Detection: AI identifies API routes across different frameworks:
- Express: app.METHOD(), router.METHOD()
- NestJS: @Controller(), @Get(), @Post(), etc.
- Fastify: fastify.route(), route configurations
- Koa: router.METHOD(), middleware chains
- Schema Inference: Extracts TypeScript types for request/response schemas
- AI Enhancement: AI generates:
- Meaningful descriptions for each endpoint
- Parameter documentation
- Response examples
- Error scenarios
- OpenAPI Generation: Builds complete OpenAPI 3.0 specification
- Smart Merge: Intelligently merges with existing documentation
- File Output: Writes to docs/openapi.json or .yaml
Generated Test Features
The AI generates tests with:
- ✅ Proper imports and type definitions
- ✅ Jest mocks for dependencies
- ✅ Multiple test cases per function:
- Happy path scenarios
- Edge cases (null, undefined, empty arrays)
- Error conditions
- Async behavior testing
- ✅ Clear, descriptive test names
- ✅ Complete implementations (no placeholder comments)
- ✅ Real assertions with expect() statements
Advanced Features
CI/CD Integration
CodeGuard modes are designed for continuous integration workflows:
GitHub Actions - Complete Workflow (Review + Tests):
name: AI Code Review & Test Generation
on:
pull_request:
branches: [ main, develop ]
jobs:
review-and-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0 # Fetch full history for git diff
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install dependencies
run: npm install
- name: Install CodeGuard
run: npm install -g codeguard-testgen
- name: Review code and generate tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: testgen auto
- name: Upload review reports
uses: actions/upload-artifact@v3
with:
name: code-reviews
path: reviews/
- name: Commit generated tests and reviews
run: |
git config --local user.email "[email protected]"
git config --local user.name "GitHub Action"
git add src/tests/ reviews/
git commit -m "🤖 AI code review + tests for changed functions" || echo "No changes"
git push
GitHub Actions - Review Only:
name: AI Code Review
on:
pull_request:
branches: [ main ]
jobs:
code-review:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install CodeGuard
run: npm install -g codeguard-testgen
- name: AI Code Review
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: testgen review
- name: Comment PR with review
uses: actions/github-script@v6
with:
script: |
const fs = require('fs');
const reviews = fs.readdirSync('reviews/');
for (const review of reviews) {
const content = fs.readFileSync(`reviews/${review}`, 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: `## AI Code Review: ${review}\n\n${content}`
});
}
GitLab CI Example:
review-and-test:
stage: quality
script:
- npm install -g codeguard-testgen
- testgen auto # Review + tests
artifacts:
paths:
- reviews/
- src/tests/
only:
- merge_requests
review-only:
stage: quality
script:
- npm install -g codeguard-testgen
- testgen review
artifacts:
reports:
codequality: reviews/
only:
- merge_requests
Pre-commit Hook:
#!/bin/bash
# .git/hooks/pre-commit
# Review code and generate tests for staged changes
testgen auto
# Add generated tests and reviews to commit
git add src/tests/ reviews/
Pre-push Hook (Review Only):
#!/bin/bash
# .git/hooks/pre-push
# Quick code review before pushing
testgen review
# Show review summary
echo "📊 Code Review Complete - Check reviews/ directory"Codebase Indexing
On first run, you'll be prompted to enable codebase indexing:
Enable codebase indexing? (y/n)
Benefits:
- 100x+ faster analysis on subsequent runs
- Instant dependency lookups
- Cached AST parsing
- Automatic update detection
The index is stored in .codeguard-cache/ and automatically updates when files change.
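If you don't want to commit the cache, exclude it in .gitignore (a common convention rather than a CodeGuard requirement):

```gitignore
# Local CodeGuard index cache
.codeguard-cache/
```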
Legitimate Failure Detection
The tool distinguishes between:
Fixable Test Issues (automatically fixed):
- Wrong import paths
- Missing mocks
- Incorrect assertions
- TypeScript errors
Legitimate Source Code Bugs (reported, not fixed):
- Function returns wrong type
- Missing null checks
- Logic errors
- Unhandled edge cases
When legitimate bugs are found, they're reported with details for you to fix in the source code.
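To make the distinction concrete, here is a hypothetical source function with a genuine bug (no guard for input without an '@'); a generated test exercising that edge case would fail legitimately, and the failure would be reported against the source rather than patched in the test:

```typescript
// Source bug: split('@')[1] is undefined when the input has no '@',
// so toLowerCase() throws - an unhandled edge case in the source code.
export const getEmailDomain = (email: string): string => {
  return email.split('@')[1].toLowerCase();
};
```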
Examples
Example 1: Complete Workflow (Auto Mode)
Step 1: Make changes to a function
// src/services/user.service.ts
export const createUser = async (name: string, email: string) => {
// Added email validation
if (!email.includes('@')) {
throw new Error('Invalid email');
}
return await db.users.create({ name, email });
};
export const deleteUser = async (id: string) => {
return await db.users.delete(id);
};
Step 2: Stage changes and run auto mode
git add src/services/user.service.ts
testgen auto
Output:
🔍 Scanning git changes for review...
📝 Found changes in 1 file(s)
🔄 Reviewing: src/services/user.service.ts
📦 Changed functions: createUser
✅ Review completed
============================================================
🔍 Scanning git changes for testing...
📝 Found changes in 1 file(s)
🔄 Processing: src/services/user.service.ts
📦 Changed functions: createUser
✅ Tests generated successfully
Results:
- reviews/user.service.review.md created with code quality analysis
- Only createUser gets new tests, deleteUser tests remain unchanged!
Review excerpt:
### 🟡 Medium Priority Issues
#### [Code Quality] Weak Email Validation
**Function**: `createUser`
**Issue**: Email validation only checks for '@' symbol, not comprehensive
**Recommended Fix**:
```typescript
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
if (!emailRegex.test(email)) {
throw new Error('Invalid email format');
}
```
### Example 2: Testing a User Service
```typescript
// src/services/user.service.ts
export class UserService {
async getUser(id: string): Promise<User> {
return await this.db.findUser(id);
}
}
```
Generated test:
// src/tests/services/user.service.test.ts
import { UserService } from '../../services/user.service';
jest.mock('../../database');
describe('UserService', () => {
describe('getUser', () => {
test('should return user when id exists', async () => {
const mockUser = { id: '123', name: 'John' };
const service = new UserService();
service.db.findUser = jest.fn().mockResolvedValue(mockUser);
const result = await service.getUser('123');
expect(result).toEqual(mockUser);
expect(service.db.findUser).toHaveBeenCalledWith('123');
});
test('should handle null id', async () => {
const service = new UserService();
await expect(service.getUser(null)).rejects.toThrow();
});
});
});
Troubleshooting
Command Mode Issues
"Not a git repository"
The auto, test, and review commands require git to detect changes. Initialize git in your project:
git init"No changes detected in source files"
This means:
- No staged or unstaged changes exist
- Only test files were modified (test files are excluded)
- Changes are in non-source files
Check your changes:
git status
git diff
Review/Test mode not working
Make sure you're using the correct command:
testgen auto # Review + tests
testgen review # Only review
testgen test     # Only tests
"No exported functions changed"
Possible causes:
- AI model misconfigured: Check that your codeguard.json has a valid model name, e.g. { "models": { "openai": "gpt-4o-mini" } } ✅ - not an invalid name like "gpt-5-mini" ❌
- Only internal functions changed: Auto mode only generates tests for exported functions
- File has no exported functions: Make sure functions are exported:
export const myFunction = () => { }   // ✅ Will be tested
const internalFunc = () => { }        // ❌ Will be skipped
Debugging Auto Mode
Enable detailed logging by checking the console output:
testgen auto
Look for:
- 📦 Found X exported function(s): ... - Shows detected functions
- 🤖 AI response: ... - Shows what the AI detected
- 📊 AST Analysis result: ... - Shows file parsing results
"Configuration Error: codeguard.json not found"
Create a codeguard.json file in your project root. See Configuration section above.
"API key not configured"
Ensure your codeguard.json has the correct API key for your selected provider:
{
"aiProvider": "claude",
"apiKeys": {
"claude": "sk-ant-..."
}
}
Tests fail with import errors
The tool automatically detects and fixes import path errors. If issues persist:
- Check that all dependencies are installed
- Verify your project structure matches expected paths
- Ensure TypeScript is configured correctly
"Missing required packages"
Install Babel dependencies:
npm install --save-dev @babel/parser @babel/traverse
Programmatic Usage
You can also use CodeGuard as a library:
import { generateTests, analyzeFileAST } from 'codeguard-testgen';
// Generate tests for a file
await generateTests('src/services/user.service.ts');
// Analyze a file's AST
const analysis = analyzeFileAST('src/utils/helpers.ts');
console.log(analysis.functions);
Project Structure
After installation, your project will have:
your-project/
├── codeguard.json # Configuration file
├── src/
│ ├── services/
│ │ └── user.service.ts
│ └── tests/ # Generated tests
│ └── services/
│ └── user.service.test.ts
├── reviews/ # AI code reviews
│ └── user.service.review.md
└── .codeguard-cache/        # Optional index cache
Requirements
- Node.js >= 16.0.0
- Jest (for running generated tests)
- TypeScript (for TypeScript projects)
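If your project doesn't already run Jest with TypeScript, a typical setup looks like the sketch below; this reflects standard ts-jest conventions and is an assumption about your project, not something CodeGuard installs for you:

```bash
npm install --save-dev jest ts-jest @types/jest typescript
```

```js
// jest.config.js - minimal ts-jest setup; point testMatch at your configured testDir
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/src/tests/**/*.test.ts'],
};
```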
License
MIT
Contributing
Contributions welcome! Please open an issue or submit a pull request.
Support
For issues, questions, or feature requests, please open an issue on GitHub.
