# CyGen - AI-Powered Cypress Test Generator

*[Under Construction]*
CyGen is an intelligent test generator that automatically creates Cypress tests for your JavaScript/TypeScript API endpoints. It uses AI agents to generate comprehensive test cases.
## Features
- Automatic test generation for JavaScript/TypeScript API endpoints
- AI-powered test case generation using agents
- Comprehensive test scenarios, including:
  - Happy path tests
  - Negative test cases
  - Edge cases
  - Error handling
- Real-time file watching
- Customizable output directory
- Support for modern JavaScript features
## Installation

### Global Installation (Recommended)

```bash
# Install globally
npm install -g cygen

# Verify installation
cygen --help
```

### Local Installation

```bash
# Install in your project
npm install cygen

# Run using npx
npx cygen --help
```

## Usage
### Command Line Interface
CyGen provides two main commands:
- **Watch Mode** - Continuously watch for file changes:

  ```bash
  # Basic usage with Ollama
  cygen watch --use-ai

  # With OpenAI
  cygen watch --use-ai --llm openai --model gpt-4 --api-key YOUR_OPENAI_KEY

  # With web search enabled
  cygen watch --use-ai --web-search --llm ollama --model llama3.1
  ```

- **Test Mode** - Generate and optionally run tests for specific files:

  ```bash
  # Generate tests with OpenAI
  cygen test --files ./src/api/users.js --use-ai --llm openai --model gpt-3.5-turbo --api-key YOUR_OPENAI_KEY

  # With custom Ollama server
  cygen test --files ./src/api/*.js --use-ai --llm ollama --model mistral --base-url http://localhost:11434

  # Generate and run tests
  cygen test --files ./src/api/*.js --use-ai --web-search --run
  ```

### CLI Options
#### Watch Command Options
| Option | Alias | Description | Default |
|--------|-------|-------------|---------|
| --watch-dir | -w | Directory to watch for file changes | Current directory |
| --output-dir | -o | Directory to output generated tests | cypress/integration/generated |
| --use-ai | | Enable AI-powered test generation | false |
| --web-search | | Enable web search for test generation | false |
| --llm | | LLM provider (ollama or openai) | ollama |
| --model | | Model name for the selected LLM | llama3.1 |
| --api-key | | API key for the LLM provider | |
| --base-url | | Base URL for the LLM service | http://localhost:11434 |
#### Test Command Options
| Option | Alias | Description | Default |
|--------|-------|-------------|---------|
| --files | -f | Files to generate tests for | (required) |
| --output-dir | -o | Directory to output generated tests | cypress/integration/generated |
| --use-ai | | Enable AI-powered test generation | false |
| --web-search | | Enable web search for test generation | false |
| --llm | | LLM provider (ollama or openai) | ollama |
| --model | | Model name for the selected LLM | llama3.1 |
| --api-key | | API key for the LLM provider | |
| --base-url | | Base URL for the LLM service | http://localhost:11434 |
| --run | | Run the generated tests with Cypress | false |
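The test-command flags above can also be assembled programmatically, for example when driving CyGen from a script. The helper below is a hypothetical sketch (it is not part of CyGen) that builds a `cygen test` command line from the documented options:

```javascript
// Hypothetical helper (not part of CyGen): builds a `cygen test`
// command string from the options documented in the table above.
function buildTestCommand({ files, outputDir, useAI, webSearch, llm, model, apiKey, baseUrl, run }) {
  const args = ['cygen', 'test', '--files', files]; // --files is required
  if (outputDir) args.push('--output-dir', outputDir);
  if (useAI) args.push('--use-ai');
  if (webSearch) args.push('--web-search');
  if (llm) args.push('--llm', llm);
  if (model) args.push('--model', model);
  if (apiKey) args.push('--api-key', apiKey);
  if (baseUrl) args.push('--base-url', baseUrl);
  if (run) args.push('--run');
  return args.join(' ');
}

const cmd = buildTestCommand({
  files: './src/api/users.js',
  useAI: true,
  llm: 'ollama',
  model: 'llama3.1',
  run: true
});
// cmd === 'cygen test --files ./src/api/users.js --use-ai --llm ollama --model llama3.1 --run'
```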
### Programmatic Usage
You can also use CyGen programmatically in your code:
```javascript
const { CyGen } = require('cygen');

// Using OpenAI
const openaiCygen = new CyGen({
  useAI: true,
  aiOptions: {
    llm: 'openai',
    model: 'gpt-4',
    apiKey: 'your-openai-key',
    enableWebSearch: true
  }
});

// Using Ollama
const ollamaCygen = new CyGen({
  useAI: true,
  aiOptions: {
    llm: 'ollama',
    model: 'mistral',
    baseUrl: 'http://localhost:11434'
  }
});

// Generate tests (await requires an async context)
await openaiCygen.generateTestsForFile('./src/api/users.js');
```

## Configuration
The following options are available:

- `watchDir`: Directory to watch for changes (default: current working directory)
- `outputDir`: Directory where test files will be generated (default: `./cypress/integration/generated`)
- `useAI`: Enable AI-powered test generation (default: `false`)
- `aiOptions`: AI configuration options
  - `llm`: LLM provider (`ollama` or `openai`)
  - `model`: Model name for the selected LLM
  - `apiKey`: API key for the LLM provider
  - `baseUrl`: Base URL for the LLM service
  - `enableWebSearch`: Enable web search for test generation
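Putting these options together, a complete configuration object might look like the following sketch. All values shown are illustrative, not required defaults:

```javascript
// Illustrative CyGen options object built from the documented fields.
// Every value here is an example; only the shape follows the docs above.
const options = {
  watchDir: process.cwd(),
  outputDir: './cypress/integration/generated',
  useAI: true,
  aiOptions: {
    llm: 'ollama',                        // or 'openai'
    model: 'llama3.1',
    baseUrl: 'http://localhost:11434',    // only meaningful for Ollama
    enableWebSearch: false
  }
};
```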
## Supported File Types
- JavaScript/TypeScript API files
- Swagger/OpenAPI specification files (JSON/YAML)
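For the Swagger/OpenAPI case, the input is a standard spec document. The sketch below shows a minimal OpenAPI 3.0 document of the kind such a tool could consume, written as a JavaScript object for brevity (it would normally live in a `.json` or `.yaml` file; the `/users` endpoint and its schema are hypothetical examples):

```javascript
// A minimal OpenAPI 3.0 document; the endpoint and schema are
// hypothetical examples, not part of CyGen itself.
const spec = {
  openapi: '3.0.0',
  info: { title: 'Example API', version: '1.0.0' },
  paths: {
    '/users': {
      get: {
        summary: 'List users',
        responses: {
          '200': {
            description: 'A JSON array of users',
            content: {
              'application/json': {
                schema: { type: 'array', items: { type: 'object' } }
              }
            }
          },
          '404': { description: 'Not found' }
        }
      }
    }
  }
};
```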
## Generated Tests
Tests are generated with comprehensive coverage including:
### Happy Path Tests
- Successful API calls
- Valid request/response handling
- Expected data validation
### Negative Tests
- Invalid input handling
- Error response validation
- Authentication/Authorization failures
- Rate limiting scenarios
### Edge Cases
- Boundary value testing
- Empty/null input handling
- Large payload handling
- Timeout scenarios
Example generated test:
```javascript
describe('GET /api/users', () => {
  it('should return a list of users', () => {
    cy.request({
      method: 'GET',
      url: '/api/users',
      failOnStatusCode: false
    }).then((response) => {
      expect(response.status).to.equal(200);
      expect(response.body).to.be.an('array');
      expect(response.body[0]).to.have.property('id');
      expect(response.body[0]).to.have.property('name');
    });
  });

  it('should handle invalid requests', () => {
    cy.request({
      method: 'GET',
      url: '/api/users/invalid',
      failOnStatusCode: false
    }).then((response) => {
      expect(response.status).to.equal(404);
      expect(response.body).to.have.property('error');
    });
  });
});
```

## Supported LLMs
### Ollama

- Default models: `llama3.1`, `mistral`, `llama2`
- Requires a local Ollama server (default: `http://localhost:11434`)

### OpenAI

- Models: `gpt-4`, `gpt-3.5-turbo`
- Requires an API key
