QAAI - AI-Assisted API Test Automation
Automatically generate comprehensive API tests from your OpenAPI specifications using real AI (GPT-4) or deterministic rules.
🎯 What Problem Does This Solve?
The Problem:
Writing API tests is time-consuming, repetitive, and error-prone. Developers often:
- Spend hours writing boilerplate test code for each API endpoint
- Miss edge cases and error scenarios
- Struggle to keep tests updated when APIs change
- Need to understand OpenAPI specs and testing frameworks deeply
The Solution:
QAAI reads your OpenAPI specification (the document that describes your API) and automatically generates complete, production-ready test files. You go from an API spec to comprehensive tests in seconds, not hours.
Example:
- Before QAAI: Write 50+ lines of test code for each endpoint manually
- With QAAI: Run `npx qaai generate` → Get complete test files automatically
✨ Key Features
- 🤖 Real AI-Powered: Uses OpenAI GPT-4 to understand your API and generate intelligent, context-aware tests
- 🚀 Instant Test Generation: Convert OpenAPI specs to Jest tests in seconds
- 🎯 Comprehensive Coverage: Automatically generates happy path, error cases, and edge cases
- 💰 Flexible Options:
- Use AI mode (costs ~$0.21 per 20 endpoints)
- Use free deterministic mode (no API key needed)
- 📦 Zero Configuration: Works out of the box with sensible defaults
- ✅ Production-Ready: Includes retry logic, validation, and error handling
- 🔧 Framework Support: Generates Jest tests (more frameworks coming soon)
🚀 Quick Start (5 Minutes)
Prerequisites
You need:
- Node.js installed (version 16 or higher) - download from nodejs.org
- A Node.js project with a package.json file
- An OpenAPI specification file OR a Swagger URL
Step 1: Install QAAI
In your project directory, run:
npm install --save-dev qaai

What this does: Installs QAAI as a development dependency in your project.
Step 2: Get an OpenAI API Key (Optional - free mode works without one)
QAAI offers two modes:
- AI Mode (Recommended): Uses GPT-4 for intelligent tests (~$0.21 per 20 endpoints)
- Free Mode: Uses rule-based generation (no cost, no API key needed)
To use AI mode, get an API key:
- Go to platform.openai.com/api-keys
- Sign up or log in
- Create a new API key
- Copy the key (it starts with sk-...)
Option A: Add to environment variable (More Secure)
export OPENAI_API_KEY=sk-your-actual-key-here

Option B: Add to config file (Easier)
Add it to qaai.config.json later in step 3.
No API key? QAAI will automatically use free deterministic generation!
Step 3: Initialize Configuration
npx qaai init

What this does: Creates a qaai.config.json file in your project with default settings.
The config file looks like this:
{
"outputDir": "tests/generated/api",
"testFramework": "jest",
"baseUrlEnvVar": "QA_BASE_URL",
"llm": {
"provider": "openai",
"model": "gpt-4-turbo-preview",
"temperature": 0.3
}
}

Adding API Key to Config (Optional):
If you prefer, add your API key directly to config (less secure but easier):
{
"outputDir": "tests/generated/api",
"testFramework": "jest",
"baseUrlEnvVar": "QA_BASE_URL",
"llm": {
"provider": "openai",
"apiKey": "sk-your-key-here",
"model": "gpt-4-turbo-preview",
"temperature": 0.3
}
}

⚠️ Security Warning: If you add apiKey to config, DO NOT commit it to git!
Step 4: Add Your OpenAPI Specification
You have three options:
Option A: Local file in project root
Put your OpenAPI file as openapi.yaml in your project root
Option B: Local file in custom location
Tell QAAI where your file is by editing qaai.config.json:
{
"openapiPath": "path/to/your-api-spec.yaml",
"outputDir": "tests/generated/api",
"testFramework": "jest"
}

Option C: Swagger URL (NEW!)
Use your Swagger endpoint directly:
{
"openapiPath": "https://api.example.com/swagger.json",
"outputDir": "tests/generated/api",
"testFramework": "jest"
}

Common Swagger URLs:
- https://api.example.com/swagger.json
- https://api.example.com/v2/api-docs
- https://api.example.com/swagger/v1/swagger.json
Step 5: Generate Tests
npx qaai generate

What happens:
- ✅ QAAI finds and reads your OpenAPI specification (local file or Swagger URL)
- 🤖 If you set an API key, GPT-4 analyzes your API and generates intelligent tests
- 💡 If no API key is set, QAAI uses free deterministic generation
- 📝 Test files are written to tests/generated/api/
- ✨ You now have complete test coverage!
Example output (with AI):
📄 Discovering OpenAPI specification...
Found: openapi.yaml
🔧 Parsing and normalizing OpenAPI spec...
Endpoints found: 5
🤖 Using LLM-powered test generation
✨ Generated 15 test cases using LLM
📝 Writing test files to tests/generated/api/
✅ Generated 5 test files:
- tests/generated/api/get__users.test.ts
- tests/generated/api/post__users.test.ts
- tests/generated/api/get__users__id_.test.ts
- tests/generated/api/put__users__id_.test.ts
- tests/generated/api/delete__users__id_.test.ts

Example output (without AI - free mode):
📄 Discovering OpenAPI specification...
Found: openapi.yaml
🔧 Parsing and normalizing OpenAPI spec...
Endpoints found: 5
ℹ️ Using deterministic test generation (no LLM)
✨ Generated 6 test cases
📝 Writing test files to tests/generated/api/
✅ Generated 5 test files

Using Swagger URL:
📡 Fetching OpenAPI spec from URL: https://api.example.com/swagger.json
✅ Downloaded OpenAPI spec to .qaai-temp/openapi-remote.json
🔧 Parsing and normalizing OpenAPI spec...
Endpoints found: 20
🤖 Using LLM-powered test generation
✨ Generated 45 test cases using LLM

Step 6: Run Your Tests
QA_BASE_URL=http://localhost:3000 npx qaai run

What this does: Runs all generated tests against your API at the specified URL.
Replace http://localhost:3000 with your actual API URL.
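If you also want plain npm test to pick up the generated files (as in the Complete Example below), make sure your Jest configuration includes the output directory. Here is a minimal jest.config.ts sketch; the ts-jest preset and the glob are assumptions about your project, not something QAAI requires:

```ts
// Minimal Jest config sketch for picking up QAAI's generated TypeScript tests.
// Assumes ts-jest is installed; adjust testMatch if you changed outputDir in qaai.config.json.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',        // compiles the generated .test.ts files (assumption about your setup)
  testEnvironment: 'node',  // API tests run against a server, so no browser-like environment is needed
  testMatch: ['<rootDir>/tests/generated/api/**/*.test.ts'],
};

export default config;
```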
📖 Complete Example
Let's say you have this simple OpenAPI specification:
File: openapi.yaml
openapi: 3.0.0
info:
title: Users API
version: 1.0.0
paths:
/users:
get:
summary: Get all users
responses:
'200':
description: Success
content:
application/json:
schema:
type: array
items:
type: object
'401':
description: Unauthorized
/users/{id}:
get:
summary: Get user by ID
parameters:
- name: id
in: path
required: true
schema:
type: integer
responses:
'200':
description: Success
'404':
          description: User not found

Run QAAI:
npx qaai generate

QAAI generates this test file:
File: tests/generated/api/get__users.test.ts
import axios from 'axios';
const baseUrl = process.env.QA_BASE_URL || 'http://localhost:3000';
describe('GET /users', () => {
// Test 1: Happy path
it('should return list of users successfully', async () => {
const response = await axios.get(
`${baseUrl}/users`,
{ validateStatus: () => true }
);
expect(response.status).toBe(200);
expect(Array.isArray(response.data)).toBe(true);
});
// Test 2: Unauthorized error
it('should return 401 when not authenticated', async () => {
const response = await axios.get(
`${baseUrl}/users`,
{
headers: {}, // No auth headers
validateStatus: () => true
}
);
expect(response.status).toBe(401);
});
// Test 3: Edge case - empty result
it('should handle empty user list', async () => {
const response = await axios.get(
`${baseUrl}/users?limit=0`,
{ validateStatus: () => true }
);
expect(response.status).toBe(200);
expect(response.data).toEqual([]);
});
});

You can now run these tests:
QA_BASE_URL=http://localhost:3000 npm test

🎓 Understanding the Commands
qaai init
What it does: Creates a configuration file
When to use: First time setting up QAAI in a project
npx qaai init

Options:
- --force or -f: Overwrite an existing config file
Example:
npx qaai init --force

qaai generate
What it does: Reads your OpenAPI spec and generates test files
When to use: After creating/updating your API specification
npx qaai generate

Options:
- --config <path> or -c <path>: Use a custom config file location
Example:
npx qaai generate --config ./custom-config.json

qaai run
What it does: Executes all generated tests
When to use: After generating tests or when you want to test your API
npx qaai run

You must set the API URL:
QA_BASE_URL=http://localhost:3000 npx qaai run

With authentication:
QA_BASE_URL=https://api.example.com \
QA_AUTH_TOKEN=your-bearer-token \
npx qaai run

⚙️ Configuration Guide
Config File (qaai.config.json)
This file controls how QAAI works. Here's what each setting means:
{
"openapiPath": "openapi.yaml",
"outputDir": "tests/generated/api",
"testFramework": "jest",
"baseUrlEnvVar": "QA_BASE_URL",
"authHeaderEnvVar": "QA_AUTH_TOKEN",
"llm": {
"provider": "openai",
"model": "gpt-4-turbo-preview",
"temperature": 0.3,
"maxTokens": 2000,
"timeoutMs": 30000
}
}

Configuration Explained
| Setting | What It Does | Example |
|---------|-------------|---------|
| openapiPath | Local file path OR Swagger URL | "openapi.yaml" OR "https://api.example.com/swagger.json" |
| outputDir | Where to save generated test files | "tests/generated/api" |
| testFramework | Which test framework to use | "jest" (only option currently) |
| baseUrlEnvVar | Environment variable name for API URL | "QA_BASE_URL" |
| authHeaderEnvVar | Environment variable for auth token | "QA_AUTH_TOKEN" |
| llm.apiKey | Your OpenAI API key | "sk-..." (or use env var) |
| llm.provider | AI service to use | "openai" |
| llm.model | Which AI model to use | "gpt-4-turbo-preview" or "gpt-3.5-turbo" |
| llm.temperature | How creative the AI should be (0-1) | 0.3 (lower = more consistent) |
| llm.maxTokens | Maximum response length from AI | 2000 |
| llm.timeoutMs | How long to wait for AI response | 30000 (30 seconds) |
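To make baseUrlEnvVar and authHeaderEnvVar concrete, here is a rough sketch of how a generated test reads them at runtime. The exact code QAAI emits may differ, and the Bearer header format is an assumption about your API's auth scheme:

```ts
// Sketch only: how QA_BASE_URL and QA_AUTH_TOKEN could be consumed by a generated Jest test.
import axios from 'axios';

const baseUrl = process.env.QA_BASE_URL || 'http://localhost:3000';
const authToken = process.env.QA_AUTH_TOKEN; // optional bearer token

it('should list users when authenticated', async () => {
  const response = await axios.get(`${baseUrl}/users`, {
    headers: authToken ? { Authorization: `Bearer ${authToken}` } : {},
    validateStatus: () => true, // let the test assert on non-2xx codes instead of throwing
  });
  expect(response.status).toBe(200);
});
```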
Environment Variables
These are set in your terminal or CI/CD system:
| Variable | Required? | What It's For |
|----------|-----------|---------------|
| OPENAI_API_KEY | Optional | Your OpenAI API key (can also add to config). If not set, uses free deterministic mode |
| QA_BASE_URL | Yes (when running tests) | The URL where your API is running |
| QA_AUTH_TOKEN | No | Bearer token if your API needs authentication |
Priority: Config file apiKey > Environment variable OPENAI_API_KEY
Security Best Practices:
- ✅ Recommended: Use environment variable for API key
- ⚠️ Less Secure: Add to config file (convenient but don't commit to git!)
- 🔒 Never commit API keys to version control
- 🔐 Use a .env file locally and secrets in CI/CD
Example: Setting environment variables
# On Mac/Linux
export OPENAI_API_KEY=sk-your-key-here
export QA_BASE_URL=http://localhost:3000
export QA_AUTH_TOKEN=your-bearer-token
# On Windows (Command Prompt)
set OPENAI_API_KEY=sk-your-key-here
set QA_BASE_URL=http://localhost:3000
# On Windows (PowerShell)
$env:OPENAI_API_KEY="sk-your-key-here"
$env:QA_BASE_URL="http://localhost:3000"

🤖 AI vs Deterministic Mode
QAAI offers two modes for generating tests:
AI Mode (Recommended)
How it works: Uses OpenAI GPT-4 to understand your API and generate intelligent tests
Advantages:
- ✅ Understands API context and business logic
- ✅ Generates more realistic test data
- ✅ Better edge case coverage
- ✅ Creates more meaningful test descriptions
Requirements:
- OpenAI API key (environment variable OR config file)
- Internet connection
- Small cost (~$0.21 per 20 endpoints with GPT-4)
Two ways to provide API key:
Option 1: Environment Variable (Recommended - More Secure)
export OPENAI_API_KEY=sk-your-key-here
npx qaai generate  # Uses AI automatically

Option 2: Config File (Easier but Less Secure)
{
"llm": {
"apiKey": "sk-your-key-here",
"provider": "openai",
"model": "gpt-4-turbo-preview"
}
}

⚠️ Important: If you add the API key to the config file, add qaai.config.json to .gitignore!
Deterministic Mode (Free)
How it works: Uses rule-based logic to generate tests
Advantages:
- ✅ Completely free
- ✅ Works offline
- ✅ No API key needed
- ✅ Faster generation
Limitations:
- ⚠️ Basic test scenarios only
- ⚠️ Generic test data
- ⚠️ May miss complex edge cases
Example:
# Don't set OPENAI_API_KEY
npx qaai generate  # Uses deterministic mode

Output comparison:
AI Mode:
ℹ️ Using LLM-powered test generation
✨ Generated 15 test cases using LLM
Deterministic Mode:
ℹ️ Using deterministic test generation (no LLM)
✨ Generated 6 test cases

💰 Cost Information
OpenAI Costs
QAAI uses OpenAI's API, which charges based on usage:
| Model | Input Cost | Output Cost | Typical Cost per 20 Endpoints |
|-------|-----------|-------------|-------------------------------|
| GPT-4 Turbo | $0.01/1K tokens | $0.03/1K tokens | ~$0.21 |
| GPT-3.5 Turbo | $0.0005/1K tokens | $0.0015/1K tokens | ~$0.02 |
To use cheaper GPT-3.5:
{
"llm": {
"provider": "openai",
"model": "gpt-3.5-turbo"
}
}

Free alternative: Use deterministic mode (no API key = $0.00)
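As a rough sanity check on the table above, you can estimate a run's cost from the per-token rates. The token counts per endpoint in this sketch are illustrative assumptions, not measurements:

```ts
// Back-of-the-envelope cost estimate from the published per-1K-token rates above.
// Token counts per endpoint are assumptions for illustration only.
const RATES_PER_1K = {
  'gpt-4-turbo-preview': { input: 0.01, output: 0.03 },
  'gpt-3.5-turbo': { input: 0.0005, output: 0.0015 },
} as const;

function estimateCostUSD(
  model: keyof typeof RATES_PER_1K,
  endpoints: number,
  inputTokensPerEndpoint = 600,   // assumed prompt size per endpoint
  outputTokensPerEndpoint = 150,  // assumed generated-test size per endpoint
): number {
  const rate = RATES_PER_1K[model];
  const inputCost = (endpoints * inputTokensPerEndpoint * rate.input) / 1000;
  const outputCost = (endpoints * outputTokensPerEndpoint * rate.output) / 1000;
  return inputCost + outputCost;
}

console.log(estimateCostUSD('gpt-4-turbo-preview', 20).toFixed(2)); // ≈ 0.21 with these assumptions
```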
📚 Test Generation Explained
QAAI generates three types of tests for each endpoint:
1. 🎯 Happy Path Tests
What: Tests that everything works correctly
Example: User sends valid data → API returns success
it('should create user successfully with valid data', async () => {
const response = await axios.post(`${baseUrl}/users`, {
name: 'John Doe',
email: '[email protected]'
});
expect(response.status).toBe(201);
expect(response.data).toHaveProperty('id');
});

2. ❌ Error Tests
What: Tests how API handles errors
Example: User sends invalid data → API returns error
it('should return 400 when email is invalid', async () => {
const response = await axios.post(`${baseUrl}/users`, {
name: 'John Doe',
email: 'not-an-email'
}, { validateStatus: () => true });
expect(response.status).toBe(400);
expect(response.data.error).toContain('email');
});

3. 🔍 Edge Case Tests
What: Tests unusual but valid scenarios
Example: Empty list, boundary values, pagination limits
it('should return empty array when no users exist', async () => {
const response = await axios.get(`${baseUrl}/users?limit=0`);
expect(response.status).toBe(200);
expect(response.data).toEqual([]);
});

🔗 CI/CD Integration (GitHub Actions)
Add QAAI to your continuous integration pipeline:
Create file: .github/workflows/api-tests.yml
name: API Tests
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
# 1. Get the code
- name: Checkout code
uses: actions/checkout@v4
# 2. Set up Node.js
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
# 3. Install dependencies
- name: Install dependencies
run: npm ci
# 4. Generate tests with AI
- name: Generate API tests
run: npx qaai generate
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
# 5. Start your API (if needed)
- name: Start API server
run: npm run start:api &
# 6. Wait for API to be ready
- name: Wait for API
run: npx wait-on http://localhost:3000/health
# 7. Run the tests
- name: Run API tests
run: npx qaai run
env:
          QA_BASE_URL: http://localhost:3000

Set up GitHub Secrets:
- Go to your repository on GitHub
- Click Settings → Secrets and variables → Actions
- Click New repository secret
- Add OPENAI_API_KEY with your OpenAI key
🏗️ Project Structure
After running QAAI, your project will look like this:
your-project/
├── node_modules/
├── tests/
│ └── generated/
│ └── api/ # ← QAAI generates files here
│ ├── get__users.test.ts
│ ├── post__users.test.ts
│ ├── get__users__id_.test.ts
│ ├── put__users__id_.test.ts
│ └── delete__users__id_.test.ts
├── openapi.yaml # ← Your API specification
├── qaai.config.json # ← QAAI configuration
├── package.json
└── .env                  # ← Optional: Store OPENAI_API_KEY here

🎓 Common Use Cases
Use Case 1: New API Development
Scenario: You're building a new API and want automated tests
# 1. Write your OpenAPI spec
# 2. Generate tests
npx qaai generate
# 3. Run tests as you develop
QA_BASE_URL=http://localhost:3000 npx qaai run

Use Case 2: Existing API (Add Tests)
Scenario: You have an existing API without tests
# 1. Create OpenAPI spec from your API
# 2. Install QAAI
npm install --save-dev qaai
# 3. Generate tests
npx qaai init
npx qaai generate
# 4. Run against production
QA_BASE_URL=https://api.yoursite.com npx qaai run

Use Case 3: API Changes (Regression Testing)
Scenario: You updated your API and want to ensure nothing broke
# 1. Update your OpenAPI spec
# 2. Regenerate tests
npx qaai generate
# 3. Run tests to verify changes
QA_BASE_URL=http://localhost:3000 npx qaai run

Use Case 4: Multiple Environments
Scenario: Test the same API in dev, staging, and production
# Development
QA_BASE_URL=http://localhost:3000 npx qaai run
# Staging
QA_BASE_URL=https://staging-api.example.com npx qaai run
# Production (read-only tests only!)
QA_BASE_URL=https://api.example.com npx qaai run

Use Case 5: Using Swagger URL (NEW!)
Scenario: Your API has a Swagger endpoint and you don't want to download the spec file
{
"openapiPath": "https://petstore.swagger.io/v2/swagger.json",
"outputDir": "tests/generated/api",
"testFramework": "jest"
}

npx qaai generate  # Automatically fetches from Swagger URL

Common Swagger URL patterns:
- Swagger 2.0: https://api.example.com/swagger.json
- Swagger UI: https://api.example.com/swagger/v1/swagger.json
- Spring Boot: https://api.example.com/v3/api-docs
- NestJS: https://api.example.com/api-json
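If you are not sure which of these URLs your API actually serves, a quick check is to fetch the candidate and confirm it parses as an OpenAPI/Swagger document. A small sketch using Node 18's built-in fetch (the URL is a placeholder):

```ts
// Sanity-check a candidate Swagger/OpenAPI URL (requires Node 18+ for the global fetch).
const candidate = 'https://api.example.com/swagger.json'; // placeholder: replace with your URL

async function checkSpecUrl(url: string): Promise<void> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Spec URL responded with HTTP ${response.status}`);
  }
  const spec = (await response.json()) as { openapi?: string; swagger?: string };
  // OpenAPI 3.x documents carry an "openapi" field; Swagger 2.0 documents carry "swagger".
  console.log('Detected spec version:', spec.openapi ?? spec.swagger ?? 'unknown');
}

checkSpecUrl(candidate).catch((err) => console.error('Not a usable spec URL:', err.message));
```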
🔧 Troubleshooting
Problem: "Cannot find OpenAPI specification"
Solution:
- Make sure openapi.yaml exists in your project root, OR
- Set openapiPath in qaai.config.json to the correct location or Swagger URL
{
"openapiPath": "./docs/my-api-spec.yaml"
}

OR use Swagger URL:
{
"openapiPath": "https://api.example.com/swagger.json"
}

Problem: "Using deterministic test generation (no LLM)"
Solution: This means QAAI is using free mode. This is NORMAL if you don't have an API key. To enable AI:
Option 1: Add to environment variable
# Check if you set the API key
echo $OPENAI_API_KEY
# If empty, set it:
export OPENAI_API_KEY=sk-your-key-here
# Then generate again
npx qaai generate

Option 2: Add to config file
{
"llm": {
"apiKey": "sk-your-key-here"
}
}

OR just continue with free mode - it works fine for basic testing!
Problem: Tests fail with "connect ECONNREFUSED"
Solution: Your API isn't running. Make sure:
- Your API is started: npm run start (or however you start it)
- It's running on the URL you specified in QA_BASE_URL
- The URL is correct (check the port number)
# Check if API is running
curl http://localhost:3000
# If not, start it first
npm run start
# Then in another terminal:
QA_BASE_URL=http://localhost:3000 npx qaai run

Problem: "401 Unauthorized" in all tests
Solution: Your API needs authentication:
# Set auth token
export QA_AUTH_TOKEN=your-bearer-token
# Or set in config
# qaai.config.json:
{
"authHeaderEnvVar": "QA_AUTH_TOKEN"
}

Problem: Too many tests fail
Solution:
- Make sure your API is actually working: Test manually with Postman/curl
- Check if OpenAPI spec matches your actual API
- Look at the generated test files - they might need manual adjustments
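For example, a generated POST test often needs payload values that satisfy your real validation rules; a small hand edit like the following is normal. The field names and values here are illustrative, not what QAAI necessarily generates:

```ts
// Typical manual tweak: replace generic payload values with data your API actually accepts.
import axios from 'axios';

const baseUrl = process.env.QA_BASE_URL || 'http://localhost:3000';

it('should create a user successfully with valid data', async () => {
  const response = await axios.post(
    `${baseUrl}/users`,
    {
      name: 'Jane Roe',
      email: `jane.roe+${Date.now()}@example.com`, // unique email avoids duplicate-user failures
    },
    { validateStatus: () => true }
  );
  expect([200, 201]).toContain(response.status); // accept whichever success code your API returns
});
```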
📖 Additional Resources
- LLM Integration Guide - Deep dive into AI features
- Example OpenAPI Specs - Sample specs to try
- Contributing Guide - Help improve QAAI
🗺️ Roadmap
Current Version (1.0):
- ✅ OpenAI GPT-4 integration
- ✅ Jest test generation
- ✅ OpenAPI 3.x support
- ✅ Happy path, error, and edge case tests
Coming Soon:
- 🔄 Anthropic Claude support
- 🔄 Azure OpenAI support
- 🔄 GraphQL API support
- 🔄 Playwright test generation (UI testing)
- 🔄 Postman collection export
- 🔄 Test result reporting
🤝 Contributing
We welcome contributions! Whether you're:
- 🐛 Reporting bugs
- 💡 Suggesting features
- 📝 Improving documentation
- 🔧 Submitting pull requests
Check out our Contributing Guide to get started.
📄 License
MIT License - See LICENSE file for details
💬 Support & Community
- 🐛 Found a bug? Open an issue
- 💡 Have a feature idea? Start a discussion
- 📧 Need help? Check our docs or open an issue
⭐ Show Your Support
If QAAI helps you save time, please give it a star ⭐ on GitHub!
Made with ❤️ for developers who want to spend less time writing tests and more time building features.
