supabase-ai-rls-tests-generator
v1.3.1
Claude Sonnet fetches your RLS policies from Supabase, generates test cases (including edge cases), runs them, and saves both the tests and the results to dedicated files.
Supabase AI RLS Tests Generator
An AI-powered tool that automatically generates and runs comprehensive test cases for your Supabase Row Level Security (RLS) policies. Using Claude AI, it analyzes your policies and creates test scenarios to verify their effectiveness.
Please share your feedback by emailing renan[at]renanserrano.com.br.
Features
- 🤖 AI-powered test case generation
- 🔒 Comprehensive RLS policy testing
- 📊 Detailed test reports
- 🚀 Easy setup and configuration
- 💾 Automatic test case storage
- 📝 Human-readable results
Installation
```
npm install supabase-ai-rls-tests-generator
```
Prerequisites
Before using this package, you need to:
- Have a Supabase project with RLS policies you want to test
- Install the required database function by running this SQL in your Supabase SQL editor:
```sql
CREATE OR REPLACE FUNCTION public.get_policies(target_table text)
RETURNS TABLE (
  table_name text,
  policy_name text,
  definition text,
  command text,
  permissive text
)
LANGUAGE SQL
SECURITY DEFINER
AS $$
  SELECT
    schemaname || '.' || tablename AS table_name,
    policyname AS policy_name,
    regexp_replace(regexp_replace(coalesce(qual, ''), '\n', ' ', 'g'), '\s+', ' ', 'g') AS definition,
    cmd AS command,
    permissive
  FROM pg_policies
  WHERE (schemaname || '.' || tablename) = target_table
     OR tablename = target_table;
$$;
```
Usage
Quick Start
- Run the setup wizard:
```
npx setup-variables
```
- Enter your credentials when prompted:
  - Supabase URL
  - Supabase service role key
  - Claude API key
- Run the tests:
```
npx test-rls
```
API Usage - Optional
You can also use the package programmatically:
```typescript
import { SupabaseAITester } from 'supabase-ai-rls-tests-generator';

const tester = new SupabaseAITester({
  supabaseUrl: process.env.SUPABASE_RLS_URL,
  supabaseKey: process.env.SUPABASE_RLS_KEY,
  claudeKey: process.env.SUPABASE_RLS_CLAUDE_KEY,
  config: {
    verbose: true
  }
});

async function runTests() {
  try {
    const results = await tester.runRLSTests('your_table_name');
    console.log('Test Results:', results);
  } catch (error) {
    console.error('Test Error:', error);
  }
}

runTests();
```
Configuration
The package uses a separate .env.rls-test file to store its configuration, ensuring it doesn't interfere with your project's existing .env file. The setup wizard will create this file for you with the following variables:
```
SUPABASE_RLS_URL=your_supabase_url
SUPABASE_RLS_KEY=your_supabase_key
SUPABASE_RLS_CLAUDE_KEY=your_claude_key
```
This file is automatically added to .gitignore to prevent accidentally committing sensitive information.
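If you want to read these variables outside the setup wizard, a minimal sketch of parsing the .env.rls-test file could look like the following (this assumes simple KEY=VALUE lines with no quoting or multi-line values, which is all the wizard writes):

```typescript
import * as fs from "node:fs";

// Parse a simple KEY=VALUE env file into an object.
// Assumes one assignment per line; blank lines and '#' comments are skipped.
function parseEnvFile(path: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of fs.readFileSync(path, "utf8").split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return vars;
}

// Example:
// const env = parseEnvFile(".env.rls-test");
// console.log(env.SUPABASE_RLS_URL);
```

In practice a library such as dotenv (with its `path` option) does the same job; this sketch only avoids adding a dependency.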
Test Coverage Options
When running npx test-rls, you can choose from three coverage levels:
1. Basic Coverage (4 tests)
- Basic SELECT and INSERT operations
- Default: 4 tests
- Perfect for quick validations
2. Full CRUD (8 tests)
- Complete CRUD operations
- Success and failure cases
- Default: 8 tests
- Ideal for comprehensive testing
3. Edge Cases (12+ tests)
- Full CRUD operations
- Security scenarios
- Data validation cases
- Default: 12 tests
- Best for production security checks
After selecting your coverage level, you can specify any custom number of test cases.
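The defaults above can be summarized in code. A sketch (the level names here are illustrative labels, not necessarily the CLI's exact prompt values):

```typescript
type Coverage = "basic" | "full" | "edge";

// Default test counts per coverage level, per the list above.
// A custom count, when provided, overrides the level's default.
function resolveTestCount(level: Coverage, custom?: number): number {
  const defaults: Record<Coverage, number> = { basic: 4, full: 8, edge: 12 };
  return custom ?? defaults[level];
}
```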
Token Allocation
The package automatically adjusts its token allocation for the Claude request based on the number of test cases, so larger test suites get a proportionally larger response budget.
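The exact allocation is internal to the package, but the idea can be sketched as a linear budget. All constants below are illustrative assumptions, not the package's actual values:

```typescript
// Illustrative only: a fixed base budget plus a per-test allowance,
// capped at an assumed model output limit.
function estimateMaxTokens(testCount: number): number {
  const BASE_TOKENS = 1000;    // prompt framing, JSON scaffolding
  const TOKENS_PER_TEST = 300; // rough cost of one generated test case
  const MODEL_CAP = 8192;      // assumed upper bound for the response
  return Math.min(BASE_TOKENS + TOKENS_PER_TEST * testCount, MODEL_CAP);
}
```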
Test Results
Test results are stored in the `generated` folder:
- generated/tests: contains the generated test cases
- generated/results: contains the test execution results
Each test run creates timestamped files so you can track changes over time.
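A filesystem-safe timestamp in the format used by the example below can be produced by replacing ':' and '.' in an ISO date string. This is a sketch of the naming idea; the package's exact file names may differ:

```typescript
// Turn an ISO timestamp into a filename-safe form, e.g.
// "2024-02-25T14:30:45.789Z" -> "2024-02-25T14-30-45-789Z"
function timestampForFilename(date: Date = new Date()): string {
  return date.toISOString().replace(/[:.]/g, "-");
}

// e.g. generated/results/<timestamp>.json (exact prefix may differ)
```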
Example Test Output
```
📊 Test Summary
Results: 10 failed, 2 passed of 12 total  Time: 20.10s  Coverage: 16.7%
```
```json
{
  "timestamp": "2024-02-25T14-30-45-789Z",
  "total": 12,
  "passed": 2,
  "failed": 10,
  "details": [
    {
      "test": {
        "description": "User can read their own posts",
        "method": "select",
        "path": "posts",
        "expectedStatus": 200
      },
      "success": true,
      "actual": 200,
      "expected": 200
    }
    // ... more test results
  ]
}
```
Customizing Test Generation
You can customize how tests are generated by modifying the prompt template in src/index.ts. Find the generateTestCases method and adjust the promptContent for each coverage level:
```typescript
case 'basic':
  promptContent = `Generate ${config.testCount || 4} test cases focusing on:
  - Successful and failed SELECT operations
  - Successful and failed INSERT operations
  For each case, test both authorized and unauthorized scenarios.`;
  break;
```
Modify the template to:
- Add specific test scenarios
- Include custom validation rules
- Focus on particular security aspects
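For example, a variation of the 'basic' template that adds a tenant-isolation scenario might look like this. The extra bullet is illustrative, and a local variable stands in for `config.testCount`:

```typescript
// Illustrative variation of the 'basic' prompt template with an
// added multi-tenant isolation scenario.
const testCount = 4; // stands in for config.testCount || 4
const promptContent = `Generate ${testCount} test cases focusing on:
- Successful and failed SELECT operations
- Successful and failed INSERT operations
- SELECT attempts against rows owned by a different tenant
For each case, test both authorized and unauthorized scenarios.`;
```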
Result Format Customization
JSON Response Structure
You can customize the test result format by modifying both the expected JSON structure and TypeScript interfaces:
- Modify the JSON structure in the generateTestCases method in src/index.ts:
```jsonc
{
  "method": "select",
  "path": "users",
  "priority": "high" | "medium" | "low", // Added field
  "description": "test description",
  "body": {
    "user_id": "uuid"
  },
  "expectedStatus": 200
}
```
- Update the interfaces in types.ts:
```typescript
export interface TestCase {
  method: SupabaseMethod;
  path: string;
  priority: 'high' | 'medium' | 'low'; // Added field
  body?: any;
  queryParams?: Record<string, string>;
  headers?: Record<string, string>;
  expectedStatus: number;
  description: string;
}

export interface TestResult {
  test: TestCase;
  success: boolean;
  actual: number;
  expected: number;
  error?: string;
  priority: 'high' | 'medium' | 'low'; // Added field
}
```
Important Notes
- Always keep TypeScript interfaces and JSON structure in sync
- Update all related interfaces when adding new fields
- Consider backward compatibility when making changes
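One way to keep the generated JSON and the TypeScript interfaces honest is a small runtime type guard applied to the AI's parsed output. A sketch, validating only the core fields shown above:

```typescript
interface TestCaseShape {
  method: string;
  path: string;
  expectedStatus: number;
  description: string;
}

// Narrow an unknown value (e.g. JSON parsed from the AI response)
// to the TestCase shape. Extend these checks whenever you add
// fields such as `priority`.
function isTestCase(value: unknown): value is TestCaseShape {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.method === "string" &&
    typeof v.path === "string" &&
    typeof v.expectedStatus === "number" &&
    typeof v.description === "string"
  );
}
```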
Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
If you encounter any issues or have questions:
- Check the Issues page
- Open a new issue if needed
- Join the discussion in existing issues
Authors
- Renan Serrano - renantrendt
