supabase-ai-rls-tests-generator

v1.3.1

Claude Sonnet fetches your RLS policies from Supabase, generates test cases (including edge cases), runs the tests, and saves both the tests and the results to dedicated files.

Supabase AI RLS Tests Generator

An AI-powered tool that automatically generates and runs comprehensive test cases for your Supabase Row Level Security (RLS) policies. Using Claude AI, it analyzes your policies and creates test scenarios to verify their effectiveness.

Please share your feedback by emailing renan[at]renanserrano.com.br

Features

  • 🤖 AI-powered test case generation
  • 🔒 Comprehensive RLS policy testing
  • 📊 Detailed test reports
  • 🚀 Easy setup and configuration
  • 💾 Automatic test case storage
  • 📝 Human-readable results

Installation

npm install supabase-ai-rls-tests-generator

Prerequisites

Before using this package, you need to:

  1. Have a Supabase project with RLS policies you want to test
  2. Install the required database function by running this SQL in your Supabase SQL editor:
CREATE OR REPLACE FUNCTION public.get_policies(target_table text)
RETURNS TABLE (
    table_name text,
    policy_name text,
    definition text,
    command text,
    permissive text
)
LANGUAGE SQL
SECURITY DEFINER
AS $$
    SELECT
        schemaname || '.' || tablename as table_name,
        policyname as policy_name,
        regexp_replace(regexp_replace(coalesce(qual, ''), '\n', ' ', 'g'), '\s+', ' ', 'g') as definition,
        cmd as command,
        permissive
    FROM pg_policies
    WHERE (schemaname || '.' || tablename) = target_table
    OR tablename = target_table;
$$;
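
To confirm the function is installed, you can call it from the SQL editor against one of your tables (the table name below is just a placeholder):

SELECT * FROM public.get_policies('public.your_table_name');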

Usage

Quick Start

  1. Run the setup wizard:
npx setup-variables
  2. Enter your credentials when prompted:
  • Supabase URL
  • Supabase service role key
  • Claude API key
  3. Run the tests:
npx test-rls

API Usage - Optional

You can also use the package programmatically:

import { SupabaseAITester } from 'supabase-ai-rls-tests-generator';

const tester = new SupabaseAITester({
  supabaseUrl: process.env.SUPABASE_RLS_URL,
  supabaseKey: process.env.SUPABASE_RLS_KEY,
  claudeKey: process.env.SUPABASE_RLS_CLAUDE_KEY,
  config: {
    verbose: true
  }
});

async function runTests() {
  try {
    const results = await tester.runRLSTests('your_table_name');
    console.log('Test Results:', results);
  } catch (error) {
    console.error('Test Error:', error);
  }
}

runTests();

Configuration

The package uses a separate .env.rls-test file to store its configuration, ensuring it doesn't interfere with your project's existing .env file. The setup wizard will create this file for you with the following variables:

SUPABASE_RLS_URL=your_supabase_url
SUPABASE_RLS_KEY=your_supabase_key
SUPABASE_RLS_CLAUDE_KEY=your_claude_key

This file is automatically added to .gitignore to prevent accidentally committing sensitive information.
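
If you use the programmatic API described above, one option is to load this file yourself before constructing the tester, for example with the dotenv package. This is a sketch that assumes you have dotenv installed in your project and that the file is not already loaded for you:

import dotenv from 'dotenv';

// Load the dedicated env file created by the setup wizard instead of the default .env
dotenv.config({ path: '.env.rls-test' });

// ...then construct SupabaseAITester as shown in the API Usage section above.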

Test Coverage Options

When running npx test-rls, you can choose from three coverage levels:

1. Basic Coverage (4 tests)

  • Basic SELECT and INSERT operations
  • Default: 4 tests
  • Perfect for quick validations

2. Full CRUD (8 tests)

  • Complete CRUD operations
  • Success and failure cases
  • Default: 8 tests
  • Ideal for comprehensive testing

3. Edge Cases (12+ tests)

  • Full CRUD operations
  • Security scenarios
  • Data validation cases
  • Default: 12 tests
  • Best for production security checks

After selecting your coverage level, you can specify any custom number of test cases.

Token Allocation

The package automatically adjusts token allocation based on the number of test cases you request.
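
The exact budget is internal to the package, but the idea is roughly the following kind of scaling. This is a hypothetical sketch for illustration only; the function name and constants are invented and are not the package's actual values:

// Hypothetical illustration: scale the model's max-token budget with the
// number of requested test cases so larger suites are not truncated.
// The constants below are invented for this example.
function estimateMaxTokens(testCount: number): number {
  const baseTokens = 1000;    // assumed fixed overhead for instructions
  const tokensPerTest = 300;  // assumed budget per generated test case
  return baseTokens + testCount * tokensPerTest;
}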

Test Results

Test results are stored in the `generated` folder:

  • generated/tests: Contains the generated test cases
  • generated/results: Contains the test execution results

Each test run creates timestamped files so you can track changes over time.

Example Test Output

📊 Test Summary

Results: 10 failed, 2 passed of 12 total
Time: 20.10s
Coverage: 16.7%

{
  "timestamp": "2024-02-25T14-30-45-789Z",
  "total": 10,
  "passed": 8,
  "failed": 2,
  "details": [
    {
      "test": {
        "description": "User can read their own posts",
        "method": "select",
        "path": "posts",
        "expectedStatus": 200
      },
      "success": true,
      "actual": 200,
      "expected": 200
    }
    // ... more test results
  ]
}

Customizing Test Generation

Test Generation

You can customize how tests are generated by modifying the prompt template in src/index.ts. Find the generateTestCases method and adjust the promptContent for each coverage level:

case 'basic':
  promptContent = `Generate ${config.testCount || 4} test cases focusing on:
  - Successful and failed SELECT operations
  - Successful and failed INSERT operations
  For each case, test both authorized and unauthorized scenarios.`;
  break;

Modify the template to:

  • Add specific test scenarios
  • Include custom validation rules
  • Focus on particular security aspects
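
For example, a 'basic' prompt extended with an extra scenario might look like this. The added line is purely illustrative; substitute whatever rules your policies actually enforce:

case 'basic':
  promptContent = `Generate ${config.testCount || 4} test cases focusing on:
  - Successful and failed SELECT operations
  - Successful and failed INSERT operations
  - SELECT attempts against rows the current user does not own (custom scenario)
  For each case, test both authorized and unauthorized scenarios.`;
  break;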

Result Format Customization

JSON Response Structure

You can customize the test result format by modifying both the expected JSON structure and TypeScript interfaces:

  1. Modify the JSON structure in the generateTestCases method in src/index.ts:
{
  "method": "select",
  "path": "users",
  "priority": "high" | "medium" | "low",  // Added field
  "description": "test description",
  "body": {
    "user_id": "uuid"
  },
  "expectedStatus": 200
}
  2. Update the interfaces in types.ts:
export interface TestCase {
  method: SupabaseMethod;
  path: string;
  priority: 'high' | 'medium' | 'low';  // Added field
  body?: any;
  queryParams?: Record<string, string>;
  headers?: Record<string, string>;
  expectedStatus: number;
  description: string;
}

export interface TestResult {
  test: TestCase;
  success: boolean;
  actual: number;
  expected: number;
  error?: string;
  priority: 'high' | 'medium' | 'low';  // Added field
}
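
Once the field is in place, you can check that it flows through end to end by filtering a run's results on it. A minimal sketch, assuming runRLSTests resolves to an array of the TestResult objects defined above and reusing the tester instance from the API Usage section:

tester.runRLSTests('your_table_name').then((results) => {
  // Surface only the failures the generator marked as high priority
  const highPriorityFailures = results.filter(
    (result) => !result.success && result.priority === 'high'
  );
  console.log(`High-priority failures: ${highPriorityFailures.length}`);
});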

Important Notes

  • Always keep TypeScript interfaces and JSON structure in sync
  • Update all related interfaces when adding new fields
  • Consider backward compatibility when making changes

Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

If you encounter any issues or have questions:

  1. Check the Issues page
  2. Open a new issue if needed
  3. Join the discussion in existing issues

Authors

Acknowledgments

  • Thanks to Supabase for their amazing platform
  • Thanks to Anthropic for Claude AI
  • Thanks to all contributors who help improve this project