# @bhagat-surya-dev/dashchat-database-manager
AI-powered database schema analysis and management library with support for multiple database types.
## Installation

```bash
npm install @bhagat-surya-dev/dashchat-database-manager
```

## Peer Dependencies
Install the database driver for your specific database:
```bash
# For PostgreSQL
npm install pg @types/pg

# For MySQL/MariaDB
npm install mysql2

# For SQLite
npm install sqlite3

# For MongoDB
npm install mongodb
```

## Quick Start
```typescript
import DatabaseManager from '@bhagat-surya-dev/dashchat-database-manager';

// Initialize the manager
const dbManager = new DatabaseManager({
  cerebrasApiKey: 'your-cerebras-api-key',
  databaseUrl: 'postgresql://user:pass@localhost:5432/mydb' // Optional
});

// Test connection
const isConnected = await dbManager.testConnection();
console.log('Connected:', isConnected);

// Get schema information
const schema = await dbManager.getSchemaInfo();
console.log('Tables:', schema.tables.length);

// AI-powered schema analysis
const analyzedSchema = await dbManager.analyzeAndCacheSchema(
  'postgresql://user:pass@localhost:5432/mydb',
  'my-database-id'
);
```

## Constructor Options
### DbManagerOptions
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| cerebrasApiKey | string | Yes | Your Cerebras Cloud API key for AI analysis |
| databaseUrl | string | No | Default database connection string (can be provided later to methods) |
| supabaseConfig | SupabaseConnectionConfig | No | Secure Supabase authentication (alternative to databaseUrl) |
| model | string | No | AI model to use (defaults to "llama-3.3-70b") |
### Example

```typescript
// Traditional database URL approach
const dbManager = new DatabaseManager({
  cerebrasApiKey: process.env.CEREBRAS_API_KEY!,
  databaseUrl: process.env.DATABASE_URL // Optional default
});
```

```typescript
// Secure Supabase configuration approach (recommended)
const dbManager = new DatabaseManager({
  cerebrasApiKey: process.env.CEREBRAS_API_KEY!,
  supabaseConfig: {
    supabaseUrl: 'https://your-project.supabase.co',
    serviceKey: process.env.SUPABASE_SERVICE_KEY!, // Read-only service key
    schema: 'public' // Optional, defaults to 'public'
  }
});
```

## Methods
### `testConnection(databaseUrl?: string, supabaseConfig?: SupabaseConnectionConfig): Promise<boolean>`
Tests database connectivity.
Parameters:
- `databaseUrl` (optional): Database connection string. Uses the constructor URL if not provided.
- `supabaseConfig` (optional): Supabase configuration for secure authentication.
Returns: Promise resolving to `true` if the connection succeeds.

```typescript
// Using constructor URL
const viaDefault = await dbManager.testConnection();

// Using a specific URL
const viaUrl = await dbManager.testConnection('postgresql://user:pass@host:5432/db');

// Using Supabase config (secure)
const viaSupabase = await dbManager.testConnection(undefined, {
  supabaseUrl: 'https://your-project.supabase.co',
  serviceKey: 'your-service-key'
});
```

### `getSchemaInfo(databaseUrl?: string, supabaseConfig?: SupabaseConnectionConfig): Promise<FormattedSchema>`
Extracts database schema information.
Parameters:
- `databaseUrl` (optional): Database connection string. Uses the constructor URL if not provided.
- `supabaseConfig` (optional): Supabase configuration for secure authentication.
Returns: Promise resolving to a formatted schema object.

```typescript
// Using constructor URL
const schema = await dbManager.getSchemaInfo();
console.log(`Found ${schema.tables.length} tables`);

// Using Supabase config (secure)
const supabaseSchema = await dbManager.getSchemaInfo(undefined, {
  supabaseUrl: 'https://your-project.supabase.co',
  serviceKey: 'your-service-key',
  schema: 'public' // optional
});

supabaseSchema.tables.forEach(table => {
  console.log(`Table: ${table.name}`);
  console.log(`Columns: ${table.columns.map(c => c.name).join(', ')}`);
});
```

### `analyzeAndCacheSchema(databaseUrl: string, databaseId: string, forceAnalysis?: boolean): Promise<AnalyzedSchema | null>`
Performs AI-powered schema analysis with detailed descriptions. Now includes automatic chunking to handle large schemas without token overload.
Parameters:
- `databaseUrl` (required): Database connection string
- `databaseId` (required): Unique identifier for the database
- `forceAnalysis` (optional): Force a new analysis and skip the cache (default: `false`)
Returns: Promise resolving to analyzed schema with AI-generated descriptions.
🚀 Chunking Features:
- Automatic detection: Schemas > 8 tables or > 6000 tokens are automatically chunked
- Smart batching: Processes 8 tables per API call to stay within token limits
- Progress tracking: Shows chunk progress in console logs
- Graceful fallbacks: If a chunk fails, continues with remaining chunks
- Token estimation: Calculates approximate tokens before API calls
```typescript
const analyzed = await dbManager.analyzeAndCacheSchema(
  'postgresql://user:pass@localhost:5432/mydb',
  'production-db',
  false // Use cache if available
);

if (analyzed) {
  // Display the overall schema summary (automatically generated)
  console.log('Overall Summary:', analyzed.overallSummary);

  analyzed.tables.forEach(table => {
    console.log(`Table: ${table.table_name}`);
    console.log(`Description: ${table.description}`);
    table.columns.forEach(column => {
      console.log(`  ${column.column_name}: ${column.description}`);
    });
  });
}

// For large databases (50+ tables), you'll see chunking in action:
// 📦 Chunking 52 tables into batches of 8...
// 📊 Processing 7 chunks...
// ⏳ Processing chunk 1/7 (8 tables)...
// ✅ Chunk 1 completed: 8 tables analyzed
// ... (continues for all chunks)
```

### `getOverallSchemaSummary(analyzedSchema: AnalyzedSchema): Promise<string>`
Generates a concise, high-level summary of the entire database schema using AI.
Parameters:
- `analyzedSchema` (required): The analyzed schema object from `analyzeAndCacheSchema()`
Returns: Promise resolving to a one-paragraph summary describing the database's overall purpose, table count, and core tables.
```typescript
// First, get the analyzed schema
const analyzed = await dbManager.analyzeAndCacheSchema(
  'postgresql://user:pass@localhost:5432/ecommerce',
  'ecommerce-db'
);

// Generate a standalone summary (if you need it separately)
const summary = await dbManager.getOverallSchemaSummary(analyzed);
console.log('Database Summary:', summary);

// Note: the summary is also automatically included in the analyzed schema
// as analyzed.overallSummary when you call analyzeAndCacheSchema()
```

### `validateConnectionString(connectionString: string)`
Validates connection string format and determines database type.
Parameters:
- `connectionString` (required): The connection string to validate
Returns: Validation result object.
```typescript
const validation = dbManager.validateConnectionString('postgresql://user:pass@host:5432/db');
console.log('Valid:', validation.isValid);
console.log('Type:', validation.type);
console.log('Errors:', validation.errors);
```

### `getSupportedDatabaseTypes(): string[]`

Returns an array of supported database types.

```typescript
const supported = dbManager.getSupportedDatabaseTypes();
console.log('Supported:', supported); // ['postgres', 'postgresql', 'mysql', 'mariadb', 'mongodb', 'sqlite']
```

### `getDatabaseUrlType(databaseUrl?: string): string | null`
Automatically detects and returns the database type from a connection URL.
Parameters:
- `databaseUrl` (optional): Database connection string. Uses the constructor URL if not provided.
Returns: Database type string ('postgres', 'mysql', 'mongodb', 'sqlite', etc.) or null if unsupported.
```typescript
// Using constructor URL
const dbManager = new DatabaseManager({
  cerebrasApiKey: 'your-key',
  databaseUrl: 'postgresql://user:pass@localhost:5432/mydb'
});
const type = dbManager.getDatabaseUrlType();
console.log('Database type:', type); // 'postgres'

// Using a specific URL
const mysqlType = dbManager.getDatabaseUrlType('mysql://user:pass@localhost:3306/mydb');
console.log('Database type:', mysqlType); // 'mysql'

const mongoType = dbManager.getDatabaseUrlType('mongodb://localhost:27017/mydb');
console.log('Database type:', mongoType); // 'mongodb'

const sqliteType = dbManager.getDatabaseUrlType('./database.db');
console.log('Database type:', sqliteType); // 'sqlite'
```

## Supabase Secure Authentication
For enhanced security with Supabase databases, use the config-based authentication instead of database URLs:
### Why Use Supabase Config?
Security Benefits:
- ✅ No database credentials exposed
- ✅ Users control permissions via Supabase dashboard
- ✅ Easy to revoke/rotate service keys
- ✅ No need to share database passwords
- ✅ Granular access control through API keys
### Setup Steps

1. Get your Supabase URL: `https://your-project.supabase.co`
2. Create a service role key:
   - Go to Supabase Dashboard → Settings → API
   - Copy the `service_role` key (it starts with `eyJ...`), or create a custom key with read-only permissions
3. Use the config instead of a URL:
```typescript
// ❌ Old way (less secure)
const dbManager = new DatabaseManager({
  cerebrasApiKey: 'your-key',
  databaseUrl: 'postgresql://postgres:your-password@db.your-project.supabase.co:5432/postgres'
});
```

```typescript
// ✅ New way (secure)
const dbManager = new DatabaseManager({
  cerebrasApiKey: 'your-key',
  supabaseConfig: {
    supabaseUrl: 'https://your-project.supabase.co',
    serviceKey: process.env.SUPABASE_SERVICE_KEY!, // Much safer!
    schema: 'public' // optional
  }
});
```

## Supabase Methods
### `configureSupabase(config: SupabaseConnectionConfig): void`
Configure Supabase after initialization:
```typescript
const manager = new DatabaseManager({ cerebrasApiKey: 'your-key' });

manager.configureSupabase({
  supabaseUrl: 'https://your-project.supabase.co',
  serviceKey: 'your-service-key'
});
```

### `validateSupabaseConfig(config: SupabaseConnectionConfig)`
Validate Supabase configuration without connecting:
```typescript
const validation = manager.validateSupabaseConfig({
  supabaseUrl: 'https://your-project.supabase.co',
  serviceKey: 'your-service-key'
});
console.log('Valid:', validation.isValid);
console.log('Errors:', validation.errors);
```

### `isSupabaseConfigured(): boolean`
Check if Supabase is configured:
```typescript
if (manager.isSupabaseConfigured()) {
  console.log('✅ Supabase is ready to use');
  const schema = await manager.getSchemaInfo(); // Uses the configured Supabase
}
```

### Environment Variables for Supabase
```bash
CEREBRAS_API_KEY=your_cerebras_api_key_here
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...
```
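With those variables set, wiring them into the manager looks roughly like this (a minimal sketch; loading `.env` via the `dotenv` package is an assumption, any loader works):

```typescript
// Minimal wiring sketch: load .env (here via dotenv) and pass the
// variables into the manager. dotenv is not required by this library.
import 'dotenv/config';
import DatabaseManager from '@bhagat-surya-dev/dashchat-database-manager';

const manager = new DatabaseManager({
  cerebrasApiKey: process.env.CEREBRAS_API_KEY!,
  supabaseConfig: {
    supabaseUrl: process.env.SUPABASE_URL!,
    serviceKey: process.env.SUPABASE_SERVICE_KEY!,
  },
});
```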
## Connection Strings

### PostgreSQL

```
postgresql://username:password@hostname:port/database
postgres://username:password@hostname:port/database
```

### MySQL/MariaDB

```
mysql://username:password@hostname:port/database
mariadb://username:password@hostname:port/database
```

### SQLite

```
sqlite:///path/to/database.db
./database.db
/absolute/path/to/database.db
```

### MongoDB

```
mongodb://username:password@hostname:port/database
mongodb+srv://username:password@cluster.mongodb.net/database
```
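If you're unsure whether a string matches one of these formats, you can check it with `validateConnectionString()` (see Methods). A small sketch, assuming a `dbManager` constructed as in Quick Start:

```typescript
// Sanity-check a few placeholder URLs against the supported formats.
const candidates = [
  'postgresql://user:pass@localhost:5432/mydb',
  'mysql://user:pass@localhost:3306/mydb',
  'sqlite:///tmp/app.db',
  'mongodb://localhost:27017/mydb',
];

for (const url of candidates) {
  const result = dbManager.validateConnectionString(url);
  console.log(`${url} -> valid=${result.isValid}, type=${result.type}`);
}
```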
## Environment Variables

Create a `.env` file in your project root:

```bash
CEREBRAS_API_KEY=your_cerebras_api_key_here
DATABASE_URL=postgresql://user:pass@localhost:5432/mydb
```

## Error Handling
The library provides enhanced error messages for common issues:
```typescript
try {
  await dbManager.testConnection('invalid://connection/string');
} catch (error) {
  console.error('Connection failed:', error.message);
  // Enhanced error messages include:
  // - "Database host not found. Please verify the hostname..."
  // - "Authentication failed. Please verify your username and password."
  // - "Connection refused. Please check if the database server is running..."
}
```

## TypeScript Support
Full TypeScript support with exported interfaces:
```typescript
import DatabaseManager, { DbManagerOptions } from '@bhagat-surya-dev/dashchat-database-manager';

interface AnalyzedSchema {
  tables: AnalyzedTable[];
  overallSummary?: string; // AI-generated database summary
}

interface AnalyzedTable {
  table_name: string;
  description: string;
  columns: AnalyzedColumn[];
}

interface AnalyzedColumn {
  column_name: string;
  description: string;
  data_type: string;
  nullable: boolean;
}
```
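Because the results are plain typed objects, small helpers over them type-check end to end. For example, a sketch that assumes the `AnalyzedSchema` shape shown above:

```typescript
// List every nullable column as "table.column", using the interfaces above.
function listNullableColumns(schema: AnalyzedSchema): string[] {
  return schema.tables.flatMap(table =>
    table.columns
      .filter(column => column.nullable)
      .map(column => `${table.table_name}.${column.column_name}`)
  );
}
```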
## Smart Chunking System (NEW)

Prevents LLM token overload for large database schemas through automatic chunking and optimization.
### How Chunking Works
1. Token Estimation: Calculates approximate tokens before sending to the AI
2. Automatic Detection: Triggers chunking when:
   - The schema has > 8 tables, OR
   - Estimated tokens > 6000
3. Smart Batching: Splits tables into chunks of 8 tables each
4. Sequential Processing: Processes chunks one by one with small delays
5. Result Combination: Merges all chunk results into the final schema
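The exact implementation is internal to the library, but the strategy corresponds roughly to the sketch below; the `TableInfo` shape, the 4-characters-per-token estimate, and the helper names are illustrative assumptions:

```typescript
interface TableInfo {
  name: string;
  columns: { name: string; type: string }[];
}

const CHUNK_SIZE = 8;     // tables per batch (documented threshold)
const TOKEN_LIMIT = 6000; // estimated-token trigger (documented threshold)

// Rough token estimate: ~4 characters per token of serialized schema.
function estimateTokens(tables: TableInfo[]): number {
  return Math.ceil(JSON.stringify(tables).length / 4);
}

function needsChunking(tables: TableInfo[]): boolean {
  return tables.length > CHUNK_SIZE || estimateTokens(tables) > TOKEN_LIMIT;
}

// Split the table list into batches of CHUNK_SIZE for sequential analysis.
function chunkTables(tables: TableInfo[]): TableInfo[][] {
  const chunks: TableInfo[][] = [];
  for (let i = 0; i < tables.length; i += CHUNK_SIZE) {
    chunks.push(tables.slice(i, i + CHUNK_SIZE));
  }
  return chunks;
}
```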
### Chunking in Action
```typescript
// For a large e-commerce database with 52 tables:
const analyzed = await dbManager.analyzeAndCacheSchema(
  'postgresql://user:pass@localhost:5432/large_ecommerce',
  'large-ecommerce-db'
);

// Console output shows the chunking process:
// 📊 Estimated tokens for full schema: 15,240
// 📦 Schema is large, using chunking strategy
// 📦 Chunking 52 tables into batches of 8...
// 📊 Processing 7 chunks...
// ⏳ Processing chunk 1/7 (8 tables)...
// ✅ Chunk 1 completed: 8 tables analyzed
// ⏳ Processing chunk 2/7 (8 tables)...
// ✅ Chunk 2 completed: 8 tables analyzed
// ... continues for all 7 chunks
// 🎉 Chunked analysis completed: 52 total tables analyzed
```

### Benefits
- No more token limit errors 📈
- Handles any schema size (tested with 100+ tables)
- Maintains analysis quality per chunk
- Graceful error recovery if individual chunks fail
- Performance optimized with 500ms delays between chunks
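A rough sketch of the sequential, throttled loop behind these benefits (the real per-chunk AI call is internal to the library, so `analyzeChunk` is passed in as a hypothetical callback):

```typescript
// Process chunks one at a time, surviving individual failures and pausing
// 500ms between API calls.
async function processChunks<T, R>(
  chunks: T[][],
  analyzeChunk: (chunk: T[]) => Promise<R[]>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < chunks.length; i++) {
    try {
      results.push(...(await analyzeChunk(chunks[i])));
    } catch {
      console.warn(`Chunk ${i + 1} failed; continuing with remaining chunks`);
    }
    await new Promise(resolve => setTimeout(resolve, 500)); // throttle
  }
  return results;
}
```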
## Schema Summary Feature
The library now automatically generates an overall database summary when performing AI analysis:
### Automatic Summary Generation
When you call `analyzeAndCacheSchema()`, the system automatically:

1. Analyzes each table and column with detailed AI-generated descriptions
2. Generates an overall summary describing the database's purpose and key tables
3. Includes the summary in the returned schema object as `overallSummary`
### Example Usage
```typescript
import * as fs from 'fs/promises';

const analyzed = await dbManager.analyzeAndCacheSchema(
  'mongodb://admin:password123@localhost:27017/ecommerce_startup',
  'ecommerce-startup-db'
);

// The summary is automatically available
console.log('📋 Database Summary:');
console.log(analyzed.overallSummary);
// Output: "This database supports a comprehensive e-commerce platform with 8 tables..."

// Save the full analysis to a file
await fs.writeFile('schema_analysis.json', JSON.stringify(analyzed, null, 2));

// Save a summary report
const summaryReport = `Database Schema Summary Report
Generated: ${new Date().toISOString()}
Total Tables: ${analyzed.tables.length}

SUMMARY:
${analyzed.overallSummary}

TABLES:
${analyzed.tables.map(table =>
  `- ${table.table_name}: ${table.description}`
).join('\n')}`;

await fs.writeFile('schema_summary.txt', summaryReport);
```

### Summary Content
The AI-generated summary includes:
- Database purpose (e.g., "e-commerce platform", "CRM system")
- Total number of tables
- Core/important tables identified
- Overall architecture description
## Recent Updates
### Supabase Schema Extraction Fix (v1.0.11)
Issue Resolved: Schema extraction was failing with a "public.information_schema.tables does not exist" error.
#### 🔧 Core Fixes Implemented

**Fixed Connection Testing**

- Before: Used `supabase.rpc('version')`, which doesn't exist
- After: Uses `supabase.from('pg_tables')` for reliable connection validation
- Result: ✅ Connection testing now works consistently
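In supabase-js terms, the new check is roughly equivalent to this sketch (not the library's literal code):

```typescript
// Approximate shape of the new connection check, via @supabase/supabase-js.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

// An error-free query against pg_tables implies a working connection.
const { error } = await supabase.from('pg_tables').select('tablename').limit(1);
console.log('Connected:', error === null);
```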
**Implemented PostgREST OpenAPI Discovery**

- Before: Tried `information_schema.tables` (not available via PostgREST)
- After: Uses the `GET /rest/v1/` OpenAPI endpoint to discover all accessible tables
- Result: ✅ Now finds all tables dynamically (no hardcoding needed)
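PostgREST serves an OpenAPI document at the API root, so discovery can be sketched with a plain `fetch` (assuming standard PostgREST behavior):

```typescript
// Fetch the PostgREST OpenAPI document; its `paths` keys map to tables.
const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/`, {
  headers: {
    apikey: process.env.SUPABASE_SERVICE_KEY!,
    Authorization: `Bearer ${process.env.SUPABASE_SERVICE_KEY}`,
  },
});
const openApi = await res.json();

// '/users' -> 'users'; the root path '/' is not a table.
const tables = Object.keys(openApi.paths ?? {})
  .filter(path => path !== '/')
  .map(path => path.slice(1));
console.log('Discovered tables:', tables);
```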
**Enhanced Schema Extraction with a Multi-Tier Approach**

- Tier 1: Try an auto-created RPC function for optimal performance
- Tier 2: Fall back to PostgREST OpenAPI discovery
- Tier 3: Extract column info from sample data
- Result: ✅ Robust extraction that works across permission scenarios
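Conceptually, the tiers chain as a simple fallback loop; all names below are hypothetical, the real implementation is internal to the library:

```typescript
type SchemaResult = { tables: { name: string }[] };
type Extractor = () => Promise<SchemaResult>;

// Try each extraction tier in order; the first one that succeeds wins.
async function extractWithFallback(tiers: Extractor[]): Promise<SchemaResult> {
  let lastError: unknown;
  for (const tier of tiers) {
    try {
      return await tier();
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next tier
    }
  }
  throw lastError;
}
```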
#### 📊 Test Results

- ✅ Connection Testing: Works consistently
- ✅ Table Discovery: 14 tables found via OpenAPI
- ✅ AI Analysis: Full descriptions generated
- ✅ Schema Extraction: Complete end-to-end functionality
Sample AI Output:

> "This database appears to be designed for a multi-tenant SaaS platform with user management and organization configuration capabilities..."

Tables with AI Descriptions:

- `users`: "Stores information about registered users..."
- `feedback`: "Stores user-submitted feedback..."
- `admin_databases`: "Stores information about administrative databases..."

Before: Schema extraction failed completely.
After: Full schema discovery + AI analysis working end-to-end.
## Features
- ✅ Multi-database support (PostgreSQL, MySQL, SQLite, MongoDB)
- ✅ Secure Supabase authentication with URL + Service Key (no database credentials needed)
- ✅ Fixed Supabase schema extraction with PostgREST OpenAPI discovery
- ✅ AI-powered schema analysis with Cerebras
- ✅ Automatic schema summary generation
- ✅ Smart chunking system to prevent LLM token overload for large schemas
- ✅ Token estimation and automatic optimization
- ✅ Batch processing for databases with 50+ tables
- ✅ Comprehensive error handling
- ✅ Connection validation
- ✅ TypeScript support
- ✅ Performance monitoring
- ✅ Detailed logging
- ✅ Schema optimization for AI accuracy
- ✅ Isolated Supabase manager for easier debugging
## Requirements
- Node.js >= 16.0.0
- Valid Cerebras API key for AI analysis
- Appropriate database driver for your database type
## License
MIT
## Support
For issues and feature requests, please visit the GitHub repository.
