DBAutoDoc - AI-Powered Database Documentation Generator
Automatically generate comprehensive documentation for SQL Server, MySQL, and PostgreSQL databases using AI. DBAutoDoc analyzes your database structure, uses Large Language Models to understand the purpose of tables and columns, and saves descriptions as database metadata (extended properties for SQL Server, comments for MySQL/PostgreSQL).
Features
Core Capabilities
- 🤖 AI-Powered Analysis - Uses OpenAI, Anthropic, Google, or Groq to generate intelligent descriptions
- 🔄 Iterative Refinement - Multi-pass analysis with backpropagation for accuracy
- 📊 Topological Processing - Analyzes tables in dependency order for better context
- 📈 Data-Driven - Leverages cardinality, statistics, and sample data for insights
- 🎯 Convergence Detection - Automatically knows when analysis is complete
- 💾 State Tracking - Full audit trail of all iterations and reasoning
- 🔌 Standalone - Works with ANY database, no MemberJunction required
Multi-Database Support
- SQL Server - Full support with extended properties
- PostgreSQL - Complete implementation with COMMENT syntax
- MySQL - Full support with column/table comments
- Unified Interface - Single configuration approach across all databases
Advanced Features
- 🔍 Relationship Discovery - Automatically detect missing primary and foreign keys using statistical analysis and LLM validation
- 🎯 Sample Query Generation - Generate reference SQL queries for AI agents with alignment tracking
- 🛡️ Granular Guardrails - Multi-level resource controls (run, phase, iteration limits)
- ⏸️ Resume Capability - Pause and resume analysis from checkpoint state files
- 📦 Programmatic API - Use as a library in your own applications
- 🔧 Extensible - Custom database drivers and analysis plugins
Output Formats
- SQL Scripts - Database-specific metadata scripts (extended properties, comments)
- Markdown Documentation - Human-readable docs with ERD diagrams
- HTML Documentation - Interactive, searchable documentation with embedded CSS/JS
- CSV Exports - Spreadsheet-ready table and column data
- Mermaid Diagrams - Standalone ERD files (.mmd and .html)
- Analysis Reports - Detailed metrics and quality assessments
Installation
Global Installation (Recommended for DBAs)
npm install -g @memberjunction/db-auto-doc
Within MemberJunction Project
npm install @memberjunction/db-auto-doc
As a Library Dependency
npm install @memberjunction/db-auto-doc --save
Quick Start
1. Initialize
db-auto-doc init
This interactive wizard will:
- Configure database connection
- Set up AI provider (OpenAI, Anthropic, Google, or Groq)
- Configure guardrails and resource limits
- Optionally add seed context for better analysis
- Create config.json
2. Analyze
db-auto-doc analyze
This will:
- Introspect your database structure
- Analyze data (cardinality, statistics, patterns)
- Optionally discover missing primary and foreign keys
- Build dependency graph
- Run iterative AI analysis with backpropagation
- Perform sanity checks
- Save state to db-doc-state.json
3. Generate Sample Queries (Optional)
Generate reference SQL queries for AI agent training:
# During analysis (if enabled in config)
db-auto-doc analyze # Automatically generates queries
# Or generate separately from existing state
db-auto-doc generate-queries --from-state ./output/run-1/state.json
# With custom settings
db-auto-doc generate-queries --from-state ./output/run-1/state.json \
--queries-per-table 10 \
--max-execution-time 60000 \
--output-dir ./queries
This generates:
- sample-queries.json: Full query specifications with SQL, metadata, and alignment info
- sample-queries-summary.json: Execution statistics, token usage, and cost breakdown
Configuration Options:
{
"analysis": {
"sampleQueryGeneration": {
"enabled": true, // Enable sample query generation
"queriesPerTable": 5, // Number of queries per table
"maxTables": 10, // Max tables to process (0 = all tables)
"tokenBudget": 100000, // Token limit (0 = unlimited)
"maxExecutionTime": 30000, // Query validation timeout (ms)
"includeMultiQueryPatterns": true, // Generate related query patterns
"validateAlignment": true, // Validate alignment between queries
"maxRowsInSample": 10, // Sample result rows to capture
"enableQueryFix": true, // Auto-fix failed queries (default: true)
"maxFixAttempts": 3, // Max fix attempts per query (default: 3)
"enableQueryRefinement": true, // LLM-based result analysis (default: false)
"maxRefinementAttempts": 1 // Max refinement iterations (default: 1)
}
}
}
Query Fix & Refinement:
When a generated query fails validation, DBAutoDoc can automatically attempt to fix it:
- Query Fix: Passes the SQL, error message, and previous attempts to the LLM for correction
- Query Refinement: Analyzes sample results and asks LLM if the query matches the business purpose
The processing flow is: Generate → Validate/Fix → Refine → Validate/Fix → Done
This iterative approach significantly improves query quality, with most queries succeeding after 1-2 fix attempts.
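For intuition, here is a minimal TypeScript sketch of that loop. The step functions (generate, validate, fix, refine) are injected stand-ins for DBAutoDoc's internals, not its actual API:

```typescript
interface Query { sql: string; purpose: string }
interface Validation { ok: boolean; error?: string; sampleRows?: unknown[] }

// Injected step functions; the real implementations live inside DBAutoDoc.
interface Steps {
  generate(purpose: string): Promise<Query>;
  validate(q: Query): Promise<Validation>;
  fix(q: Query, error: string, priorAttempts: string[]): Promise<Query>;
  refine(q: Query, rows: unknown[]): Promise<Query | null>; // null = purpose already matched
}

async function validateWithFix(steps: Steps, query: Query, maxAttempts: number) {
  const priorAttempts: string[] = [];
  for (let i = 0; i <= maxAttempts; i++) {
    const v = await steps.validate(query);
    if (v.ok) return { query, rows: v.sampleRows ?? [] };
    if (i === maxAttempts) break;
    priorAttempts.push(query.sql);                   // give the LLM the failed SQL
    query = await steps.fix(query, v.error ?? 'unknown error', priorAttempts);
  }
  return null;                                       // fix budget spent, query dropped
}

// Generate → Validate/Fix → Refine → Validate/Fix → Done
async function produceQuery(steps: Steps, purpose: string,
                            maxFixAttempts = 3, maxRefinements = 1) {
  let current = await validateWithFix(steps, await steps.generate(purpose), maxFixAttempts);
  if (!current) return null;
  for (let r = 0; r < maxRefinements; r++) {
    const refined = await steps.refine(current.query, current.rows);
    if (!refined) break;                             // LLM says results match the purpose
    const revalidated = await validateWithFix(steps, refined, maxFixAttempts);
    if (!revalidated) break;                         // keep the last working version
    current = revalidated;
  }
  return current.query;
}
```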
Key Configuration Settings:
maxTables: Controls table selection
- 10 (default) - Generate queries for the top 10 most important tables
- 0 - Generate queries for all tables with data
- Custom value - Generate queries for the top N tables
tokenBudget: Controls LLM token usage and cost
- 100000 (default) - Limit to 100K tokens (~$0.50-1.00 with GPT-4o)
- 0 - Unlimited token budget (useful with maxTables: 0)
- Custom value - Set a specific token limit for cost control
Example Configurations:
Cost-conscious (default):
{
"maxTables": 10,
"tokenBudget": 100000
}
Medium coverage (~25 tables):
{
"maxTables": 25,
"tokenBudget": 500000
}
Complete coverage (all tables):
{
"maxTables": 0,
"tokenBudget": 0
}
Model Recommendations:
- ✅ GPT-4o - Best balance of speed, cost, and quality (~$6-10 for 50 tables)
- ✅ Claude 3.5 Sonnet - High quality, good reasoning about alignment
- ⚠️ GPT-5 - Very slow (reasoning model), doesn't support JSON format, expensive
- ⚠️ Groq - Fast and cheap but may struggle with complex alignment
4. Export
db-auto-doc export --sql --markdown --html --csv --mermaid
This generates:
- SQL Script: Database-specific metadata statements
- Markdown Documentation: Human-readable docs with ERD links
- HTML Documentation: Interactive searchable documentation
- CSV Files: tables.csv and columns.csv for spreadsheet analysis
- Mermaid Diagrams: erd.mmd and erd.html for visualization
Optionally apply directly to database:
db-auto-doc export --sql --apply
5. Export Sample Queries to Metadata (Optional)
Transform generated sample queries into MemberJunction metadata format for syncing to the database:
# Basic export
db-auto-doc export-sample-queries \
--input ./output/sample-queries.json \
--output ./metadata/queries/.queries.json
# Export with separate SQL files (uses @file: references)
db-auto-doc export-sample-queries \
--input ./output/sample-queries.json \
--output ./metadata/queries/.queries.json \
--separate-sql-files
# Set category and filter by quality
db-auto-doc export-sample-queries \
--input ./output/sample-queries.json \
--output ./metadata/queries/.queries.json \
--category "Database Documentation" \
--status Approved \
--min-confidence 0.8 \
--validated-only
Key Flags:
- --input, -i: Path to sample-queries.json from generate-queries
- --output, -o: Output path for the .queries.json metadata file
- --separate-sql-files: Write SQL to separate files with @file: references
- --sql-dir: Directory for SQL files (default: "SQL")
- --category: Query category for @lookup:Query Categories.Name=...
- --status: Status to assign (Approved/Pending/Rejected/Expired)
- --min-confidence: Minimum confidence threshold (0-1)
- --validated-only: Only export successfully validated queries
- --append: Append to an existing metadata file
After Export:
- Review the generated metadata file
- Ensure the Query Category exists in the database
- Run:
npx mj-sync push ./metadata/queries/
This integrates DBAutoDoc-generated queries with MemberJunction's metadata system for use by AI agents like Skip.
6. Check Status
db-auto-doc status
Shows:
- Analysis progress and phase completion
- Convergence status
- Low-confidence tables and columns
- Token usage, cost, and duration
- Guardrail status and warnings
7. Resume Analysis
db-auto-doc analyze --resume ./db-doc-state.json
Resume a previous analysis from a checkpoint state file. This is useful for:
- Continuing after hitting guardrail limits
- Recovering from interruptions
- Incremental database updates
How It Works
Topological Analysis
DBAutoDoc processes tables in dependency order:
Level 0: Users, Products, Categories (no dependencies)
↓
Level 1: Orders (depends on Users), ProductCategories (Products + Categories)
↓
Level 2: OrderItems (depends on Orders + Products)
↓
Level 3: Shipments (depends on OrderItems)
Processing in this order allows child tables to benefit from parent table context.
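A sketch of how such leveling can be computed from foreign-key edges. This is illustrative; the package exports a TopologicalSorter that presumably plays this role internally:

```typescript
// Kahn-style leveling over foreign-key edges: each level holds tables whose
// parents have all been processed, matching Level 0..N in the diagram above.
function dependencyLevels(
  tables: string[],
  foreignKeys: Array<{ child: string; parent: string }>
): string[][] {
  const parents = new Map<string, Set<string>>();
  for (const t of tables) parents.set(t, new Set());
  for (const fk of foreignKeys) {
    // Skip self-references and edges to excluded tables.
    if (fk.child !== fk.parent && parents.has(fk.parent)) {
      parents.get(fk.child)?.add(fk.parent);
    }
  }
  const levels: string[][] = [];
  const done = new Set<string>();
  while (done.size < tables.length) {
    const ready = tables.filter(
      t => !done.has(t) && [...parents.get(t)!].every(p => done.has(p)));
    if (ready.length === 0) break; // remaining tables form a cycle; handled separately
    levels.push(ready);
    for (const t of ready) done.add(t);
  }
  return levels;
}

// dependencyLevels(['Users', 'Orders', 'OrderItems'],
//   [{ child: 'Orders', parent: 'Users' }, { child: 'OrderItems', parent: 'Orders' }])
// → [['Users'], ['Orders'], ['OrderItems']]
```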
Relationship Discovery
For legacy databases missing primary/foreign key constraints, DBAutoDoc can:
- Detect Primary Keys using statistical analysis (uniqueness, nullability, cardinality)
- Find Foreign Keys using value overlap analysis and naming patterns
- LLM Validation to verify discovered relationships make business sense
- Backpropagation to refine parent table analysis based on child relationships
Triggered automatically when:
- Tables lack primary key constraints
- Insufficient foreign key relationships detected (below threshold)
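As a rough illustration of the statistical side, a primary-key score might combine uniqueness, nullability, and name patterns as below. The thresholds mirror the primaryKeyMinimum and patterns settings in the configuration section; the scoring formula itself is illustrative, not the package's internals:

```typescript
interface ColumnStats { rowCount: number; distinctCount: number; nullCount: number }

// A column qualifies as a PK candidate when every sampled value is unique and
// non-null; a matching name pattern raises confidence. Candidates above
// primaryKeyMinimum are then passed to the LLM for a business-sense check.
function pkConfidence(name: string, s: ColumnStats, namePatterns: RegExp[]): number {
  if (s.rowCount === 0 || s.nullCount > 0) return 0;   // NULLs disqualify
  if (s.distinctCount / s.rowCount < 1) return 0;      // duplicates disqualify
  const nameBonus = namePatterns.some(p => p.test(name)) ? 0.2 : 0;
  return Math.min(1, 0.7 + nameBonus);                 // base ≥ primaryKeyMinimum (0.7)
}

// pkConfidence('cst_id',
//   { rowCount: 1000, distinctCount: 1000, nullCount: 0 },
//   [/^id$/i, /_id$/i])  // → 0.9
```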
Sample Query Generation
DBAutoDoc can generate reference SQL queries for AI agents, solving the query alignment problem where multi-query patterns (summary + detail) have inconsistent filtering logic:
The Problem:
-- Summary query
SELECT COUNT(*) FROM Registrations -- All registrations
-- Detail query
SELECT * FROM Registrations WHERE Status='Attended' -- Only attended
-- Result: Numbers don't match! Bad UX.
The Solution: DBAutoDoc generates "gold standard" reference queries with:
- Explicit Filtering Rules - Documents filter logic for consistency
- Alignment Tracking - Links related queries via relatedQueryIds
- Query Patterns - Summary+Detail, Multi-Entity Drilldown, Time Series, etc.
- Validation - Executes queries and validates results
- Few-Shot Training - Use as examples for AI agent prompting
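To make alignment concrete, here is a hypothetical pair of aligned queries. The field names are illustrative, but relatedQueryIds is the documented linking mechanism; because both queries document and share the same filter, the summary count and the detail rows agree:

```typescript
const alignedPair = [
  {
    id: 'reg-summary',
    purpose: 'Count attended registrations',
    sql: "SELECT COUNT(*) AS Attended FROM Registrations WHERE Status = 'Attended'",
    filteringRules: ["Status = 'Attended'"],   // explicit, documented filter
    relatedQueryIds: ['reg-detail'],
  },
  {
    id: 'reg-detail',
    purpose: 'List attended registrations',
    sql: "SELECT * FROM Registrations WHERE Status = 'Attended'",
    filteringRules: ["Status = 'Attended'"],   // same filter → numbers match
    relatedQueryIds: ['reg-summary'],
  },
];
```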
Two-Prompt Architecture:
- Planning Phase - AI designs what queries to create (lightweight, ~4K tokens)
- Generation Phase - AI generates SQL for each query individually (~3K tokens each)
This approach prevents JSON truncation issues while maintaining alignment context between related queries.
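A sketch of that flow with an injected chat function; the prompt wording and types are illustrative:

```typescript
type Chat = (prompt: string) => Promise<string>;

interface QueryPlan {
  queries: Array<{ id: string; purpose: string; pattern: string }>;
}

// Phase 1 plans every query for a table in one small call; phase 2 generates
// each query's SQL in its own call, so no single response is large enough to
// risk JSON truncation, while the shared plan preserves alignment context.
async function generateForTable(chat: Chat, tableContext: string) {
  const plan: QueryPlan = JSON.parse(
    await chat(`Plan sample queries for this table as JSON:\n${tableContext}`)); // ~4K tokens
  const generated = [];
  for (const spec of plan.queries) {
    const sql = await chat(
      `Write SQL for "${spec.purpose}" (pattern: ${spec.pattern}).\n` +
      `Related planned queries: ${JSON.stringify(plan.queries)}`);               // ~3K tokens
    generated.push({ ...spec, sql });
  }
  return generated;
}
```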
Use Cases:
- Training AI agents like Skip to generate consistent multi-query patterns
- Creating reference examples for few-shot prompting
- Documenting common query patterns for your database
- Validating that related queries use consistent filtering logic
Backpropagation
After analyzing child tables, DBAutoDoc can detect insights about parent tables and trigger re-analysis:
Level 0: "Persons" → Initially thinks: "General contact information"
↓
Level 1: "Students" table reveals Persons.Type has values: Student, Teacher, Staff
↓
BACKPROPAGATE to Level 0: "Persons" → Revise to: "School personnel with role-based typing"
Convergence
Analysis stops when:
- No changes in last N iterations (stability window)
- All tables meet confidence threshold
- Max iterations reached
- Guardrail limits exceeded (tokens, cost, duration)
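Combined, the stop check amounts to something like the following sketch; the names mirror the convergence and guardrails settings shown in the configuration section:

```typescript
interface ConvergenceState {
  iteration: number;
  iterationsSinceLastChange: number;
  minConfidence: number;          // lowest confidence across all tables and columns
  guardrailExceeded: boolean;
}

function shouldStop(
  s: ConvergenceState,
  cfg = { maxIterations: 50, stabilityWindow: 2, confidenceThreshold: 0.85 }
): boolean {
  return (
    s.guardrailExceeded ||                                  // budget hit: stop immediately
    s.iterationsSinceLastChange >= cfg.stabilityWindow ||   // stable: nothing changed lately
    s.minConfidence >= cfg.confidenceThreshold ||           // everything confident enough
    s.iteration >= cfg.maxIterations                        // hard cap
  );
}
```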
Granular Guardrails
Multi-level resource controls ensure analysis stays within bounds:
Run-Level Limits:
- maxTokensPerRun: Total token budget for the entire analysis
- maxDurationSeconds: Maximum wall-clock time
- maxCostDollars: Maximum AI cost
Phase-Level Limits:
- maxTokensPerPhase.discovery: Budget for relationship discovery
- maxTokensPerPhase.analysis: Budget for description generation
- maxTokensPerPhase.sanityChecks: Budget for validation
Iteration-Level Limits:
- maxTokensPerIteration: Per-iteration token cap
- maxIterationDurationSeconds: Per-iteration time limit
Warning Thresholds:
- Configurable percentage-based warnings (default 80-85%)
- Early notification before hitting hard limits
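Conceptually, each level is a budget checked before every LLM call, with a warning fired once past the configured percentage. A sketch:

```typescript
interface Budget { limit: number; used: number; warnAtPct: number }

// 'warn' fires past the configured percentage; 'exceeded' at the hard limit.
function checkBudget(b: Budget): 'ok' | 'warn' | 'exceeded' {
  if (b.limit <= 0) return 'ok';                             // 0 = unlimited
  if (b.used >= b.limit) return 'exceeded';
  return b.used >= (b.warnAtPct / 100) * b.limit ? 'warn' : 'ok';
}

// Run, phase, and iteration budgets are all checked; the strictest result wins.
function checkAll(run: Budget, phase: Budget, iteration: Budget) {
  const results = [run, phase, iteration].map(checkBudget);
  if (results.includes('exceeded')) return 'exceeded';
  return results.includes('warn') ? 'warn' : 'ok';
}
```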
Data Analysis
For each column, DBAutoDoc collects:
- Cardinality: Distinct value counts
- Statistics: Min, max, average, standard deviation
- Patterns: Common prefixes, format detection
- Value Distribution: Actual enum values if low cardinality
- Sample Data: Stratified sampling across value ranges
This rich context enables AI to make accurate inferences.
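The collection step reduces to a few aggregate queries per column. Below is a simplified sketch for a single column, using illustrative SQL over an injected driver call; the real introspector is database-specific and samples rather than scanning full tables:

```typescript
interface ColumnProfile {
  distinctCount: number;
  min: unknown; max: unknown; avg: unknown;
  enumValues?: string[];                      // populated only for low-cardinality columns
}

async function profileColumn(
  query: (sql: string) => Promise<any[]>,     // injected driver call (assumed signature)
  table: string, column: string, cardinalityThreshold = 20
): Promise<ColumnProfile> {
  const [agg] = await query(
    `SELECT COUNT(DISTINCT ${column}) AS dc, MIN(${column}) AS mn,
            MAX(${column}) AS mx, AVG(${column}) AS av
     FROM ${table}`);
  const profile: ColumnProfile = {
    distinctCount: agg.dc, min: agg.mn, max: agg.mx, avg: agg.av,
  };
  // Low cardinality: capture the actual values so the AI can treat them as an enum.
  if (agg.dc <= cardinalityThreshold) {
    const rows = await query(`SELECT DISTINCT ${column} AS v FROM ${table}`);
    profile.enumValues = rows.map(r => String(r.v));
  }
  return profile;
}
```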
Configuration
SQL Server Configuration
{
"version": "1.0.0",
"database": {
"provider": "sqlserver",
"host": "localhost",
"database": "MyDatabase",
"user": "sa",
"password": "YourPassword",
"encrypt": true,
"trustServerCertificate": false
},
"ai": {
"provider": "openai",
"model": "gpt-4-turbo-preview",
"apiKey": "sk-...",
"temperature": 0.1,
"maxTokens": 8000,
"effortLevel": 50
},
"analysis": {
"cardinalityThreshold": 20,
"sampleSize": 10,
"includeStatistics": true,
"includePatternAnalysis": true,
"convergence": {
"maxIterations": 50,
"stabilityWindow": 2,
"confidenceThreshold": 0.85
},
"backpropagation": {
"enabled": true,
"maxDepth": 3
},
"sanityChecks": {
"dependencyLevel": true,
"schemaLevel": true,
"crossSchema": true
},
"sampleQueryGeneration": {
"enabled": true,
"queriesPerTable": 5,
"maxExecutionTime": 30000,
"includeMultiQueryPatterns": true,
"validateAlignment": true,
"tokenBudget": 100000,
"maxRowsInSample": 10,
"enableQueryFix": true,
"maxFixAttempts": 3,
"enableQueryRefinement": true,
"maxRefinementAttempts": 1
},
"guardrails": {
"enabled": true,
"stopOnExceeded": true,
"maxTokensPerRun": 250000,
"maxDurationSeconds": 3600,
"maxCostDollars": 50,
"maxTokensPerPhase": {
"discovery": 100000,
"analysis": 150000,
"sanityChecks": 50000
},
"maxTokensPerIteration": 50000,
"maxIterationDurationSeconds": 600,
"warnThresholds": {
"tokenPercentage": 80,
"durationPercentage": 80,
"costPercentage": 80,
"iterationTokenPercentage": 85,
"phaseTokenPercentage": 85
}
},
"relationshipDiscovery": {
"enabled": true,
"triggers": {
"runOnMissingPKs": true,
"runOnInsufficientFKs": true,
"fkDeficitThreshold": 0.4
},
"tokenBudget": {
"ratioOfTotal": 0.4
},
"confidence": {
"primaryKeyMinimum": 0.7,
"foreignKeyMinimum": 0.6,
"llmValidationThreshold": 0.8
},
"sampling": {
"maxRowsPerTable": 1000,
"valueOverlapSampleSize": 100,
"statisticalSignificance": 100,
"compositeKeyMaxColumns": 3
},
"patterns": {
"primaryKeyNames": ["^id$", ".*_id$", "^pk_.*", ".*_key$"],
"foreignKeyNames": [".*_id$", ".*_fk$", "^fk_.*"]
},
"llmValidation": {
"enabled": true,
"batchSize": 10
},
"backpropagation": {
"enabled": true,
"maxIterations": 5
}
}
},
"output": {
"stateFile": "./db-doc-state.json",
"outputDir": "./output",
"sqlFile": "./output/add-descriptions.sql",
"markdownFile": "./output/database-documentation.md"
},
"schemas": {
"exclude": ["sys", "INFORMATION_SCHEMA"]
},
"tables": {
"exclude": ["sysdiagrams", "__MigrationHistory"]
}
}
PostgreSQL Configuration
{
"version": "1.0.0",
"database": {
"provider": "postgresql",
"host": "localhost",
"port": 5432,
"database": "mydatabase",
"user": "postgres",
"password": "YourPassword",
"ssl": false
},
"ai": {
"provider": "openai",
"model": "gpt-4-turbo-preview",
"apiKey": "sk-...",
"temperature": 0.1,
"maxTokens": 8000
},
"analysis": {
"cardinalityThreshold": 20,
"sampleSize": 10,
"includeStatistics": true,
"guardrails": {
"enabled": true,
"maxTokensPerRun": 250000
}
},
"output": {
"stateFile": "./db-doc-state.json",
"outputDir": "./output",
"sqlFile": "./output/add-descriptions.sql",
"markdownFile": "./output/database-documentation.md"
},
"schemas": {
"exclude": ["pg_catalog", "information_schema"]
}
}
MySQL Configuration
{
"version": "1.0.0",
"database": {
"provider": "mysql",
"host": "localhost",
"port": 3306,
"database": "mydatabase",
"user": "root",
"password": "YourPassword"
},
"ai": {
"provider": "openai",
"model": "gpt-4-turbo-preview",
"apiKey": "sk-...",
"temperature": 0.1,
"maxTokens": 8000
},
"analysis": {
"cardinalityThreshold": 20,
"sampleSize": 10,
"includeStatistics": true,
"guardrails": {
"enabled": true,
"maxTokensPerRun": 250000
}
},
"output": {
"stateFile": "./db-doc-state.json",
"outputDir": "./output",
"sqlFile": "./output/add-descriptions.sql",
"markdownFile": "./output/database-documentation.md"
},
"schemas": {
"exclude": ["mysql", "information_schema", "performance_schema", "sys"]
}
}
Supported AI Providers
DBAutoDoc integrates with MemberJunction's AI provider system, supporting:
OpenAI
{
"provider": "OpenAILLM",
"model": "gpt-4-turbo-preview",
"apiKey": "sk-..."
}
Anthropic
{
"provider": "AnthropicLLM",
"model": "claude-3-5-sonnet-20241022",
"apiKey": "sk-ant-..."
}
Google
{
"provider": "GoogleLLM",
"model": "gemini-1.5-pro",
"apiKey": "..."
}
Groq
{
"provider": "GroqLLM",
"model": "llama-3.3-70b-versatile",
"apiKey": "gsk_..."
}
Other Providers
Any BaseLLM-compatible provider registered with MemberJunction can be used.
State File
The db-doc-state.json file tracks:
- All schemas, tables, and columns
- Description iterations with reasoning and confidence
- Analysis runs with metrics (tokens, cost, duration)
- Processing logs for debugging
- Relationship discovery results (primary keys, foreign keys)
- Guardrail metrics (phase and iteration budgets)
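Putting those pieces together, the file's top level looks roughly like the following sketch, inferred from the fields described above and the iteration example below; the actual field names may differ:

```typescript
interface DescriptionIteration {
  description: string; reasoning: string; generatedAt: string;
  modelUsed: string; confidence: number; triggeredBy: string; changedFrom?: string;
}

// Hypothetical shape of db-doc-state.json, assembled from the tracked items
// listed above; consult a generated state file for the authoritative schema.
interface StateFile {
  schemas: Array<{
    name: string;
    tables: Array<{
      name: string;
      descriptionIterations: DescriptionIteration[];
      columns: Array<{ name: string; descriptionIterations: DescriptionIteration[] }>;
    }>;
  }>;
  runs: Array<{ startedAt: string; totalTokens: number; estimatedCost: number }>;
  discoveredRelationships?: { primaryKeys: unknown[]; foreignKeys: unknown[] };
  logs: string[];
}
```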
Iteration Tracking
Each description has an iteration history:
{
"descriptionIterations": [
{
"description": "Initial hypothesis...",
"reasoning": "Based on column names...",
"generatedAt": "2024-01-15T10:00:00Z",
"modelUsed": "gpt-4",
"confidence": 0.75,
"triggeredBy": "initial"
},
{
"description": "Revised hypothesis...",
"reasoning": "Child table analysis revealed...",
"generatedAt": "2024-01-15T10:05:00Z",
"modelUsed": "gpt-4",
"confidence": 0.92,
"triggeredBy": "backpropagation",
"changedFrom": "Initial hypothesis..."
}
]
}
Programmatic Usage
DBAutoDoc can be used as a library with a comprehensive programmatic API:
Simple API (Recommended)
import { DBAutoDocAPI } from '@memberjunction/db-auto-doc';
const api = new DBAutoDocAPI();
// Analyze database
const result = await api.analyze({
database: {
provider: 'sqlserver',
host: 'localhost',
database: 'MyDB',
user: 'sa',
password: 'password'
},
ai: {
provider: 'OpenAILLM',
model: 'gpt-4-turbo-preview',
apiKey: 'sk-...'
},
analysis: {
convergence: { maxIterations: 10 },
guardrails: { maxTokensPerRun: 100000 }
},
output: {
outputDir: './output'
},
onProgress: (message, data) => {
console.log(message, data);
}
});
// Resume from state file
const resumed = await api.resume('./db-doc-state.json', {
analysis: {
convergence: { maxIterations: 20 }
}
});
// Export documentation
const exported = await api.export('./db-doc-state.json', {
formats: ['sql', 'markdown', 'html', 'csv', 'mermaid'],
outputDir: './docs',
applyToDatabase: true
});
// Get analysis status
const status = await api.getStatus('./db-doc-state.json');
console.log('Progress:', status.progress);
console.log('Tokens used:', status.metrics.totalTokens);
console.log('Estimated cost:', status.metrics.estimatedCost);
Advanced API (Full Control)
import {
  ConfigLoader,
  DatabaseConnection,
  Introspector,
  TopologicalSorter,
  StateManager,
  IterationTracker,
  PromptEngine,
  AnalysisEngine,
  GuardrailsManager,
  SQLGenerator,
  MarkdownGenerator,
  HTMLGenerator,
  CSVGenerator,
  MermaidGenerator
} from '@memberjunction/db-auto-doc';
// Load config
const config = await ConfigLoader.load('./config.json');
// Connect to database
const db = new DatabaseConnection(config.database);
await db.connect();
// Introspect
const driver = db.getDriver();
const introspector = new Introspector(driver);
const schemas = await introspector.getSchemas(config.schemas, config.tables);
// Initialize analysis components
const promptEngine = new PromptEngine(config.ai, './prompts');
await promptEngine.initialize();
const stateManager = new StateManager(config.output.stateFile);
const state = stateManager.createInitialState(config.database.database, config.database.host);
state.schemas = schemas;
const guardrails = new GuardrailsManager(config.analysis.guardrails);
const iterationTracker = new IterationTracker();
// Run analysis
const analysisEngine = new AnalysisEngine(config, promptEngine, stateManager, iterationTracker);
// ... custom analysis workflow
// Generate outputs
const sqlGen = new SQLGenerator();
const sql = sqlGen.generate(state, { approvedOnly: false });
const mdGen = new MarkdownGenerator();
const markdown = mdGen.generate(state);
const htmlGen = new HTMLGenerator();
const html = htmlGen.generate(state, { confidenceThreshold: 0.7 });
const csvGen = new CSVGenerator();
const { tables, columns } = csvGen.generate(state);
const mermaidGen = new MermaidGenerator();
const erdDiagram = mermaidGen.generate(state);
const erdHtml = mermaidGen.generateHtml(state);
Cost Estimation
Typical costs (will vary by database size and complexity):
| Database Size | Tables | Iterations | Tokens | Cost (GPT-4) | Cost (Groq) |
|---------------|--------|------------|--------|--------------|-------------|
| Small | 10-20 | 2-3 | ~50K | $0.50 | $0.02 |
| Medium | 50-100 | 3-5 | ~200K | $2.00 | $0.08 |
| Large | 200+ | 5-8 | ~500K | $5.00 | $0.20 |
| Enterprise | 500+ | 8-15 | ~1.5M | $15.00 | $0.60 |
With Relationship Discovery: Add 25-40% to token/cost estimates for databases with missing constraints.
With Sample Query Generation (5 queries/table, GPT-4o):
| Database Size | Tables | Additional Tokens | Additional Cost |
|---------------|--------|-------------------|-----------------|
| Small | 10-20 | ~100K | $0.50-1.00 |
| Medium | 50-100 | ~500K | $2.50-5.00 |
| Large | 200+ | ~2M | $10-20 |
Note: Sample query generation uses ~6× more API calls than description generation (planning + individual SQL generation for each query), adding ~50% to total token usage.
Guardrails help control costs by setting hard limits on token usage and runtime.
Best Practices
- Start with guardrails - Set reasonable token/cost limits to avoid surprises
- Add seed context - Helps AI understand database purpose and domain
- Review low-confidence items - Focus manual effort where AI is uncertain
- Use backpropagation - Improves accuracy significantly
- Enable relationship discovery - For legacy databases missing constraints
- Filter exports - Use --confidence-threshold to only apply high-confidence descriptions
- Iterate - Run analysis multiple times if the first pass isn't satisfactory
- Resume from checkpoints - Save costs by continuing previous runs
- Use appropriate models - Balance cost vs. quality (GPT-4 vs. Groq)
- Export multiple formats - HTML for browsing, CSV for analysis, SQL for database
Sample Query Generation Best Practices
- Use GPT-4o or Claude 3.5 - Best balance of quality, speed, and cost
- Set token budget - Prevents runaway costs (default: 100K tokens)
- Start with 5 queries/table - Good balance of coverage and cost
- Enable alignment validation - Ensures related queries use consistent logic
- Enable query fix - Auto-fixes failed queries (up to 3 attempts by default)
- Enable query refinement - LLM analyzes results to improve query quality
- Review generated queries - Verify SQL correctness before using for training
- Use for few-shot prompting - Include in AI agent system prompts as examples
- Generate separately - Use the generate-queries command on existing state to avoid re-running full analysis
- Export to metadata - Use export-sample-queries to sync queries to MemberJunction
- Focus on complex tables - Skip simple lookup tables to save costs
- Validate execution - Enable maxExecutionTime to test that queries run successfully
- Document patterns - Use generated queries to document common query patterns for your domain
Troubleshooting
"Connection failed"
- Check server, database, user, password in config
- Verify database server is running and accessible
- Check firewall rules and network connectivity
- For PostgreSQL: verify SSL settings
- For MySQL: check port and authentication method
"Analysis not converging"
- Increase maxIterations in config
- Lower confidenceThreshold
- Check warnings in state file for specific issues
- Review guardrail limits (may be hitting token budget)
"High token usage"
- Enable guardrails with appropriate limits
- Reduce maxTokens per prompt
- Filter schemas/tables to focus on a subset
- Use cheaper model (Groq instead of GPT-4)
- Disable relationship discovery if not needed
"Guardrail limits exceeded"
- Review metrics in state file
- Adjust limits upward if budget allows
- Use --resume to continue from checkpoint
- Focus on specific schemas/tables
- Reduce iteration count
"Relationship discovery not finding keys"
- Check confidence thresholds (may be too high)
- Review statistical significance settings
- Enable LLM validation for better accuracy
- Check naming patterns configuration
- Verify sample size is adequate
Documentation
Comprehensive documentation is available in the docs/ folder:
- USER_GUIDE.md - Complete user documentation
- ARCHITECTURE.md - Technical architecture and design
- API_USAGE.md - Programmatic API examples
- GUARDRAILS.md - Guardrails system documentation
- CHANGES.md - Recent changes and enhancements
Architecture
DBAutoDoc uses a sophisticated multi-phase architecture:
- Discovery Phase - Introspection and optional relationship discovery
- Analysis Phase - Iterative LLM-based description generation
- Sanity Check Phase - Validation and quality assurance
- Export Phase - Multi-format documentation generation
See ARCHITECTURE.md for comprehensive architecture documentation, including:
- Phase flow diagrams
- Extension points for customization
- Database driver development guide
- LLM intelligence strategy
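As a starting point for custom drivers, a contract along the following lines would cover the responsibilities this README describes. This is a hypothetical sketch, not the package's actual interface; see docs/ARCHITECTURE.md for the real extension contract:

```typescript
interface SchemaInfo { name: string; tables: string[] }

// Hypothetical driver contract, inferred from the documented responsibilities:
// structure introspection, data sampling, and database-specific metadata syntax
// (extended properties for SQL Server, COMMENT statements for MySQL/PostgreSQL).
interface DatabaseDriver {
  connect(): Promise<void>;
  getSchemas(): Promise<SchemaInfo[]>;                  // structure introspection
  sampleColumn(table: string, column: string, n: number): Promise<unknown[]>;
  descriptionStatement(
    target: { table: string; column?: string },
    description: string
  ): string;                                            // extended property / COMMENT SQL
}
```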
Contributing
DBAutoDoc is part of the MemberJunction project. Contributions welcome!
License
MIT
Demo Databases
LousyDB - Legacy Database Demo
Located in /Demos/LousyDB/, this demo showcases Relationship Discovery capabilities on a realistic legacy database:
- ❌ Zero metadata - No PK or FK constraints defined
- 🔤 Cryptic naming - Short abbreviations (cst, ord, pmt)
- 🔡 Single-char codes - Undocumented status values (A, T, P)
- 💔 Data quality issues - Orphaned records, NULLs, duplicates
- 📊 20 tables across 2 schemas with 1000+ rows
Perfect for testing DBAutoDoc's ability to reverse-engineer poorly-documented databases.
See /Demos/LousyDB/README.md for details and testing instructions.
Links
- GitHub: https://github.com/MemberJunction/MJ
- Documentation: https://docs.memberjunction.org
- Support: https://github.com/MemberJunction/MJ/issues
