aids-remote-test
v0.2.2
AI Delivery System - Test version for remote mode validation
AI Baseline Generator
Advanced CLI tool that generates standardized .ai/ baseline structures for software projects using GPT-5.2 with optimized reasoning modes, JSON Schema validation, and prompt caching.
Overview
The AI Baseline Generator creates standardized .ai/ baseline structures for software projects by reading templates from .ai.template/ and adapting them to your repository. It uses GPT-5.2 with advanced reasoning modes and structured outputs.
Key Features
- GPT-5.2 Integration - Latest model with 400k context window, reasoning modes, and structured outputs
- JSON Schema Validation - File selection uses strict JSON schemas for guaranteed output format
- Reasoning Modes - Optimized `low` mode for file selection, `high` mode for content generation
- Prompt Caching - 90% cost reduction on cached system prompts and examples
- Remote Mode (SaaS) - Optional hosted API for testing without OpenAI key setup
- Template-based - Reads from the `.ai.template/` source of truth in the main repository
- Technology-agnostic - Works across all programming languages and frameworks
- Safe & Validated - Multi-stage validation prevents hallucinations and ensures quality
Quick Start
New here? See QUICKSTART.md for a 5-minute setup guide.
TL;DR
Local Mode (Bring Your Own Key):
# 1. Install and build
npm install
# 2. Set API key
echo "OPENAI_API_KEY=your-key" > .env
# 3. Use it!
./aids init /path/to/your/project # For existing projects
./aids init ./my-new-project # For new projects (auto-detects)
Remote Mode (No API Key Required):
# 1. Install and build
npm install
# 2. Point to hosted API
export AIDS_API_URL=https://your-api-url
# 3. Use it!
./aids init /path/to/your/project
That's it! The ./aids command auto-builds on first run and works from anywhere.
For remote mode setup and deployment, see docs/REMOTE_MODE.md.
For detailed documentation, continue reading below or check QUICKSTART.md.
Installation
npm install
This automatically builds the project via the postinstall script.
Usage
Single Command: init
The init command automatically detects whether you're working with an existing project or creating a new one.
For Existing Projects
Add AI Delivery System to an existing repository:
# Simple usage
./aids init /path/to/your/project
# With custom name and language
./aids init /path/to/your/project --name "Your Project" --lang pl
# Or using npm scripts (alternative)
npm run init /path/to/your/project
What it does:
- Scans your repository structure
- Uses GPT-5.2 to analyze your codebase
- Generates personalized `.ai/context` files
- Adapts to your tech stack automatically
For New Projects
Create a new project with AI Delivery System:
# Create new project (auto-detects empty directory)
./aids init ./my-new-project
# With description
./aids init ./my-new-project --description "A task management API"
# Choose IDE format
./aids init ./my-new-project --ide cursor
What it does:
- Creates the `.ai/` system structure
- Adds a README with instructions
- Sets up the `/bootstrap` command
- Ready for guided project setup
Next step: Open the project in your AI IDE and run /bootstrap to:
- Choose your tech stack interactively
- Create project structure step-by-step
- Generate dependencies and configuration
- Fill in all context files automatically
Dry Run (Preview Only)
./aids init /path/to/your/project --dry-run
Options
For init command:
- `[path]` - Path to project (default: current directory)
- `--name <name>` - Project name (optional, defaults to directory name)
- `--description <desc>` - Project description (for new projects)
- `--lang <language>` - Language for docs: `pl` or `en` (default: `en`)
- `--ide <target>` - Target IDE: `generic`, `cursor`, `windsurf`, `claude` (default: `generic`)
- `--dry-run` - Preview changes without writing files
- `--verbose` - Verbose output
Environment Variables
Local Mode:
# Required
export OPENAI_API_KEY=your-api-key
# Optional
export OPENAI_MODEL=gpt-5.2 # Latest GPT-5.2 model
export OPENAI_BASE_URL=https://api.openai.com/v1 # Custom base URL
Remote Mode (SaaS):
# Required
export AIDS_API_URL=https://your-api-url # Hosted backend API
# OpenAI key NOT required in remote mode
Note: The generator automatically uses optimized reasoning modes:
- `reasoning_effort: 'low'` for file selection (fast, cost-effective)
- `reasoning_effort: 'high'` for context generation (deep reasoning, quality)
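The per-phase split above can be pictured as a small config table. A minimal TypeScript sketch, with illustrative names (the real settings live in src/llm/llmConfig.ts and may differ):

```typescript
// Hypothetical per-phase model config; field names are assumptions.
type ReasoningEffort = "low" | "high";

interface PhaseConfig {
  model: string;
  reasoningEffort: ReasoningEffort;
  maxOutputTokens: number;
}

const PHASE_CONFIGS: Record<"fileSelection" | "contextGeneration", PhaseConfig> = {
  // Fast, cheap pass: pick 30-80 high-signal files from the repo tree.
  fileSelection: {
    model: process.env.OPENAI_MODEL ?? "gpt-5.2",
    reasoningEffort: "low",
    maxOutputTokens: 4_000,
  },
  // Deep pass: write the actual .ai/context documentation.
  contextGeneration: {
    model: process.env.OPENAI_MODEL ?? "gpt-5.2",
    reasoningEffort: "high",
    maxOutputTokens: 128_000,
  },
};

function configFor(phase: keyof typeof PHASE_CONFIGS): PhaseConfig {
  return PHASE_CONFIGS[phase];
}
```

Keeping the two phases in one table makes the low/high trade-off explicit and easy to tune in one place.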
Remote Mode (SaaS Deployment)
The tool supports an optional remote mode where generation happens on a hosted backend instead of locally. This is useful for:
- Testing the tool without setting up an OpenAI API key
- Centralized cost management and rate limiting
- Easier onboarding for new users
Quick Start:
# 1. Start the backend (in one terminal)
npm run server
# 2. Use CLI with remote API (in another terminal)
export AIDS_API_URL=http://localhost:3000
./aids init /path/to/your/project
For production deployment and ngrok setup, see: docs/REMOTE_MODE.md
The remote mode includes:
- Express-based API server
- Rate limiting (10 req/IP/hour)
- Gzip compression
- Same output quality as local mode
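The "10 req/IP/hour" policy can be sketched as a sliding-window counter. This is an illustrative TypeScript sketch, not the server's actual middleware (which may use an off-the-shelf limiter):

```typescript
// Sliding-window rate limiter: remembers request timestamps per IP and
// rejects once the window already holds `limit` requests.
class RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(ip: string, now: number = Date.now()): boolean {
    const windowStart = now - this.windowMs;
    // Drop timestamps that have fallen out of the window.
    const recent = (this.hits.get(ip) ?? []).filter((t) => t > windowStart);
    if (recent.length >= this.limit) {
      this.hits.set(ip, recent);
      return false; // over budget for this IP
    }
    recent.push(now);
    this.hits.set(ip, recent);
    return true;
  }
}
```

For the README's policy you would construct it as `new RateLimiter(10, 3_600_000)` and call `allow(ip)` per request.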
Architecture
Two Flows
Existing Project Flow
Scan Repo → LLM File Selection (GPT-5.2 low reasoning) → Extract Snippets → Generate Context (GPT-5.2 high reasoning) → Validate → Write
Flow:
- Scan Repository - Analyze directory structure and files
- LLM File Selection - GPT-5.2 selects 30-80 high-signal files
- Extract Snippets - Read selected files with secret masking
- Generate Context - GPT-5.2 creates personalized documentation
- Validate - Multi-stage validation (structure, links, secrets, size)
- Write - Atomic write with backup
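The six steps above can be sketched as a typed sequence of stages sharing one mutable context. The names here are illustrative, not the real orchestrator's API:

```typescript
// Minimal pipeline sketch: each stage reads and mutates a shared context.
interface PipelineContext {
  repoPath: string;
  selectedFiles: string[];
  snippets: Map<string, string>;
  generated: Map<string, string>;
}

type Step = { name: string; run: (ctx: PipelineContext) => void };

function runPipeline(repoPath: string, steps: Step[]): PipelineContext {
  const ctx: PipelineContext = {
    repoPath,
    selectedFiles: [],
    snippets: new Map(),
    generated: new Map(),
  };
  for (const step of steps) step.run(ctx); // stages run strictly in order
  return ctx;
}
```

A linear, ordered pipeline keeps each stage testable in isolation: validation can be run against a hand-built context without touching the LLM stages.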
New Project Flow
Detect Empty Directory → Copy .ai/ Template → Create Empty Context Files → Ready for /bootstrap
Flow:
- Detect Empty - Recognize new project (no code files)
- Copy Template - Copy the `.ai.template/` structure
- Create Placeholders - Generate empty context files with instructions
- Ready - User runs `/bootstrap` in the AI IDE to set up the project step-by-step
GPT-5.2 Integration
File Selection Phase:
- Model: `gpt-5.2` with `reasoning_effort: 'low'`
- Input: Full repository structure (up to 400k tokens)
- Output: JSON Schema validated file list (30-80 files)
- Validation: Strict JSON Schema prevents hallucinations
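A strict schema for this phase could look like the following sketch. The field names and exact constraints are assumptions; the real schema lives in src/llm/fileSelectionSchema.ts:

```typescript
// Hypothetical strict JSON Schema for the file-selection output.
const fileSelectionSchema = {
  type: "object",
  additionalProperties: false, // strict mode: no extra keys allowed
  required: ["files"],
  properties: {
    files: {
      type: "array",
      minItems: 30, // enforce the 30-80 file budget described above
      maxItems: 80,
      items: {
        type: "object",
        additionalProperties: false,
        required: ["path", "reason"],
        properties: {
          path: { type: "string" },   // still checked against the real tree afterwards
          reason: { type: "string" }, // why this file is high-signal
        },
      },
    },
  },
} as const;
```

Note that a schema can only constrain the shape; the path-vs-real-tree check still has to run as a separate validation step, since the model could emit a well-formed but non-existent path.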
Context Generation Phase:
- Model: `gpt-5.2` with `reasoning_effort: 'high'`
- Input: Selected file snippets + templates from `.ai.template/`
- Output: Comprehensive documentation (up to 128k tokens)
- Caching: System prompts cached for 90% cost reduction
Template System
Reads from .ai.template/ (source of truth) in the main repository:
- Context examples - `architecture.md`, `styleguide.md`, `security.md`, etc.
- Static templates - `rules.md`, `README.md`, workflow templates
- Prompt templates - System and user prompts optimized for GPT-5.2
Technology-Agnostic Heuristics
Classifies files without knowing the tech stack:
- Docs: `README*`, `docs/**`, `*.md`
- CI: `.github/**`, `.gitlab-ci.yml`, `Jenkinsfile`
- Infra: `docker*`, `k8s/**`, `terraform/**`, `helm/**`
- Schema: `migrations/**`, `*.sql`, `*.proto`, `*.graphql`
- Config: `package.json`, `requirements.txt`, `Cargo.toml`, `Makefile`
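A hypothetical version of this classifier, with regexes approximating the glob patterns above (the real logic lives in src/repo/classifyFiles.ts and may differ):

```typescript
// First matching rule wins; anything unmatched is "other".
type FileCategory = "docs" | "ci" | "infra" | "schema" | "config" | "other";

const RULES: Array<[FileCategory, RegExp]> = [
  ["docs", /(^|\/)README[^/]*$|(^|\/)docs\/|\.md$/i],
  ["ci", /(^|\/)\.github\/|(^|\/)\.gitlab-ci\.yml$|(^|\/)Jenkinsfile$/],
  ["infra", /(^|\/)docker|(^|\/)k8s\/|(^|\/)terraform\/|(^|\/)helm\//i],
  ["schema", /(^|\/)migrations\/|\.sql$|\.proto$|\.graphql$/],
  ["config", /(^|\/)(package\.json|requirements\.txt|Cargo\.toml|Makefile)$/],
];

function classifyFile(path: string): FileCategory {
  for (const [category, pattern] of RULES) {
    if (pattern.test(path)) return category;
  }
  return "other";
}
```

Because the rules are purely path-based, the classifier never needs to parse file contents, which is what keeps it technology-agnostic.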
Validation
Four-stage validation:
- Structure - All required files present
- Links - All paths point to real files
- Secrets - No API keys or tokens leaked
- Size - Max 300 lines per markdown file
Auto-retry: If validation fails, LLM gets feedback and retries once.
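Two of the four stages (secrets and size) are easy to sketch in TypeScript. The patterns and the 300-line limit below follow the description above, but the actual checks in src/validate/ may use different rules:

```typescript
// Illustrative secret patterns; a real detector would carry many more.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/,                // OpenAI-style API keys
  /AKIA[0-9A-Z]{16}/,                   // AWS access key IDs
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/, // PEM private keys
];

function detectSecrets(content: string): boolean {
  return SECRET_PATTERNS.some((p) => p.test(content));
}

function validateSize(content: string, maxLines = 300): boolean {
  return content.split("\n").length <= maxLines; // per-file markdown budget
}
```

On failure, these checks produce the feedback that gets fed back to the LLM for the single auto-retry.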
Write System
- Backup: Creates `.ai.bak-YYYYMMDD-HHMMSS` if `.ai/` exists
- Atomic: Writes to a temp dir, then renames (rollback on error)
- Dry-run: Preview changes without writing
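The backup-plus-rename scheme can be sketched with Node's fs primitives alone. A minimal sketch, assuming the .ai.bak-YYYYMMDD-HHMMSS naming described above (the real implementation in src/write/ may differ):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Timestamped backup name matching the .ai.bak-YYYYMMDD-HHMMSS convention.
function backupName(date: Date): string {
  const p = (n: number) => String(n).padStart(2, "0");
  return `.ai.bak-${date.getFullYear()}${p(date.getMonth() + 1)}${p(date.getDate())}` +
    `-${p(date.getHours())}${p(date.getMinutes())}${p(date.getSeconds())}`;
}

function writeAtomically(targetDir: string, files: Record<string, string>): void {
  const parent = path.dirname(targetDir);
  // 1. Back up an existing .ai/ directory instead of overwriting it.
  if (fs.existsSync(targetDir)) {
    fs.renameSync(targetDir, path.join(parent, backupName(new Date())));
  }
  // 2. Write everything into a temp dir first...
  const tmp = fs.mkdtempSync(path.join(parent, ".ai.tmp-"));
  for (const [rel, content] of Object.entries(files)) {
    const abs = path.join(tmp, rel);
    fs.mkdirSync(path.dirname(abs), { recursive: true });
    fs.writeFileSync(abs, content);
  }
  // 3. ...then rename into place, so readers never see a half-written .ai/.
  fs.renameSync(tmp, targetDir);
}
```

The single rename at the end is what makes the operation all-or-nothing: an error before it leaves the old backup and the temp dir on disk, and nothing at the target path is ever partially written.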
Output Structure
.ai/
├── README.md
├── rules.md
├── scope.md
├── review-contract.md
├── failure-modes.md
├── entrypoints.md
├── context/
│ ├── repo-map.md # Repository structure & conventions
│ ├── data-sources.md # APIs, databases, data models
│ ├── architecture.md # System architecture & patterns
│ ├── styleguide.md # UI/UX design system
│ └── security.md # Authentication & security
└── handoff/
├── README.md
└── templates/
├── context.template.md
├── plan.template.md
├── tasks.template.md
Generated Context Files
- `repo-map.md` - Repository structure, entry points, and organizational patterns
- `data-sources.md` - API contracts, database schemas, and data flow
- `architecture.md` - System architecture, layered design, and infrastructure
- `styleguide.md` - UI components, design tokens, and user experience patterns
- `security.md` - Authentication, authorization, and security considerations
All files are generated using GPT-5.2 with high reasoning effort and validated against templates from .ai.template/.
Examples
Create a new project
# Create new project directory
./aids init ./my-awesome-app --description "A modern web application"
# Open in your AI IDE (Cursor, Windsurf, Claude Code)
cd my-awesome-app
# Run /bootstrap command in AI assistant
# Follow interactive prompts:
# - What are you building? "A task management API with real-time updates"
# - Who is the target user? "Mobile app developers"
# - Tech preferences? "TypeScript, need WebSocket support"
# - Expected scale? "MVP for 1000 users"
#
# AI will recommend stack (e.g., Node.js + Express + Socket.io + PostgreSQL)
# Creates project structure step-by-step
# Fills in all .ai/context/ files
Add AI system to existing TypeScript project
./aids init /path/to/my-app --name "E-commerce Platform" --lang en
Add AI system to Go project (dry-run)
./aids init /path/to/go-service --name "Payment Service" --lang pl --dry-run
Add AI system to Python project
./aids init /path/to/ml-pipeline --name "ML Training Pipeline" --lang en
Create project for specific IDE
# For Cursor
./aids init ./my-project --ide cursor
# For Windsurf
./aids init ./my-project --ide windsurf
# For Claude Code
./aids init ./my-project --ide claude
Error Handling
Rate Limits
The generator handles OpenAI rate limits with exponential backoff (2 retries).
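Retry with exponential backoff and 2 retries could look like the following sketch (delays and structure are illustrative; the real retry logic may differ):

```typescript
// Retries a failing async call up to `retries` times, doubling the delay
// each attempt (1s, 2s, ...); rethrows once retries are exhausted.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 2,
  baseDelayMs = 1_000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping each OpenAI call in `withBackoff(() => client.call(...))` keeps the retry policy in one place instead of scattered across the pipeline.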
Validation Failures
If validation fails after auto-retry, the generator exits with errors. Common issues:
- Hallucinated paths: the LLM mentioned files that don't exist
  - Fix: the evidence pack may be too small, or the prompt may need tightening
- Secrets detected: API keys or tokens in the output
  - Fix: check secret masking in the evidence pack
- Missing files: required files were not generated
  - Fix: likely an LLM output parsing issue
No High-Signal Files
If the repo is very minimal (< 10 files), the generator will:
- Generate a minimal baseline
- Add a note in `context/repo-map.md`
- Still create the full structure
Development
Build
npm run build
The ./aids script automatically builds on first run if dist/ is missing.
Run
# Using aids executable (recommended)
./aids init --help
./aids create --help
# Or using npm scripts
npm run init -- --help
npm run create -- --help
Project Structure
src/
├── cli.ts # CLI entry point
├── config.ts # Config loader
├── types.ts # TypeScript types
├── init/ # Initialization (init command)
│ ├── index.ts # Exports
│ ├── detectProjectType.ts # Auto-detect existing vs new
│ ├── initExistingProject.ts # Flow for existing projects
│ └── initNewProject.ts # Flow for new projects
├── adapters/ # IDE-specific adapters
│ ├── BaseAdapter.ts # Base adapter class
│ ├── GenericAdapter.ts # Generic .ai/ format
│ ├── CursorAdapter.ts # Cursor IDE format
│ ├── WindsurfAdapter.ts # Windsurf IDE format
│ ├── ClaudeCodeAdapter.ts # Claude Code format
│ └── types.ts # Adapter types
├── llm/ # GPT-5.2 integration
│ ├── llmConfig.ts # Model configs & reasoning modes
│ └── fileSelectionSchema.ts # JSON Schema for file selection
├── generation/ # Context generation pipeline
│ ├── generateProjectContext.ts # Main orchestrator
│ ├── generateProjectContextFile.ts # Individual file generation
│ ├── selectContextFilesChunked.ts # File selection with GPT-5.2
│ ├── loadContextExamples.ts # Loads examples from .ai.template/
│ └── prompts/ # GPT-5.2 optimized prompts
│ ├── repo-map/ # Repo map generation
│ ├── architecture/ # Architecture docs
│ ├── data-sources/ # Data sources docs
│ ├── styleguide/ # Style guide docs
│ └── security/ # Security docs
├── repo/ # Repository analysis
│ ├── buildSnapshot.ts # Full repo snapshot
│ ├── buildTree.ts # Directory tree structure
│ ├── classifyFiles.ts # File type classification
│ ├── detectZonesAndTech.ts # Tech stack detection
│ ├── extractSnippets.ts # File content extraction
│ ├── scanRepo.ts # Repository scanner
│ └── ignore.ts # Gitignore processing
├── standards/ # Organization standards
│ ├── loadOrganizationStandards.ts
│ ├── mergeWithStandards.ts
│ └── transformForIDE.ts # IDE-specific transformations
├── validate/ # Multi-stage validation
│ ├── validateStructure.ts # Required files check
│ ├── validateLinks.ts # Path validation
│ ├── detectSecrets.ts # Secret detection
│ └── validateSize.ts # Size limits
├── write/ # Safe file writing
│ ├── applyWrites.ts # Atomic writes
│ ├── backup.ts # Backup system
│ ├── planWrites.ts # Write planning
│ └── dryRun.ts # Preview mode
└── utils/ # Utilities
├── logger.ts # Structured logging
├── errors.ts # Error handling
└── fs.ts # File operations
Design Principles
1. Template-Driven Generation
All output is based on templates from .ai.template/ source of truth. The generator adapts templates to your repository rather than creating from scratch.
2. GPT-5.2 Optimized
Uses latest GPT-5.2 capabilities:
- Structured Outputs - JSON Schema validation prevents hallucinations
- Reasoning Modes - Optimized `low`/`high` effort for different tasks
- Prompt Caching - 90% cost reduction on cached content
- 400k Context - Full repository analysis when needed
3. Technology Agnostic
Works across all programming languages and frameworks using heuristic-based file classification and pattern matching.
4. Multi-Stage Validation
Four-stage validation ensures quality:
- Structure - All required files present
- Links - All paths point to real files (prevents hallucinations)
- Secrets - No API keys or tokens leaked
- Size - Reasonable file sizes for readability
5. Safe by Default
- Dry-run mode - Preview changes without writing
- Atomic writes - All-or-nothing file operations
- Automatic backups - Never lose existing data
- Rollback support - Easy recovery from failures
Troubleshooting
"OPENAI_API_KEY environment variable is required"
export OPENAI_API_KEY=your-api-key
"Model not found: gpt-5.2"
GPT-5.2 should be available with your OpenAI API key. If not, check:
# List available models
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models | jq '.data[].id' | grep gpt
# Use fallback model
export OPENAI_MODEL=gpt-4o
"JSON Schema validation error"
The generator uses strict JSON Schema for file selection. If you see schema errors:
- Missing required fields: GPT-5.2 should guarantee schema compliance
- Invalid file paths: Check that repository structure is accessible
- Reasoning mode issues: Try different reasoning_effort settings
"Validation failed after retry"
Check the error messages. Common issues:
- LLM mentioned files that don't exist in the repo
- Evidence pack too small (repo is very minimal)
Try increasing verbosity:
./aids init . --name "X" --lang pl --verbose --dry-run
"Rate limit exceeded after retries"
GPT-5.2 has generous rate limits, but if exceeded:
# Wait and retry
sleep 60 && ./aids init /path/to/project
# Or use different API key
export OPENAI_API_KEY=your-other-key
Performance Optimization
The generator uses several optimizations:
- Prompt caching: System prompts cached for 90% cost reduction
- Reasoning modes: `low` for fast file selection, `high` for quality content
- Structured outputs: JSON Schema validation prevents retries
- Selective context: Only 30-80 high-signal files sent to LLM
Cost Estimation
Approximate costs per generation (GPT-5.2 pricing):
- File selection: ~$0.02 (cached prompts reduce to ~$0.002)
- Context generation: ~$0.50-1.00 per file (5 files total)
- Total per repository: ~$3-5 (first run), ~$1-2 (subsequent with caching)
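As a back-of-the-envelope check, these figures can be combined into a tiny calculator. All numbers are the README's approximations, not measured prices:

```typescript
// Rough per-repository cost estimate from the approximations above.
interface CostEstimate { selection: number; generation: number; total: number }

function estimateCost(contextFiles: number, cached: boolean): CostEstimate {
  const selection = cached ? 0.002 : 0.02;      // ~$0.02, ~$0.002 with caching
  const perFile = cached ? 0.3 : 0.75;          // within the ~$0.50-1.00 range, lower when cached
  const generation = contextFiles * perFile;
  return { selection, generation, total: selection + generation };
}
```

With the default 5 context files this lands around $3.77 for a first run and around $1.50 for a cached run, consistent with the $3-5 and $1-2 ranges above.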
License
MIT
Contributing
See main repository README for contribution guidelines.
