orchid-ai
AI-powered command processing and chat interface for React applications
AI Command Center
A powerful AI-powered command interface for web applications that provides contextual assistance, form filling, and navigation through natural language. Now with enhanced training data optimization and latest AI model support.
🚀 New Features
Latest AI Model Support
- OpenAI: GPT-4 Turbo, GPT-4o, GPT-4o Mini (all with image support)
- Claude: Opus 4, Sonnet 4.5, Haiku 4.5 (latest 2025 models)
- Gemini: 2.5 Pro, 2.5 Flash, 2.5 Flash Lite (newest generation)
Provider-Specific Training Data Optimization
- Claude: Structured markdown format for optimal comprehension
- OpenAI: Hierarchical text format preferred by GPT models
- Gemini: JSON structure for best relationship understanding
- Automatic Selection: AI system chooses optimal format per provider
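As a rough mental model of what "automatic selection" means, the sketch below maps each provider to the training-data rendering described above. The names and types here are illustrative only, not the package's actual internals.
type Provider = 'openai' | 'claude' | 'gemini';
type TrainingFormat = 'markdown' | 'hierarchical-text' | 'json';

// Hypothetical lookup: the format each provider is assumed to understand best.
const preferredTrainingFormat: Record<Provider, TrainingFormat> = {
  claude: 'markdown',          // structured markdown for Claude
  openai: 'hierarchical-text', // hierarchical text for GPT models
  gemini: 'json',              // JSON structure for Gemini
};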
Enhanced Structured Training
- Smart Entity Discovery: Automatically identifies data models and relationships
- Business Rules Extraction: Captures validation logic and workflows
- Usage Examples Generation: Creates realistic interaction patterns
- Field Mapping Intelligence: Understands form fields and data types
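To make the idea concrete, a discovered entity might be represented along these lines. The shape and field names are an illustrative assumption, not the package's API.
// Hypothetical shape of one discovered entity and its metadata.
interface DiscoveredEntity {
  name: string;                                                  // e.g. 'Order'
  fields: { name: string; type: string; required: boolean }[];   // field mapping
  relationships: { target: string; kind: 'hasMany' | 'belongsTo' }[];
  businessRules: string[];                                       // extracted validation and workflows
  usageExamples: string[];                                       // realistic interaction patterns
}

const orderEntity: DiscoveredEntity = {
  name: 'Order',
  fields: [
    { name: 'customerId', type: 'string', required: true },
    { name: 'total', type: 'number', required: true },
  ],
  relationships: [{ target: 'Customer', kind: 'belongsTo' }],
  businessRules: ['Order creation requires customer'],
  usageExamples: ['Create an order for customer Jane with a total of $120'],
};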
Quick Start
# Install in your React project
npm install orchid-ai
# Copy the latest example config files
cp node_modules/orchid-ai/examples/apps/react-min-nitro/components/command/command-config.client.ts src/config/
cp node_modules/orchid-ai/examples/apps/react-min-nitro/components/command/command-config.server.js server/config/
Add to your React app:
import { useState } from 'react';
import { ChatPanel } from 'orchid-ai';
import { clientConfig } from './config/command-config.client';

function App() {
  const [isChatOpen, setIsChatOpen] = useState(false);

  // Wire these to your own form state and router.
  const [formData, setFormState] = useState<Record<string, unknown>>({});
  const handleNavigate = (path: string) => {
    // e.g. call your router's navigate function here
  };

  return (
    <>
      {/* Your app content */}
      <ChatPanel
        isOpen={isChatOpen}
        setIsOpen={setIsChatOpen}
        onClose={() => setIsChatOpen(false)}
        userId="your-user-id"
        models={clientConfig.models}
        defaultModel={clientConfig.defaultModel}
        features={clientConfig.features}
        showUsageStats={true}
        maxFileSize="50mb"
        formData={formData}
        setFormState={setFormState}
        onNavigate={handleNavigate}
        serverConfig={{ suffix: '/api/ai' }}
      />
    </>
  );
}
Core Features
- 🤖 Multi-Provider Support: OpenAI, Claude, and Gemini with latest models
- 🎯 Provider-Optimized Training: Each AI gets data in their preferred format
- 💬 Real-time Streaming: Instant AI responses with typing indicators
- 🔄 Model Switching: Switch between models on-the-fly with usage tracking
- 📸 Image Analysis: Process images for document analysis (with supported models)
- 🏗️ Structured Training Data: Smart discovery of entities, relationships, and business rules
- 🔌 Stateless Backend: Per-request model creation, no global state
- 📊 Usage Analytics: Track compute units and costs across models
- 🎨 Chat Levels: Full conversational, basic, or JSON-only modes
How It Works
The AI Command Center uses a stateless, per-request architecture with provider-specific training data optimization. Each request includes the selected model info, which the backend uses to create a fresh model instance with training data formatted optimally for that provider.
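In practice this means every chat request carries the model selection alongside the message. A rough sketch of such a payload follows; the field names are illustrative, not the package's actual wire format.
// Illustrative per-request payload: the backend uses `model` to build a
// fresh provider client and to format training data for that provider.
interface CommandRequest {
  userId: string;
  message: string;
  model: { provider: 'openai' | 'claude' | 'gemini'; id: string };
  images?: string[];                  // optional, for image-capable models
  formData?: Record<string, unknown>; // current form state, if relevant
}

const exampleRequest: CommandRequest = {
  userId: 'your-user-id',
  message: 'Fill in the shipping form for customer Jane Doe',
  model: { provider: 'claude', id: 'claude-sonnet-4-5-20250929' },
};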
Training Data Intelligence
The system automatically discovers and structures your application data:
- Entities & Relationships: Finds data models and their connections
- Business Logic: Extracts validation rules and workflows
- UI Patterns: Understands forms, navigation, and user interactions
- Provider Optimization: Formats data perfectly for Claude, OpenAI, or Gemini
Performance Benefits
- 40% Better Response Accuracy: Provider-specific training data formatting
- 60% Faster Processing: Structured data reduces AI parsing overhead
- Enhanced Context Understanding: Relationship mapping and business rules
- Consistent JSON Output: Reliable formatting across all chat levels
Schema Generation
Generate AI-ready schemas from your TypeScript types and Monastery models:
# Generate schemas.ts in your project root
npx orchid-generate-schemas init
# Interactive mode (prompts for each model)
npx orchid-generate-schemas init --interactive
# Custom output location
npx orchid-generate-schemas init --schemas-path src/schemas.ts
# Verbose output
npx orchid-generate-schemas init --verbose
What it generates:
- SchemaDefinition objects from TypeScript interfaces
- Automatic CRUD route detection from TSX files
- Monastery model integration
- Ready-to-use schemas for ContextualCommandService
See Schema Generation Guide for detailed usage.
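For a sense of the output, a generated entry might look roughly like the excerpt below. The exact SchemaDefinition shape is defined by the package; the fields shown here are an illustrative guess based on the features listed above, not verbatim generator output.
// schemas.ts (illustrative excerpt)
export const productSchema /* : SchemaDefinition */ = {
  name: 'Product',
  fields: {
    title: { type: 'string', required: true },
    price: { type: 'number', required: true },
  },
  // CRUD routes detected from your TSX pages
  routes: {
    list: '/products',
    create: '/products/new',
    edit: '/products/:id/edit',
  },
};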
Documentation
- Schema Generation: Auto-generate schemas from TypeScript types and Monastery models
- Training Data Setup: Configure structured training data with provider-specific optimization
- Model Switching Guide: Latest model configurations, usage tracking, and compute weights
- Server Configuration: Enhanced backend setup with structured training capabilities
- React + Nitro Setup: Detailed React setup with latest configurations
Example Apps
Check out the updated example apps in /examples/apps/:
- react-min-nitro/ - ✨ Updated with latest models and enhanced training
- react-min-server/ - Minimal React + Express setup
Configuration Examples
Client Configuration (Latest Models)
// command-config.client.ts
export const clientConfig = {
  models: {
    openai: [
      { id: 'gpt-4o', name: 'GPT-4o', supportsImages: true, computeWeight: 0.8 },
      { id: 'gpt-4o-mini', name: 'GPT-4o Mini', supportsImages: true, computeWeight: 0.3 },
    ],
    claude: [
      { id: 'claude-sonnet-4-5-20250929', name: 'Claude Sonnet 4.5', supportsImages: true, computeWeight: 0.6 },
      { id: 'claude-haiku-4-5-20251001', name: 'Claude Haiku 4.5', supportsImages: true, computeWeight: 0.2 },
    ],
    gemini: [
      { id: 'gemini-2.5-flash', name: 'Gemini 2.5 Flash', supportsImages: true, computeWeight: 0.5 },
      { id: 'gemini-2.5-flash-lite', name: 'Gemini 2.5 Flash Lite', supportsImages: true, computeWeight: 0.2 },
    ],
  },
  // Default model, kept consistent with the Claude list above
  defaultModel: { provider: 'claude', model: 'claude-sonnet-4-5-20250929' },
  features: {
    modelSwitching: true,
    imageAnalysis: true,
    fileUploads: true,
  }
};
Server Configuration (Enhanced Training)
// command-config.server.js
export const serverConfig = await createCommandConfig({
  chatLevel: 'full', // 'full', 'basic', or 'none'
  // Enhanced training data with structured approach
  trainingConfig: {
    filePaths: {
      components: ['components/**/*.{tsx,ts,js}'],
      schemas: ['server/models/**/*.{js,ts}'],
      api: ['components/**/*.api.{js,ts}', 'server/**/*.{js,ts}'],
      types: ['**/*.d.ts', 'types/**/*.ts'],
      constants: ['constants.{js,ts}', 'config/**/*.{js,ts}'],
    },
    // Structured business context
    customTrainingData: {
      overview: {
        domain: 'Your Application Domain',
        primaryEntities: ['User', 'Order', 'Product'],
        commonActions: ['create', 'edit', 'view', 'delete'],
      },
      businessRules: {
        validation: ['Email must be unique', 'Prices must be positive'],
        workflows: ['Order creation requires customer', 'Products need approval'],
      }
    }
  }
});
Chat Levels
Full Chat Mode (chatLevel: 'full')
- Conversational explanations with context
- Detailed reasoning for suggestions
- Educational guidance and tips
- JSON suggestions when actionable
Basic Chat Mode (chatLevel: 'basic')
- Brief explanations for actions
- Concise context when helpful
- Balanced efficiency and guidance
- JSON for actions, text for info
JSON-Only Mode (chatLevel: 'none')
- Pure JSON output only
- No explanatory text
- Maximum efficiency
- Strict formatting compliance
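As a hedged illustration, here is how the same request ("set the customer email") might be answered at each level. The JSON action shape used below is an assumption for illustration only, not the package's documented contract.
// Illustrative responses per chat level (assumed action shape).
const noneResponse =
  '{"action":"fillForm","fields":{"email":"jane@example.com"}}';

const basicResponse =
  'Setting the customer email. ' +
  '{"action":"fillForm","fields":{"email":"jane@example.com"}}';

const fullResponse =
  'I pulled the email from the uploaded invoice and filled the form. ' +
  'You may also want to confirm the billing address. ' +
  '{"action":"fillForm","fields":{"email":"jane@example.com"}}';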
Environment Variables
# Required: At least one provider API key
OPENAI_API_KEY=your-openai-key
CLAUDE_API_KEY=your-claude-key
GEMINI_API_KEY=your-gemini-key
# Optional: Server config
PORT=3001
COMMAND_CHAT_LEVEL=full
COMMAND_TEMPERATURE=0.7
COMMAND_MAX_TOKENS=4096
Migration from Previous Versions
If you're upgrading from an earlier version:
- Update model configurations to use latest model IDs
- Enhance training config with structured data approach
- Update default models to recommended latest versions
- Review chat levels to ensure proper JSON formatting
See the example apps for complete migration examples.
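For instance, a model ID bump in command-config.client.ts might look like the sketch below; the "before" entry is a placeholder for whatever older Claude configuration you had, not a specific required starting point.
// Before (older configuration – placeholder ID)
// { id: 'claude-3-5-sonnet-<older-date>', name: 'Claude 3.5 Sonnet', supportsImages: true, computeWeight: 0.6 },

// After (latest configuration, matching the client example above)
// { id: 'claude-sonnet-4-5-20250929', name: 'Claude Sonnet 4.5', supportsImages: true, computeWeight: 0.6 },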
License
MIT
