TryAI
Dead simple AI library. One line setup. Zero config. Just works.
Built on battle-tested providers from production systems.
⚠️ Important: Semantic caching requires deploying your own Modal embedding service. Without it, the library still works perfectly with exact-match caching. See Configuration for details.
import { ai } from '@catalystlabs/tryai';
const response = await ai('Hello world');
console.log(response.content);
That's it. You're using AI.
Install
npm install @catalystlabs/tryai
# Run the setup wizard to configure your environment
npx @catalystlabs/tryai setup
Features
- Zero Config - Just works with environment variables
- Simple - One function to rule them all
- Providers - OpenAI, Anthropic, Llama 4.0, LM Studio (local)
- Conversation Management - Built-in state management with @xstate/store
- Prompt Templates - TOML-based, simple variable replacement
- Pipelines - Chain prompts together with context
- De-identification - Remove PII automatically
- Streaming - Built-in streaming support
- TypeScript - Full type safety
- Intelligent Routing - Automatic model selection based on task complexity
Quick Start
Simplest Usage
import { ai } from '@catalystlabs/tryai';
// Just works (uses OPENAI_API_KEY from env)
const response = await ai('What is 2+2?');
console.log(response.content); // "4"
Streaming
for await (const chunk of ai.stream('Tell me a story')) {
process.stdout.write(chunk.content);
}
Different Models
// Use GPT-4
await ai.model('gpt-4').send('Complex question here');
// Use GPT-3.5 (cheaper)
await ai.model('gpt-3.5-turbo').send('Simple question');
// Use Claude
import { providers } from '@catalystlabs/tryai';
const claude = providers.claude();
await claude.send('Hello Claude!');
Prompt Templates
Define in Code
import { definePrompts, prompt } from '@catalystlabs/tryai';
definePrompts({
greeting: 'Say hello to {{name}} in {{language}}',
summarize: {
prompt: 'Summarize in {{words}} words: {{text}}',
options: { temperature: 0.3 }
}
});
// Use them
const response = await prompt('greeting', {
name: 'Alice',
language: 'Spanish'
});
Or Use TOML Files
Create prompts.toml:
greeting = "Say hello to {{name}} in {{language}}"
[summarize]
prompt = "Summarize in {{words}} words: {{text}}"
options = { temperature = 0.3 }
Then:
import { prompt } from '@catalystlabs/tryai';
const response = await prompt('greeting', {
name: 'Alice',
language: 'Spanish'
});
Conversations
Built-in conversation state management:
import { forge } from '@catalystlabs/tryai';
const ai = forge();
// Simple chat - conversation managed automatically
const response1 = await ai.chat('What is 2+2?');
const response2 = await ai.chat('What was my previous question?');
// AI remembers: "Your previous question was 'What is 2+2?'"
// Streaming with conversation context
for await (const chunk of ai.chatStream('Tell me a story')) {
process.stdout.write(chunk.content);
}
// Access conversation methods
ai.conversation.addSystemMessage('You are a helpful pirate');
ai.conversation.getMessages(); // Get all messages
ai.conversation.getMessageCount(); // Count messages
ai.conversation.clear(); // Clear messages
ai.conversation.end(); // End conversation
// Export/Import conversations
const data = ai.conversation.export();
// ... save to database ...
ai.conversation.import(data);
// Subscribe to changes
ai.conversation.subscribe((snapshot) => {
console.log('Messages:', snapshot.context.messages);
});
Conversation API
// Message Management
ai.conversation.start(id?: string); // Start new conversation
ai.conversation.addUserMessage(content: string); // Add user message
ai.conversation.addSystemMessage(content: string); // Add system message
ai.conversation.addAIResponse(response, time?); // Add AI response
// State Access
ai.conversation.getMessages(); // All messages
ai.conversation.getMessagesForAPI(includeSystem?); // API-ready format
ai.conversation.getLastMessage(); // Most recent message
ai.conversation.getUserMessages(); // User messages only
ai.conversation.getAssistantMessages(); // AI messages only
ai.conversation.getId(); // Conversation ID
ai.conversation.getMessageCount(); // Total messages
ai.conversation.hasConversation(); // Check if active
// State Management
ai.conversation.clear(); // Clear all messages
ai.conversation.end(); // End conversation
ai.conversation.updateSettings(settings); // Update settings
ai.conversation.setError(error); // Set error state
ai.conversation.clearError(); // Clear error
// Metadata & Settings
ai.conversation.getSettings(); // Get settings
ai.conversation.getMetadata(); // Get metadata
ai.conversation.isLoading(); // Check loading state
ai.conversation.getError(); // Get current error
// Import/Export
ai.conversation.export(); // Export to JSON
ai.conversation.import(data); // Import from JSON
// Subscriptions
ai.conversation.subscribe(callback); // Subscribe to changes
ai.conversation.destroy(); // Cleanup subscriptions
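Because export() produces JSON-serializable data, a conversation can be persisted and restored across processes. A minimal sketch, assuming export() returns a plain object and import() accepts that same shape (the file path and fs usage here are illustrative, not part of the library):
import { writeFileSync, readFileSync } from 'node:fs';
import { forge } from '@catalystlabs/tryai';
const ai = forge();
await ai.chat('Remember: my favorite color is blue');
// Persist the conversation state to disk (a database row would work the same way)
writeFileSync('conversation.json', JSON.stringify(ai.conversation.export()));
// Later, possibly in a fresh process, restore it
const restored = forge();
restored.conversation.import(JSON.parse(readFileSync('conversation.json', 'utf8')));
await restored.chat('What is my favorite color?'); // prior context is available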
Pipelines
Advanced prompt chaining with context:
import { createPipeline } from '@catalystlabs/tryai';
const pipeline = createPipeline(ai)
.prompt('List 3 random animals') // seed the pipeline with a first prompt
.iterate(3, 'Tell me about {{previous}}') // run 3 follow-ups, each seeing the prior output
.transform((response, context) => response.toUpperCase()) // post-process each response
.accumulate('results') // collect outputs under 'results' in the context
.run();
De-identification
Automatically remove PII:
import { deidentify } from '@catalystlabs/tryai';
const response = await deidentify.ai(
'Email [email protected] at 555-1234',
{ keepMapping: true }
);
// PII is removed before sending to AI
// and restored in the response
Configuration
Environment Variables
# AI Provider Keys (at least one required)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Modal Embedding Service (OPTIONAL - for semantic caching)
# Users must deploy their own Modal service!
# See: https://git.catalystlab.cc/catalyst/tryai/tree/main/modal_services
MODAL_EMBEDDING_ENDPOINT=https://your-modal-app.modal.run
Manual Setup
import { forge } from '@catalystlabs/tryai';
const ai = forge({
provider: 'openai',
apiKey: 'sk-...',
model: 'gpt-4',
temperature: 0.7
});
Pre-configured Providers
import { providers } from '@catalystlabs/tryai';
const gpt4 = providers.openai(); // GPT-4
const gpt35 = providers.gpt35(); // GPT-3.5-turbo
const claude = providers.claude(); // Claude Opus
const sonnet = providers.sonnet(); // Claude Sonnet (faster)
const llama4 = providers.llama4(); // Llama 4.0 405B
const llama70b = providers.llama4_70b(); // Llama 4.0 70B
const vision = providers.llama_vision(); // Llama Vision
const local = providers.local(); // LM Studio local models
API
Core Functions
- ai(message, options?) - Send a message
- ai.stream(message, options?) - Stream a response
- ai.model(name) - Use a different model
Prompt Functions
- prompt(name, variables?, options?) - Use a template
- definePrompts(templates) - Define templates in code
- loadPrompts(path) - Load TOML file (see the sketch below)
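loadPrompts is not demonstrated elsewhere in this README. A minimal sketch, assuming it registers the templates from the given TOML file the same way definePrompts does (it is awaited here, which is harmless if it turns out to be synchronous):
import { loadPrompts, prompt } from '@catalystlabs/tryai';
// Register every template defined in prompts.toml
await loadPrompts('./prompts.toml');
// Templates are then usable by name, exactly as with definePrompts
const res = await prompt('summarize', { words: 50, text: 'Long article text here...' });
console.log(res.content);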
Chain Functions
- chain() - Start a chain
- .step(message, options?) - Add a step
- .prompt(name, variables?) - Use template in chain
- .run() - Execute the chain (see the sketch below)
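The chain API is only listed here, so the following is a sketch built purely from the signatures above; whether later steps see earlier output, and the shape of the value run() resolves to, are assumptions:
import { chain } from '@catalystlabs/tryai';
const result = await chain()
.step('Name a famous scientist')
.step('In one sentence, say why they are famous') // assumed to see the previous step's output
.run();
console.log(result); // the exact result shape is an assumption; log it to inspect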
De-identification
- deidentify(text, options?) - Remove PII
- deidentify.ai(message, options?) - AI call with PII removal
- reidentify(text) - Restore PII
- clearMappings() - Clear stored mappings (round trip sketched below)
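A sketch of the full round trip using the functions above; whether deidentify and reidentify are async is not stated here, so both are awaited, which is harmless if they turn out to be synchronous:
import { deidentify, reidentify, clearMappings } from '@catalystlabs/tryai';
// Strip PII before the text leaves your process
const clean = await deidentify('Call Jane Doe at 555-1234', { keepMapping: true });
// ... use or log the cleaned text safely ...
// Restore the original values from the stored mapping
const original = await reidentify(clean);
// Drop mappings once done so PII does not linger in memory
clearMappings();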
Examples
Check out the /examples folder for more:
- simple.ts - All features demonstrated
- chatbot.ts - Simple chatbot
- pipeline.ts - Complex processing pipeline
Philosophy
AI should be simple. This library is intentionally minimal.
No complex abstractions. No enterprise patterns. No bullshit.
Just simple functions that work.
Advanced Features
While the API is simple, the underlying providers are battle-tested and support all advanced features:
Full Message Control
// Pass complete message arrays
const response = await ai([
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello' }
]);
All Provider Options
// OpenAI with all parameters
await ai('Generate text', {
model: 'gpt-4',
temperature: 0.9,
maxTokens: 100,
topP: 0.95,
frequencyPenalty: 0.5,
presencePenalty: 0.5,
stop: ['\n\n'],
responseFormat: { type: 'json_object' }
});
// Anthropic with specific options
await ai('Generate text', {
model: 'claude-3-opus-20240229',
temperature: 0.7,
topK: 40,
stopSequences: ['\n\n']
});
Streaming with Proper Chunks
for await (const chunk of ai.stream('Tell a story')) {
if (chunk.content) process.stdout.write(chunk.content);
if (chunk.usage) console.log('Tokens:', chunk.usage);
}
See /examples/advanced.ts for more.
Intelligent Routing Mode
Use LLMRouter for automatic model selection based on task complexity and priority:
// Zero config - just set LLMROUTER_URL in .env
const response = await ai.routed("Explain quantum computing");
console.log(response.text);
console.log(response.metadata.selectedModel); // "gpt-4" or "gpt-3.5-turbo" etc.
console.log(response.metadata.cost); // 0.0023
// Submit feedback to improve routing
await response.submitFeedback({
helpful: true,
rating: 5,
comment: "Perfect explanation"
});
Priority Modes
Control the quality/cost tradeoff:
// Highest quality, regardless of cost
const premium = await ai.routed("Complex medical diagnosis", {
priority: "quality_first"
});
// Balance quality and cost (default)
const balanced = await ai.routed("Summarize this article", {
priority: "balanced"
});
// Minimize cost, accept lower quality
const budget = await ai.routed("Simple question", {
priority: "aggressive_cost"
});
Configuration
# Required
LLMROUTER_URL=http://localhost:8000
# Optional
LLMROUTER_API_KEY=your-key
The router analyzes your prompt across 7 dimensions (creativity, reasoning, etc.) and selects the optimal model based on your priority mode. It includes semantic caching for 70%+ cost savings on similar queries.
v0.3.0 Release Notes
Major Improvements
- 🔒 Enhanced Security - Path validation prevents directory traversal attacks
- 🎯 Improved Type Safety - Replaced most any types with proper TypeScript interfaces
- 💾 Better Resource Management - Added close() methods for proper cleanup
- 🚀 Truly Optional Features - All features (cache, safety, analytics) work independently
- 🛡️ Production-Ready Error Handling - Consistent error messages with better context
- 🔧 No Breaking Changes - Existing code continues to work
Resource Cleanup (New Feature)
import { forge } from '@catalystlabs/tryai';
const ai = forge();
// ... use AI ...
await ai.close(); // Clean up all resources
License
MIT
