
@catalystlabs/tryai

v0.5.1

Published

Dead simple AI library. One line setup. Zero config. Just works.

Downloads

25

Readme

TryAI

Dead simple AI library. One line setup. Zero config. Just works.

Built on battle-tested providers from production systems.

⚠️ Important: Semantic caching requires deploying your own Modal embedding service. Without it, the library still works perfectly with exact-match caching. See Configuration for details.

import { ai } from '@catalystlabs/tryai';

const response = await ai('Hello world');
console.log(response.content);

That's it. You're using AI.

Install

npm install @catalystlabs/tryai

# Run the setup wizard to configure your environment
npx @catalystlabs/tryai setup

Features

  • Zero Config - Just works with environment variables
  • Simple - One function to rule them all
  • Providers - OpenAI, Anthropic, Llama 4.0, LM Studio (local)
  • Conversation Management - Built-in state management with @xstate/store
  • Prompt Templates - TOML-based, simple variable replacement
  • Pipelines - Chain prompts together with context
  • De-identification - Remove PII automatically
  • Streaming - Built-in streaming support
  • TypeScript - Full type safety
  • Intelligent Routing - Automatic model selection based on task complexity

Quick Start

Simplest Usage

import { ai } from '@catalystlabs/tryai';

// Just works (uses OPENAI_API_KEY from env)
const response = await ai('What is 2+2?');
console.log(response.content); // "4"

Streaming

for await (const chunk of ai.stream('Tell me a story')) {
  process.stdout.write(chunk.content);
}

Different Models

// Use GPT-4
await ai.model('gpt-4').send('Complex question here');

// Use GPT-3.5 (cheaper)
await ai.model('gpt-3.5-turbo').send('Simple question');

// Use Claude
import { providers } from '@catalystlabs/tryai';
const claude = providers.claude();
await claude.send('Hello Claude!');

Prompt Templates

Define in Code

import { definePrompts, prompt } from '@catalystlabs/tryai';

definePrompts({
  greeting: 'Say hello to {{name}} in {{language}}',
  
  summarize: {
    prompt: 'Summarize in {{words}} words: {{text}}',
    options: { temperature: 0.3 }
  }
});

// Use them
const response = await prompt('greeting', {
  name: 'Alice',
  language: 'Spanish'
});

Or Use TOML Files

Create prompts.toml:

greeting = "Say hello to {{name}} in {{language}}"

[summarize]
prompt = "Summarize in {{words}} words: {{text}}"
options = { temperature = 0.3 }

Then:

import { prompt } from '@catalystlabs/tryai';

const response = await prompt('greeting', {
  name: 'Alice',
  language: 'Spanish'  
});
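The "simple variable replacement" mentioned above means substituting `{{name}}`-style placeholders into the template string. A minimal standalone sketch of that mechanic (illustrative only, not tryai's internal implementation):

```typescript
// Minimal {{variable}} substitution, mirroring the template syntax above.
// Illustrative sketch only - not the library's actual implementation.
function renderTemplate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in variables ? variables[key] : match // leave unknown placeholders intact
  );
}

const rendered = renderTemplate('Say hello to {{name}} in {{language}}', {
  name: 'Alice',
  language: 'Spanish',
});
console.log(rendered); // "Say hello to Alice in Spanish"
```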

Conversations

Built-in conversation state management:

import { forge } from '@catalystlabs/tryai';

const ai = forge();

// Simple chat - conversation managed automatically
const response1 = await ai.chat('What is 2+2?');
const response2 = await ai.chat('What was my previous question?');
// AI remembers: "Your previous question was 'What is 2+2?'"

// Streaming with conversation context
for await (const chunk of ai.chatStream('Tell me a story')) {
  process.stdout.write(chunk.content);
}

// Access conversation methods
ai.conversation.addSystemMessage('You are a helpful pirate');
ai.conversation.getMessages();        // Get all messages
ai.conversation.getMessageCount();    // Count messages
ai.conversation.clear();              // Clear messages
ai.conversation.end();                // End conversation

// Export/Import conversations
const data = ai.conversation.export();
// ... save to database ...
ai.conversation.import(data);

// Subscribe to changes
ai.conversation.subscribe((snapshot) => {
  console.log('Messages:', snapshot.context.messages);
});

Conversation API

// Message Management
ai.conversation.start(id?: string);                    // Start new conversation
ai.conversation.addUserMessage(content: string);       // Add user message
ai.conversation.addSystemMessage(content: string);     // Add system message
ai.conversation.addAIResponse(response, time?);        // Add AI response

// State Access
ai.conversation.getMessages();                         // All messages
ai.conversation.getMessagesForAPI(includeSystem?);     // API-ready format
ai.conversation.getLastMessage();                      // Most recent message
ai.conversation.getUserMessages();                     // User messages only
ai.conversation.getAssistantMessages();                // AI messages only
ai.conversation.getId();                               // Conversation ID
ai.conversation.getMessageCount();                     // Total messages
ai.conversation.hasConversation();                     // Check if active

// State Management
ai.conversation.clear();                               // Clear all messages
ai.conversation.end();                                 // End conversation
ai.conversation.updateSettings(settings);              // Update settings
ai.conversation.setError(error);                       // Set error state
ai.conversation.clearError();                          // Clear error

// Metadata & Settings
ai.conversation.getSettings();                         // Get settings
ai.conversation.getMetadata();                         // Get metadata
ai.conversation.isLoading();                           // Check loading state
ai.conversation.getError();                            // Get current error

// Import/Export
ai.conversation.export();                              // Export to JSON
ai.conversation.import(data);                          // Import from JSON

// Subscriptions
ai.conversation.subscribe(callback);                   // Subscribe to changes
ai.conversation.destroy();                             // Cleanup subscriptions

Pipelines

Advanced prompt chaining with context:

import { ai, createPipeline } from '@catalystlabs/tryai';

const results = await createPipeline(ai)
  .prompt('List 3 random animals')
  .iterate(3, 'Tell me about {{previous}}')
  .transform((response, context) => response.toUpperCase())
  .accumulate('results')
  .run();
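Conceptually, each pipeline step feeds its output into the next step's prompt via the `{{previous}}` placeholder. A standalone sketch of that flow with a stubbed model call (`fakeAI` and `runChain` are illustrative names, not the library's internals):

```typescript
// Illustrative sketch of prompt chaining via a {{previous}} placeholder.
// `fakeAI` stands in for a real model call; the chaining logic is the point.
async function fakeAI(prompt: string): Promise<string> {
  return `answer(${prompt})`;
}

async function runChain(prompts: string[]): Promise<string[]> {
  const results: string[] = [];
  let previous = '';
  for (const template of prompts) {
    const prompt = template.replace('{{previous}}', previous);
    previous = await fakeAI(prompt);
    results.push(previous); // accumulate every step's output
  }
  return results;
}

const out = await runChain(['List 3 animals', 'Tell me about {{previous}}']);
console.log(out);
// ["answer(List 3 animals)", "answer(Tell me about answer(List 3 animals))"]
```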

De-identification

Automatically remove PII:

import { deidentify } from '@catalystlabs/tryai';

const response = await deidentify.ai(
  'Email jane@example.com at 555-1234',
  { keepMapping: true }
);

// PII is removed before sending to AI
// and restored in the response
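The mask-then-restore round trip can be illustrated standalone: replace each PII match with a placeholder token, remember the mapping, and swap the originals back into the response afterwards. A simplified sketch (the regexes here are deliberately minimal; real de-identification needs much broader detection):

```typescript
// Simplified mask/restore round trip for emails and phone numbers.
// Real PII detection needs far broader coverage than these two regexes.
const PII_PATTERNS: [string, RegExp][] = [
  ['EMAIL', /[\w.+-]+@[\w-]+\.[\w.]+/g],
  ['PHONE', /\b\d{3}-\d{4}\b/g],
];

function mask(text: string): { masked: string; mapping: Map<string, string> } {
  const mapping = new Map<string, string>();
  let masked = text;
  for (const [label, pattern] of PII_PATTERNS) {
    let i = 0;
    masked = masked.replace(pattern, (match) => {
      const token = `[${label}_${i++}]`;
      mapping.set(token, match); // remember original for restoration
      return token;
    });
  }
  return { masked, mapping };
}

function restore(text: string, mapping: Map<string, string>): string {
  let restored = text;
  for (const [token, original] of mapping) {
    restored = restored.split(token).join(original);
  }
  return restored;
}

const { masked, mapping } = mask('Email jane@example.com at 555-1234');
console.log(masked); // "Email [EMAIL_0] at [PHONE_0]"
console.log(restore(masked, mapping)); // "Email jane@example.com at 555-1234"
```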

Configuration

Environment Variables

# AI Provider Keys (at least one required)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Modal Embedding Service (OPTIONAL - for semantic caching)
# Users must deploy their own Modal service!
# See: https://git.catalystlab.cc/catalyst/tryai/tree/main/modal_services
MODAL_EMBEDDING_ENDPOINT=https://your-modal-app.modal.run
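Without the Modal endpoint, the library falls back to exact-match caching. The difference is easy to picture with a toy sketch: an exact-match cache keys only on the literal prompt string, while semantic caching would also hit on similar wording (everything below is illustrative, not tryai's internals):

```typescript
// Toy exact-match response cache: hits only when the prompt string is identical.
// Semantic caching would instead compare embedding similarity between prompts.
const cache = new Map<string, string>();

async function cachedCall(
  prompt: string,
  call: (p: string) => Promise<string>
): Promise<string> {
  const hit = cache.get(prompt);
  if (hit !== undefined) return hit; // exact string match only
  const result = await call(prompt);
  cache.set(prompt, result);
  return result;
}

let calls = 0;
const fake = async (p: string) => { calls++; return `reply:${p}`; };

await cachedCall('What is 2+2?', fake);     // miss: hits the model
await cachedCall('What is 2+2?', fake);     // hit: served from cache
await cachedCall('what is 2 + 2?', fake);   // miss: not an exact match
console.log(calls); // 2
```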

Manual Setup

import { forge } from '@catalystlabs/tryai';

const ai = forge({
  provider: 'openai',
  apiKey: 'sk-...',
  model: 'gpt-4',
  temperature: 0.7
});

Pre-configured Providers

import { providers } from '@catalystlabs/tryai';

const gpt4 = providers.openai();    // GPT-4
const gpt35 = providers.gpt35();     // GPT-3.5-turbo  
const claude = providers.claude();    // Claude Opus
const sonnet = providers.sonnet();    // Claude Sonnet (faster)
const llama4 = providers.llama4();    // Llama 4.0 405B
const llama70b = providers.llama4_70b(); // Llama 4.0 70B
const vision = providers.llama_vision(); // Llama Vision
const local = providers.local();      // LM Studio local models

API

Core Functions

  • ai(message, options?) - Send a message
  • ai.stream(message, options?) - Stream a response
  • ai.model(name) - Use a different model

Prompt Functions

  • prompt(name, variables?, options?) - Use a template
  • definePrompts(templates) - Define templates in code
  • loadPrompts(path) - Load TOML file

Chain Functions

  • chain() - Start a chain
  • .step(message, options?) - Add a step
  • .prompt(name, variables?) - Use template in chain
  • .run() - Execute the chain

De-identification

  • deidentify(text, options?) - Remove PII
  • deidentify.ai(message, options?) - AI call with PII removal
  • reidentify(text) - Restore PII
  • clearMappings() - Clear stored mappings

Examples

Check out the /examples folder for more:

  • simple.ts - All features demonstrated
  • chatbot.ts - Simple chatbot
  • pipeline.ts - Complex processing pipeline

Philosophy

AI should be simple. This library is intentionally minimal.

No complex abstractions. No enterprise patterns. No bullshit.

Just simple functions that work.

Advanced Features

While the API is simple, the underlying providers are battle-tested and support all advanced features:

Full Message Control

// Pass complete message arrays
const response = await ai([
  { role: 'system', content: 'You are a helpful assistant' },
  { role: 'user', content: 'Hello' }
]);

All Provider Options

// OpenAI with all parameters
await ai('Generate text', {
  model: 'gpt-4',
  temperature: 0.9,
  maxTokens: 100,
  topP: 0.95,
  frequencyPenalty: 0.5,
  presencePenalty: 0.5,
  stop: ['\n\n'],
  responseFormat: { type: 'json_object' }
});

// Anthropic with specific options
await ai('Generate text', {
  model: 'claude-3-opus-20240229',
  temperature: 0.7,
  topK: 40,
  stopSequences: ['\n\n']
});

Streaming with Proper Chunks

for await (const chunk of ai.stream('Tell a story')) {
  if (chunk.content) process.stdout.write(chunk.content);
  if (chunk.usage) console.log('Tokens:', chunk.usage);
}

See /examples/advanced.ts for more.

Intelligent Routing Mode

Use LLMRouter for automatic model selection based on task complexity and priority:

// Zero config - just set LLMROUTER_URL in .env
const response = await ai.routed("Explain quantum computing");
console.log(response.text);
console.log(response.metadata.selectedModel); // "gpt-4" or "gpt-3.5-turbo" etc.
console.log(response.metadata.cost); // 0.0023

// Submit feedback to improve routing
await response.submitFeedback({
  helpful: true,
  rating: 5,
  comment: "Perfect explanation"
});

Priority Modes

Control the quality/cost tradeoff:

// Highest quality, regardless of cost
const premium = await ai.routed("Complex medical diagnosis", {
  priority: "quality_first"
});

// Balance quality and cost (default)
const balanced = await ai.routed("Summarize this article", {
  priority: "balanced"
});

// Minimize cost, accept lower quality
const budget = await ai.routed("Simple question", {
  priority: "aggressive_cost"
});

Configuration

# Required
LLMROUTER_URL=http://localhost:8000

# Optional
LLMROUTER_API_KEY=your-key

The router analyzes your prompt across 7 dimensions (creativity, reasoning, etc.) and selects the optimal model based on your priority mode. It includes semantic caching for 70%+ cost savings on similar queries.
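To make the routing idea concrete, here is a toy sketch of complexity-based model selection under a priority mode. The scoring heuristic, thresholds, and model names are illustrative assumptions, not LLMRouter's actual algorithm:

```typescript
// Toy router: score prompt "complexity" and pick a model per priority mode.
// Heuristic and thresholds are invented for illustration only.
type Priority = 'quality_first' | 'balanced' | 'aggressive_cost';

function scoreComplexity(prompt: string): number {
  // Longer prompts with reasoning cues score higher (0..1).
  const cues = ['why', 'explain', 'diagnos', 'prove', 'compare'];
  const cueHits = cues.filter((c) => prompt.toLowerCase().includes(c)).length;
  return Math.min(1, prompt.length / 500 + cueHits * 0.2);
}

function selectModel(prompt: string, priority: Priority): string {
  if (priority === 'quality_first') return 'gpt-4';
  if (priority === 'aggressive_cost') return 'gpt-3.5-turbo';
  // balanced: route on the complexity score
  return scoreComplexity(prompt) > 0.2 ? 'gpt-4' : 'gpt-3.5-turbo';
}

console.log(selectModel('Explain quantum computing', 'balanced')); // "gpt-4"
console.log(selectModel('Hi', 'balanced')); // "gpt-3.5-turbo"
```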

v0.3.0 Release Notes

Major Improvements

  • 🔒 Enhanced Security - Path validation prevents directory traversal attacks
  • 🎯 Improved Type Safety - Replaced most any types with proper TypeScript interfaces
  • 💾 Better Resource Management - Added close() methods for proper cleanup
  • 🚀 Truly Optional Features - All features (cache, safety, analytics) work independently
  • 🛡️ Production-Ready Error Handling - Consistent error messages with better context
  • 🔧 No Breaking Changes - Existing code continues to work

Resource Cleanup (New Feature)

import { forge } from '@catalystlabs/tryai';

const ai = forge();
// ... use AI ...
await ai.close(); // Clean up all resources

License

MIT