
@ellie-ai/model-providers

v0.2.0

Model provider implementations for Ellie agents (OpenAI, Anthropic, Mistral).

Overview

This package provides LLM provider implementations that integrate with @ellie-ai/agent-plugin. Each provider transforms between Ellie's internal conversation format and the provider's API format, handling both synchronous and streaming responses.

Installation

bun add @ellie-ai/model-providers

Available Providers

OpenAI

Full support for OpenAI models including GPT-4, GPT-4o, and o1 reasoning models.

import { openAI } from '@ellie-ai/model-providers';

// Simple usage with a model name
const provider = openAI('gpt-4o-mini');

// Full configuration
const configuredProvider = openAI({
  model: 'gpt-4o',
  apiKey: 'sk-...', // defaults to process.env.OPENAI_API_KEY
  temperature: 0.7,
  maxTokens: 1000,
  baseUrl: 'https://api.openai.com/v1', // optional custom endpoint
  contextWindowSize: 128000, // default: 128000
});

// For o1 models with reasoning
const reasoningProvider = openAI({
  model: 'o1-preview',
  reasoning: 'high', // 'low' | 'medium' | 'high'
});

Features:

  • ✅ Synchronous generation (generate())
  • ✅ Streaming generation (generateStream())
  • ✅ Function/tool calling
  • ✅ Reasoning output (o1 models)
  • ✅ Token usage tracking
  • ✅ Full test coverage (11 tests)

Anthropic

Full support for Anthropic Claude models including Claude 3.5 Sonnet, Claude 3 Opus/Sonnet/Haiku, and Claude 4 models with extended thinking.

import { anthropic } from '@ellie-ai/model-providers';

// Simple usage with a model name
const provider = anthropic('claude-3-5-sonnet-20241022');

// Full configuration
const configuredProvider = anthropic({
  model: 'claude-3-5-sonnet-20241022',
  apiKey: 'sk-ant-...', // defaults to process.env.ANTHROPIC_API_KEY
  temperature: 0.7,
  maxTokens: 4096,
  contextWindowSize: 200000, // default: 200000
});

// For models with extended thinking
const thinkingProvider = anthropic({
  model: 'claude-sonnet-4-5-20250929',
  thinking: {
    enabled: true,
    budgetTokens: 10000, // minimum: 1024
  },
});

Features:

  • ✅ Synchronous generation (generate())
  • ✅ Streaming generation (generateStream())
  • ✅ Function/tool calling
  • ✅ Extended thinking support (Claude 3.7+/4+)
  • ✅ Token usage tracking
  • ✅ Full test coverage (12 tests)

Grok (Coming Soon)

xAI Grok support is planned.

Provider Interface

All providers implement the ModelProvider interface from @ellie-ai/agent-plugin:

interface ModelProvider {
  /**
   * Generate a response synchronously
   */
  generate(
    conversation: ConversationItem[],
    tools: Tool[]
  ): Promise<ModelResponse>;

  /**
   * Generate a response with streaming (optional)
   *
   * Providers that support streaming should implement this method.
   * The middleware will automatically use streaming when available.
   */
  generateStream?(
    conversation: ConversationItem[],
    tools: Tool[],
    onToken: (chunk: { type: "content" | "function_call"; contentChunk: string }) => void,
    onReasoning: (chunk: { type: "reasoning"; reasoningChunk: string }) => void,
    signal?: AbortSignal
  ): Promise<ModelResponse>;
}
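The interface above can be exercised with a deterministic stand-in. A minimal sketch, with the @ellie-ai/agent-plugin types copied locally for self-containment (the `ModelResponse` shape with an `items` array is an assumption here; check the actual package for the exact definition):

```typescript
// Local stand-ins for the @ellie-ai/agent-plugin types, reduced to what
// this sketch needs. The real definitions live in that package.
type ConversationItem =
  | { type: "message"; role: "user" | "assistant" | "system"; content: string }
  | { type: "function_call"; id: string; name: string; args: Record<string, unknown> }
  | { type: "function_call_output"; call_id: string; output: string }
  | { type: "reasoning"; summary: string };
type Tool = { name: string; description?: string };
type ModelResponse = { items: ConversationItem[] }; // assumed shape

interface ModelProvider {
  generate(conversation: ConversationItem[], tools: Tool[]): Promise<ModelResponse>;
}

// An echo provider: replies with the last user message. Useful as a
// deterministic stand-in when testing the agent loop without network calls.
const echoProvider: ModelProvider = {
  async generate(conversation, _tools) {
    const lastUser = [...conversation]
      .reverse()
      .find((item) => item.type === "message" && item.role === "user");
    const content =
      lastUser && lastUser.type === "message" ? lastUser.content : "";
    return {
      items: [{ type: "message", role: "assistant", content: `echo: ${content}` }],
    };
  },
};

const res = await echoProvider.generate(
  [{ type: "message", role: "user", content: "ping" }],
  []
);
const reply = res.items[0];
if (reply.type === "message") console.log(reply.content); // "echo: ping"
```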

Conversation Format

Providers transform between Ellie's unified conversation format and provider-specific APIs:

type ConversationItem =
  | { type: "message"; role: "user" | "assistant" | "system"; content: string }
  | { type: "function_call"; id: string; name: string; args: Record<string, unknown> }
  | { type: "function_call_output"; call_id: string; output: string }
  | { type: "reasoning"; summary: string };
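For illustration, a full tool-calling turn in this format might look like the following (the type is copied locally so the snippet stands alone; the ids and values are invented):

```typescript
// Local copy of the documented union; the real type lives in @ellie-ai/agent-plugin.
type ConversationItem =
  | { type: "message"; role: "user" | "assistant" | "system"; content: string }
  | { type: "function_call"; id: string; name: string; args: Record<string, unknown> }
  | { type: "function_call_output"; call_id: string; output: string }
  | { type: "reasoning"; summary: string };

// One tool-calling round trip: user asks, the model emits a function_call,
// the tool result comes back as function_call_output, the model answers.
const conversation: ConversationItem[] = [
  { type: "message", role: "system", content: "You are a weather assistant" },
  { type: "message", role: "user", content: "Weather in Paris?" },
  { type: "function_call", id: "call_1", name: "get_weather", args: { city: "Paris" } },
  { type: "function_call_output", call_id: "call_1", output: '{"tempC":18}' },
  { type: "message", role: "assistant", content: "It is 18°C in Paris." },
];

// The call_id on the output must match the id of the originating function_call.
const call = conversation[2];
const result = conversation[3];
console.log(
  call.type === "function_call" &&
    result.type === "function_call_output" &&
    call.id === result.call_id
); // true
```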

Streaming

When a provider implements generateStream(), the agent middleware automatically uses it and dispatches streaming delta actions:

  • AGENT_MODEL_STREAM_STARTED - Streaming begins
  • AGENT_MODEL_STREAM_CONTENT_DELTA - Content chunk received
  • AGENT_MODEL_STREAM_REASONING_DELTA - Reasoning chunk received (o1 and extended-thinking models)
  • AGENT_MODEL_STREAM_COMPLETED - Streaming finished

Example: Observing streaming state

import { createRuntime } from '@ellie-ai/runtime';
import { agentPlugin } from '@ellie-ai/agent-plugin';
import { openAI } from '@ellie-ai/model-providers';

const agent = agentPlugin({
  model: openAI('gpt-4o-mini')
});

const runtime = createRuntime({ plugins: [agent] });

// Subscribe to state changes to see streaming updates
runtime.subscribe(() => {
  const state = runtime.getState();

  if (state.agent.isStreaming) {
    console.log('Current message:', state.agent.currentMessage);
    console.log('Current reasoning:', state.agent.currentReasoning);
  }
});

runtime.execute("Write a haiku about programming");

See packages/examples/src/streaming-response.ts for a full example.

Usage with Agent

import { createRuntime } from '@ellie-ai/runtime';
import { agentPlugin } from '@ellie-ai/agent-plugin';
import { openAI } from '@ellie-ai/model-providers';

const agent = agentPlugin({
  model: openAI('gpt-4o-mini'),
  tools: [/* your tools */],
  systemMessage: "You are a helpful assistant",
  maxLoops: 10,
});

const runtime = createRuntime({
  plugins: [agent],
});

const handle = runtime.execute("Hello!");
await handle.completed;

console.log(runtime.getState().agent.conversation);

Testing

Run the test suite:

bun test

Current coverage:

  • ✅ OpenAI provider (11 tests)
  • ✅ Anthropic provider (12 tests)
  • ⏳ Grok provider (not yet implemented)

Architecture

Helper Functions

The OpenAI provider uses extracted helper functions to eliminate code duplication:

  • transformConversationToOpenAI() - Converts internal format to OpenAI messages
  • transformToolsToOpenAI() - Converts Tool[] to OpenAI function definitions
  • buildRequestParams() - Builds request configuration
  • transformOpenAIResponseToConversation() - Converts OpenAI response to internal format
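The real implementations live in src/openai.ts. As a hypothetical sketch of what the first transform might do, with the internal type copied locally and a reduced, assumed subset of OpenAI's chat message shape:

```typescript
// Local copy of the internal union (real type in @ellie-ai/agent-plugin).
type ConversationItem =
  | { type: "message"; role: "user" | "assistant" | "system"; content: string }
  | { type: "function_call"; id: string; name: string; args: Record<string, unknown> }
  | { type: "function_call_output"; call_id: string; output: string }
  | { type: "reasoning"; summary: string };

// Assumed subset of OpenAI's chat message shape, for illustration only.
type OpenAIMessage =
  | { role: "user" | "assistant" | "system"; content: string }
  | {
      role: "assistant";
      tool_calls: [{ id: string; type: "function"; function: { name: string; arguments: string } }];
    }
  | { role: "tool"; tool_call_id: string; content: string };

// Hypothetical sketch of the transform: each internal item maps onto zero
// or more OpenAI chat messages.
function transformConversationToOpenAI(items: ConversationItem[]): OpenAIMessage[] {
  return items.flatMap((item): OpenAIMessage[] => {
    switch (item.type) {
      case "message":
        return [{ role: item.role, content: item.content }];
      case "function_call":
        // OpenAI expects arguments as a JSON string on an assistant message.
        return [{
          role: "assistant",
          tool_calls: [{
            id: item.id,
            type: "function",
            function: { name: item.name, arguments: JSON.stringify(item.args) },
          }],
        }];
      case "function_call_output":
        return [{ role: "tool", tool_call_id: item.call_id, content: item.output }];
      case "reasoning":
        return []; // reasoning items are not sent back to the API
    }
  });
}

const out = transformConversationToOpenAI([
  { type: "message", role: "user", content: "hi" },
  { type: "function_call", id: "c1", name: "f", args: { a: 1 } },
]);
console.log(out.length); // 2
```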

Streaming Implementation

Streaming providers:

  1. Accept onToken and onReasoning callbacks
  2. Call callbacks for each chunk as it arrives
  3. Return the final ModelResponse when complete

The agent middleware:

  1. Detects if generateStream() exists
  2. Dispatches delta actions when callbacks fire
  3. Updates state.agent.currentMessage and state.agent.currentReasoning
  4. Clears streaming state on completion
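The provider side of this contract can be sketched with local type stand-ins and a hard-coded chunk list in place of a real API stream (the `ModelResponse` shape is assumed, and the chunk values are invented):

```typescript
// Local stand-ins for the @ellie-ai/agent-plugin types.
type ConversationItem =
  | { type: "message"; role: "user" | "assistant" | "system"; content: string };
type Tool = { name: string };
type ModelResponse = { items: ConversationItem[] }; // assumed shape
type TokenChunk = { type: "content" | "function_call"; contentChunk: string };

// A toy streaming provider: emits hard-coded chunks through onToken as they
// "arrive", then returns the assembled message as the final ModelResponse.
async function generateStream(
  conversation: ConversationItem[],
  tools: Tool[],
  onToken: (chunk: TokenChunk) => void,
  onReasoning: (chunk: { type: "reasoning"; reasoningChunk: string }) => void,
  signal?: AbortSignal
): Promise<ModelResponse> {
  const chunks = ["Hello", ", ", "world"];
  let content = "";
  for (const piece of chunks) {
    if (signal?.aborted) break;                       // honor cancellation between chunks
    onToken({ type: "content", contentChunk: piece }); // callback fires per chunk
    content += piece;
  }
  return { items: [{ type: "message", role: "assistant", content }] };
}

// Middleware-side view of the contract: callbacks fire per chunk, and the
// promise resolves with the complete response.
const received: string[] = [];
const response = await generateStream(
  [{ type: "message", role: "user", content: "hi" }],
  [],
  (chunk) => received.push(chunk.contentChunk),
  () => {}
);
console.log(received.join("") === response.items[0].content); // true
```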

Roadmap

See TODO.md for the complete roadmap including:

  • Missing providers (Grok)
  • Error handling improvements (retries, rate limits, timeouts)
  • Advanced features (prompt caching, vision/multimodal, cost tracking)
  • Provider abstraction (fallback, registry, orchestration)

Contributing

When implementing a new provider:

  1. Create src/<provider>.ts implementing ModelProvider
  2. Add factory function (e.g., anthropic())
  3. Export from src/index.ts
  4. Add tests in src/__tests__/<provider>.test.ts
  5. Update this README
  6. Update TODO.md

See src/openai.ts as a reference implementation.

License

MIT