@planllama/ai-sdk

AI SDK integration for PlanLlama - Schedule and manage AI workflows with distributed job execution powered by the Vercel AI SDK.

Overview

@planllama/ai-sdk is a powerful integration layer that combines Vercel's AI SDK with PlanLlama's distributed job scheduling platform. It enables AI agents to automatically distribute tool execution across workers, making it easy to build scalable, production-ready AI applications.

Key Features

  • 🤖 Seamless AI SDK Integration - Drop-in replacement for standard AI SDK agents
  • 🔄 Distributed Tool Execution - Automatically routes tool calls through PlanLlama's job queue
  • ⚡ Scalable Architecture - Scale AI tool execution across multiple workers
  • 🎯 Request/Response Pattern - Built-in support for synchronous tool results
  • 🛠️ Zero Configuration - Works with existing AI SDK tools without modification
  • 🔌 Worker Registration - Automatically registers tools as background workers
  • 🌐 Polyglot Support - Create agent tools in JS/TS, Python, Ruby, or Go (or any mix) and let PlanLlama handle the communication

Installation

npm install @planllama/ai-sdk planllama ai

Quick Start

1. Get Your API Token

Sign up at https://planllama.io to get your API token from your project settings dashboard.
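
The examples below read the token from process.env.PLANLLAMA_API_TOKEN. A minimal setup sketch, assuming you keep the token in a local .env file and load it with the dotenv package (a common convention, not something this package requires):

import "dotenv/config"; // populates process.env from .env

if (!process.env.PLANLLAMA_API_TOKEN) {
  throw new Error("PLANLLAMA_API_TOKEN is not set");
}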

2. Create Your First Agent

import { AgentFactory, tool } from "@planllama/ai-sdk";
import { createOpenAI } from "@ai-sdk/openai";
import { z } from "zod";

// Initialize the agent factory
const factory = new AgentFactory({
  planLlamaApiToken: process.env.PLANLLAMA_API_TOKEN!,
});

// Create an AI agent with distributed tool execution
const agent = await factory.createAgent({
  model: createOpenAI({ 
    apiKey: process.env.OPENAI_API_KEY! 
  }).languageModel("gpt-4"),
  
  tools: {
    getWeather: tool({
      description: "Get the current weather for a given location",
      inputSchema: z.object({
        location: z.string().describe("The location to get the weather for"),
      }),
      execute: async (input) => {
        // This executes on a worker via PlanLlama
        const weather = await fetchWeatherAPI(input.location);
        return `Weather in ${input.location}: ${weather}`;
      },
    }),
  },
});

// Generate AI responses with distributed tool execution
const result = await agent.generate({
  prompt: "What's the weather in San Francisco?",
});

console.log(result.text);

How It Works

The @planllama/ai-sdk package wraps your AI SDK tools and automatically:

  1. Routes Tool Calls - When the AI agent needs to call a tool, the request is sent to PlanLlama's job queue
  2. Registers Workers - Tool implementations are automatically registered as workers that listen for jobs
  3. Executes Distributed - Workers process tool execution requests and return results
  4. Returns Results - The AI agent receives the tool result and continues generation
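
Conceptually, the wrapping looks something like the sketch below. This is a simplified illustration of the flow above, not the package's actual source; planLlama.work() is the only call shown later in this README, and the send() request/response helper name is an assumption:

import { PlanLlama } from "planllama";

type Execute = (input: unknown) => Promise<unknown>; // simplified tool signature

// Hypothetical sketch: route one tool through the queue.
function distribute(planLlama: PlanLlama, name: string, original: Execute): Execute {
  // Register the original implementation as a worker
  // (see "Worker Registration" below for the real call).
  planLlama.work(name, async (job) => original(job.data));

  // Replace the agent-facing execute with a round trip through the queue;
  // "send" stands in for whatever request/response method PlanLlama exposes.
  return async (input) => planLlama.send(name, input);
}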

This architecture allows you to:

  • Scale tool execution independently from your AI agent
  • Run resource-intensive tools on dedicated worker machines
  • Implement fault tolerance and retry logic
  • Monitor and observe tool execution patterns

API Reference

AgentFactory

The main entry point for creating distributed AI agents.

Constructor

new AgentFactory(settings: PlanLlamaAgentSettings)

Settings Options:

// Option 1: Use an existing PlanLlama instance
{
  planLlama: PlanLlama;
}

// Option 2: Create a new PlanLlama instance from API token
{
  planLlamaApiToken: string;
  planLlamaServerUrl?: string; // Optional custom server URL
}

Methods

createAgent<T>(settings: AgentSettings<T>)

Creates a new AI agent with distributed tool execution.

  • Parameters:

    • settings: Standard Vercel AI SDK Agent settings
      • model: The language model to use
      • tools: Object mapping tool names to tool definitions
      • Additional agent configuration options
  • Returns: Promise resolving to a configured AI Agent

Example:

const agent = await factory.createAgent({
  model: createOpenAI().languageModel("gpt-4"),
  tools: {
    myTool: tool({
      description: "Description of what this tool does",
      inputSchema: z.object({ /* ... */ }),
      execute: async (input) => { /* ... */ },
    }),
  },
});

tool()

Re-exported from ai package for convenience. See AI SDK documentation for full details.

Advanced Usage

Using an Existing PlanLlama Instance

If you already have a PlanLlama instance in your application, you can reuse it:

import { PlanLlama } from "planllama";
import { AgentFactory } from "@planllama/ai-sdk";

const planLlama = new PlanLlama({
  apiToken: process.env.PLANLLAMA_API_TOKEN!,
});

await planLlama.start();

const factory = new AgentFactory({
  planLlama, // Reuse existing instance
});

const agent = await factory.createAgent({
  // ... agent settings
});

Custom Server URL

For self-hosted or development environments:

const factory = new AgentFactory({
  planLlamaApiToken: process.env.PLANLLAMA_API_TOKEN!,
  planLlamaServerUrl: "https://custom.planllama.io",
});

Multiple Tools Example

const agent = await factory.createAgent({
  model: createOpenAI().languageModel("gpt-4"),
  
  tools: {
    getWeather: tool({
      description: "Get weather for a location",
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return await weatherAPI.get(location);
      },
    }),
    
    searchDatabase: tool({
      description: "Search the product database",
      inputSchema: z.object({
        query: z.string(),
        limit: z.number().optional(),
      }),
      execute: async ({ query, limit = 10 }) => {
        return await db.products.search(query, limit);
      },
    }),
    
    sendEmail: tool({
      description: "Send an email to a user",
      inputSchema: z.object({
        to: z.string().email(),
        subject: z.string(),
        body: z.string(),
      }),
      execute: async ({ to, subject, body }) => {
        await emailService.send({ to, subject, body });
        return "Email sent successfully";
      },
    }),
  },
});
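
With several tools registered, a single prompt can fan out across them. A hypothetical invocation (the prompt is made up; weatherAPI, db, and emailService above are placeholders):

const result = await agent.generate({
  prompt: "Find products matching 'rain jacket' and email the top match to pat@example.com",
});

console.log(result.text);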

Architecture

Distributed Execution Flow

┌─────────────┐         ┌──────────────┐         ┌─────────────┐
│  AI Agent   │ ──────> │  PlanLlama   │ ──────> │   Worker    │
│  (Client)   │         │    Queue     │         │  (Server)   │
└─────────────┘         └──────────────┘         └─────────────┘
      │                                                  │
      │ <─────────── Result via Socket.IO ───────────────│

  1. AI Agent calls a tool during generation
  2. AgentFactory intercepts the call and sends it to PlanLlama as a job request
  3. PlanLlama Queue routes the job to an available worker
  4. Worker executes the tool's execute function
  5. Result is returned to the AI agent via Socket.IO

Worker Registration

When you call createAgent(), the factory automatically:

// For each tool, this happens behind the scenes:
planLlama.work(toolName, async (job) => {
  return await tool.execute(job.data, options);
});

This registers your tool implementations as workers that listen for jobs from the PlanLlama queue.

Deployment Patterns

Single Process (Development)

Run both agent and workers in the same process:

const factory = new AgentFactory({ 
  planLlamaApiToken: process.env.PLANLLAMA_API_TOKEN! 
});

const agent = await factory.createAgent({ 
  /* ... */ 
});

// Both agent and workers are running
await agent.generate({ prompt: "..." });

Separate Workers (Production)

Agent Process:

// main-agent.ts
const agent = await factory.createAgent({
  model: createOpenAI().languageModel("gpt-4"),
  tools: {
    expensiveTool: tool({
      description: "CPU intensive operation",
      inputSchema: z.object({ /* ... */ }),
      // No execute function - runs on separate workers
    }),
  },
});

Worker Process:

// worker.ts
const planLlama = new PlanLlama({
  apiToken: process.env.PLANLLAMA_API_TOKEN!,
});

await planLlama.start();

planLlama.work("expensiveTool", async (job) => {
  // Heavy computation happens here
  return performExpensiveOperation(job.data);
});
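
Because every worker process registers a handler for the same tool name, you can scale horizontally by starting additional copies of worker.ts; the PlanLlama queue routes each job to an available worker, as described in How It Works above.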

Comparison with Standard AI SDK

Standard AI SDK

import { Agent, tool } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  model: openai("gpt-4"),
  tools: {
    myTool: tool({
      execute: async (input) => {
        // Executes locally in the same process
        return result;
      },
    }),
  },
});

With PlanLlama AI SDK

import { AgentFactory, tool } from "@planllama/ai-sdk";
import { openai } from "@ai-sdk/openai";

const factory = new AgentFactory({
  planLlamaApiToken: process.env.PLANLLAMA_API_TOKEN!,
});

const agent = await factory.createAgent({
  model: openai("gpt-4"),
  tools: {
    myTool: tool({
      execute: async (input) => {
        // Executes on a distributed worker via PlanLlama
        return result;
      },
    }),
  },
});

The API is nearly identical: just wrap agent creation in an AgentFactory and your tools automatically become distributed!

Caveats

  • Tool names must be unique within PlanLlama because they map to queue names. If you run two agents that both define a tool named foo, you risk the wrong version of foo handling a job.
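
One way to reduce that risk (a suggestion, not a documented PlanLlama feature) is to prefix tool keys with an agent-specific namespace, since the key is what maps to the queue name. Reusing the imports from Quick Start, with ticketStore as a hypothetical data source:

const supportAgent = await factory.createAgent({
  model: createOpenAI({ apiKey: process.env.OPENAI_API_KEY! }).languageModel("gpt-4"),
  tools: {
    // The "support_" prefix keeps this queue name distinct from any other
    // agent that also registers a lookup tool.
    support_lookupTicket: tool({
      description: "Look up a support ticket by id",
      inputSchema: z.object({ id: z.string() }),
      execute: async ({ id }) => ticketStore.get(id), // ticketStore is hypothetical
    }),
  },
});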

Requirements

  • Node.js 16+
  • PlanLlama API token from planllama.io
  • Compatible with any AI SDK-supported model provider (OpenAI, Anthropic, etc.)

License

MIT
