@inkeep/openbolts-adapter-vercel-ai (v0.0.5)

Adapter that connects OpenBolts MCP tools to the Vercel AI SDK ToolLoopAgent. Supports any AI SDK-compatible provider (Anthropic, OpenAI, Claude Code, etc.) and exposes a provider-agnostic Adapter interface with structured iteration results, cost tracking, and budget gating.

Installation

bun add @inkeep/openbolts-adapter-vercel-ai ai @ai-sdk/mcp

Install the provider package for your model:

bun add @ai-sdk/anthropic           # Anthropic (Claude)
bun add @ai-sdk/openai              # OpenAI (GPT-4o)
bun add ai-sdk-provider-claude-code # Claude Code

Setting up your MCP server

The adapter spawns an MCP server via mcpServerPath (stdio transport) or connects via mcpServerUrl (SSE transport). For the stdio path, you need a script that boots an MCP server and connects it to StdioServerTransport. The minimal mcp-server.ts referenced in the examples below looks like:

// mcp-server.ts
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { createDatabaseClient } from '@inkeep/openbolts-core';
import { createMcpServer } from '@inkeep/openbolts-mcp';

const db = await createDatabaseClient();
const server = createMcpServer({ db });
const transport = new StdioServerTransport();
await server.connect(transport);

mcpServerPath is fed to bun run <path>, so the file must be executable as a Bun entry. Pass templates: RegisteredEngine[] to createMcpServer to register run-<id> tools alongside the journal/task-management tools. For a turnkey aggregator binary across third-party plugins, use npx @inkeep/openbolts-cli mcp instead.
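A template-aware variant of the minimal server above might look like the following. This is a sketch, not verified against the package's actual exports: the shape of the `templates` option follows the paragraph above, and the `RegisteredEngine[]` entries are left for you to supply.

```typescript
// mcp-server.ts (sketch): same minimal server, with engine templates registered.
// Assumption: createMcpServer accepts a `templates` option as described above.
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { createDatabaseClient } from '@inkeep/openbolts-core';
import { createMcpServer } from '@inkeep/openbolts-mcp';

const templates = []; // fill with your RegisteredEngine[] entries

const db = await createDatabaseClient();
// Registers a run-<id> tool per template alongside the journal/task tools.
const server = createMcpServer({ db, templates });
await server.connect(new StdioServerTransport());
```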

Quick Start

Anthropic (Claude Sonnet)

import { anthropic } from '@ai-sdk/anthropic';
import { VercelAiAdapter } from '@inkeep/openbolts-adapter-vercel-ai';

const adapter = new VercelAiAdapter({
  model: anthropic('claude-sonnet-4-20250514'),
  mcpServerPath: './mcp-server.ts', // your stdio MCP server script
  providerOptions: {
    anthropic: { cacheControl: true },
  },
});

await adapter.init();
const { result } = adapter.spawn({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'List all items' }] }],
});
const output = await result;
console.log(output.exitReason, output.cost);
await adapter.cleanup();

Claude Code

import { claudeCode } from 'ai-sdk-provider-claude-code';
import { VercelAiAdapter } from '@inkeep/openbolts-adapter-vercel-ai';

const adapter = new VercelAiAdapter({
  model: claudeCode('claude-code'),
  mcpServerPath: './mcp-server.ts', // your stdio MCP server script
});

await adapter.init();
const { result, abort } = adapter.spawn({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Create a new item called "Demo"' }] }],
  system: 'You are a helpful assistant.',
});
const output = await result;
await adapter.cleanup();

OpenAI (GPT-4o)

import { openai } from '@ai-sdk/openai';
import { VercelAiAdapter } from '@inkeep/openbolts-adapter-vercel-ai';

const adapter = new VercelAiAdapter({
  model: openai('gpt-4o'),
  mcpServerUrl: 'http://localhost:3000/sse',
  maxBudgetUsd: 0.50,
});

await adapter.init();
const { result } = adapter.spawn({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Show me all items' }] }],
});
const output = await result;
await adapter.cleanup();

API Reference

VercelAiAdapterConfig

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| model | LanguageModel | Yes | Any AI SDK-compatible model |
| mcpServerPath | string | * | Path to MCP server script (stdio transport) |
| mcpServerUrl | string | * | URL to MCP server (SSE transport) |
| stopWhen | StopCondition | No | When to stop the tool loop (default: stepCountIs(50)) |
| providerOptions | ProviderOptions | No | Provider-specific options (e.g., cache control, extended thinking) |
| onPrepareStep | (ctx: PrepareStepContext) => void | No | Hook called before each step |
| maxBudgetUsd | number | No | Cost ceiling — stops the loop when exceeded |
| vercelAiMaxRetries | number | No | Max retries at the AI SDK layer (default: 0). See Breaking changes below. |

* One of mcpServerPath or mcpServerUrl is required.

Breaking changes (unify-engine-resume)

  • Default maxRetries changed from 2 (AI SDK default) to 0. The session manager at @inkeep/openbolts-engine-runtime (Layer 4) owns the end-to-end retry contract — stacking SDK-level retries under the session manager double-counts Retry-After waits and distorts telemetry.totalRetryDelayMs. Restore legacy behavior via new VercelAiAdapter({ model, vercelAiMaxRetries: 2 }). Custom adapters wrapping other SDKs should follow the same pattern.
  • isTransientError(error) now returns true for api_error (5xx / 529 / "server error" / "bad gateway"). Previously these escalated on iteration 1; the session manager now retries with exponential backoff up to limits.maxIterations. Callers using the predicate directly may want to layer their own checks on classifyError(error) === 'api_error' if they needed the old escalate-on-server-error behavior.
  • error.suggestedRetryDelayMs is now populated from APICallError.responseHeaders (Retry-After, Retry-After-Ms). The session manager's computeRetryDelay already honored this forward-compat field; this PR lights it up. Custom onIterationEnd hooks that inspect outcome.error?.suggestedRetryDelayMs will now receive provider hints. Clamped to limits.retryDelay.maxMs on the consumer side.
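The Retry-After handling in the last bullet can be sketched as a small helper. This is a simplified illustration of the described behavior, not the adapter's actual implementation; header names and the clamp follow the bullet above.

```typescript
// Sketch: derive a suggested retry delay (ms) from provider response headers,
// clamped to a consumer-side maximum (e.g. limits.retryDelay.maxMs).
function suggestedRetryDelayMs(
  headers: Record<string, string>,
  maxMs: number,
): number | undefined {
  const ms = headers['retry-after-ms'];
  if (ms !== undefined) return Math.min(Number(ms), maxMs);
  const s = headers['retry-after'];
  if (s !== undefined && !Number.isNaN(Number(s))) {
    return Math.min(Number(s) * 1000, maxMs); // Retry-After is in seconds
  }
  return undefined; // no hint: caller falls back to exponential backoff
}
```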

Adapter interface

interface Adapter {
  init(): Promise<void>;            // Connect to MCP server, discover tools
  spawn(opts: SpawnOpts): IterationHandle;  // Start an agent loop
  cleanup(): Promise<void>;         // Close MCP connection
  overhead(): { tools: string[] };  // List discovered tool names
}

SpawnOpts

| Field | Type | Description |
|-------|------|-------------|
| messages | ModelMessage[] | Conversation messages for the agent (required, non-empty). The adapter always calls agent.generate({ messages }). |
| system | string? | System instructions |
| allowedTools | string[]? | Whitelist of tool names (overrides disallowedTools) |
| disallowedTools | string[]? | Blacklist of tool names |
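The allowedTools / disallowedTools precedence can be illustrated with a small filter. This is a sketch of the documented semantics, not the adapter's source; the tool names in the usage note are placeholders.

```typescript
// Sketch: resolve which discovered tools a spawn may use.
// When allowedTools is present it wins; disallowedTools is only consulted otherwise.
function resolveTools(
  discovered: string[],
  allowedTools?: string[],
  disallowedTools?: string[],
): string[] {
  if (allowedTools) return discovered.filter((t) => allowedTools.includes(t));
  if (disallowedTools) return discovered.filter((t) => !disallowedTools.includes(t));
  return discovered;
}
```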

IterationHandle

interface IterationHandle {
  result: Promise<AdapterIterationResult>;  // Resolves when the loop finishes
  abort: () => void;                        // Cancel the running loop
}
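A common pattern is to pair result with abort to impose a wall-clock timeout. A minimal, self-contained sketch (the generic type and the helper name are illustrative, not part of the package's API):

```typescript
interface IterationHandle<T = unknown> {
  result: Promise<T>;
  abort: () => void;
}

// Resolve with the loop result, or call abort() and reject after `ms` milliseconds.
async function withTimeout<T>(handle: IterationHandle<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => {
      handle.abort(); // the adapter then resolves with exitReason: 'aborted'
      reject(new Error(`aborted after ${ms}ms`));
    }, ms);
  });
  try {
    return await Promise.race([handle.result, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```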

AdapterIterationResult

| Field | Type | Description |
|-------|------|-------------|
| exitReason | ExitReason | 'success' \| 'steps_exhausted' \| 'budget_exhausted' \| 'context_limit' \| 'aborted' \| 'crashed' \| 'unknown' |
| steps | number | Total steps executed |
| usage | UsageSummary | Token counts (input, output, cache read/write) |
| cost | number | Estimated USD cost |
| duration | number | Wall-clock ms |
| sessionId | string | Unique ID for this run |
| toolCalls | ToolCallsSummary | Total calls, unique signatures, most repeated |
| text | string | Final text output |
| error | ErrorInfo | Present when exitReason is 'crashed' — includes message, category, isTransient. Enforced by discriminated union — always present for 'crashed', never for other exit reasons. |
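The discriminated-union guarantee on error means the type system only exposes it inside a 'crashed' branch. A simplified sketch (the types here are pared down from the real AdapterIterationResult for illustration):

```typescript
// Sketch: narrowed access to `error` via the exitReason discriminant.
type IterationResult =
  | { exitReason: 'crashed'; error: { message: string; isTransient: boolean } }
  | { exitReason: 'success' | 'steps_exhausted' | 'budget_exhausted' | 'context_limit' | 'aborted' | 'unknown' };

function describeResult(result: IterationResult): string {
  if (result.exitReason === 'crashed') {
    // `result.error` is only accessible inside this branch.
    return `crashed: ${result.error.message} (transient=${result.error.isTransient})`;
  }
  return result.exitReason;
}
```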

UsageSummary

interface UsageSummary {
  inputTokens: number;
  outputTokens: number;
  cacheCreationInputTokens: number;
  cacheReadInputTokens: number;
}

Provider-Specific Features

Anthropic — Cache Control

Pass cacheControl: true via providerOptions to enable prompt caching. Cache token usage is tracked in UsageSummary.cacheCreationInputTokens and cacheReadInputTokens, and reflected in cost calculations.

providerOptions: {
  anthropic: { cacheControl: true },
}

Anthropic — Extended Thinking

Enable extended thinking via provider options:

providerOptions: {
  anthropic: {
    thinking: { type: 'enabled', budgetTokens: 10000 },
  },
}

Budget Gating

Set maxBudgetUsd to cap spending. The adapter computes running cost after each step and stops the tool loop when the budget is exceeded, returning exitReason: 'budget_exhausted'. Budget gating is skipped for the Claude Code provider (which manages budget natively).
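The gate described above reduces to a simple check after each step. This sketch illustrates the documented behavior (running cost strictly exceeding the cap stops the loop), not the adapter's internal code:

```typescript
// Sketch of the budget gate: compare running cost to the configured cap.
// Returns the exit reason the loop should stop with, or null to continue.
function budgetGate(
  runningCostUsd: number,
  maxBudgetUsd?: number,
): 'budget_exhausted' | null {
  if (maxBudgetUsd === undefined) return null; // no cap configured
  return runningCostUsd > maxBudgetUsd ? 'budget_exhausted' : null;
}
```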

Cost Tracking

Built-in pricing for claude-sonnet-4-*, claude-opus-4-*, gpt-4o, and gpt-4o-mini model families (prefix matching). Unknown models return cost: 0. When maxBudgetUsd is set, spawn() throws if the configured model has no pricing data.
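Prefix matching over model families can be sketched as follows. The rates below are placeholders, not the package's actual pricing table; only the lookup mechanism (longest matching prefix wins) is illustrated.

```typescript
// Sketch: prefix-based pricing lookup. Rates are placeholders in USD per 1M tokens.
const PRICING_USD_PER_MTOK: Record<string, { input: number; output: number }> = {
  'claude-sonnet-4-': { input: 3, output: 15 },
  'claude-opus-4-': { input: 15, output: 75 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
  'gpt-4o': { input: 2.5, output: 10 },
};

function findPricing(modelId: string) {
  // Longest prefix wins, so 'gpt-4o-mini' is not shadowed by 'gpt-4o'.
  const match = Object.keys(PRICING_USD_PER_MTOK)
    .filter((prefix) => modelId.startsWith(prefix))
    .sort((a, b) => b.length - a.length)[0];
  return match ? PRICING_USD_PER_MTOK[match] : null; // unknown model: cost 0
}
```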

Claude Code — Skill Preloading via Agents

When using claudeCode(), skills can be preloaded into the agent's context by defining a Claude Code agent with a skills: frontmatter field:

# .claude/agents/engine-worker.md
---
name: engine-worker
skills:
  - explore
  - research
---
You are an engine worker agent. Use your loaded skills to investigate and complete tasks.

Then pass the agent name in the provider config:

const adapter = new VercelAiAdapter({
  model: claudeCode('sonnet', {
    permissionMode: 'bypassPermissions',
    agent: 'engine-worker',  // preloads skills from agent frontmatter
  }),
  mcpServerPath: './mcp-server.ts', // your stdio MCP server script
});

This is more structured than prompt-based skill loading ("Load /x skill") — skills are injected at subprocess startup, not discovered mid-conversation. See Agent Skills in the SDK for details.

Note: The Claude Agent SDK does not provide a programmatic API for skill registration — skills must be filesystem artifacts (.claude/skills/*/SKILL.md). The agent field controls which skills are preloaded, not which skills exist.

Architecture

VercelAiAdapter
  ├── init()  →  connectMcpClient()  →  MCP server (stdio or SSE)
  ├── spawn({ messages }) →  ToolLoopAgent.generate({ messages })
  │                          ├── prepareStep bridge (tool filtering, budget gate, user hook)
  │                          └── MCP tools (discovered at init)
  └── cleanup() → closeMcpClient()

The adapter receives a unified messages: ModelMessage[] array and always calls agent.generate({ messages }); there is no provider branching. Provider-specific concerns (message replay vs session resume) are handled by the bridge layer in @inkeep/openbolts-engine-runtime, not in the adapter itself.