
llm-advanced-tools

v0.1.4


Provider-agnostic advanced tool use library for LLMs


LLM Advanced Tools - Provider-Agnostic Tool Use Library

A TypeScript library that brings advanced tool use features to all major LLM providers through the Vercel AI SDK (OpenAI, Anthropic, Google, and more).

Features

🔍 Tool Search Tool

Dynamically discover and load tools on-demand instead of loading everything upfront.

Benefits:

  • Reduced token usage by deferring tool loading
  • Improved accuracy with large tool sets
  • Scales to hundreds or thousands of tools
  • Anthropic reports 85%+ token reduction in their testing

🚀 Programmatic Tool Calling

Orchestrate tools through code execution rather than individual API calls.

Benefits:

  • Keeps intermediate results out of the LLM context
  • Parallel tool execution
  • Better control flow with loops, conditionals, and data transformations
  • Anthropic reports 37%+ token reduction on complex tasks in their testing

📝 Tool Use Examples

Provide sample invocations to improve tool call accuracy.

Benefits:

  • Show proper usage patterns
  • Clarify format conventions and optional parameters
  • Anthropic reports 18%+ accuracy improvement on complex parameters in their testing

Installation

npm install llm-advanced-tools

Quick Start

import { Client, ToolRegistry, VercelAIAdapter, ToolDefinition } from 'llm-advanced-tools';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// 1. Create a tool registry
const registry = new ToolRegistry({
  strategy: 'smart',  // 'smart', 'keyword', or 'custom'
  maxResults: 5
});

// 2. Define tools with advanced features
const weatherTool: ToolDefinition = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  inputSchema: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
      units: {
        type: 'string',
        enum: ['celsius', 'fahrenheit'],
        description: 'Temperature units'
      }
    },
    required: ['location']
  },
  // Tool Use Examples - improve accuracy
  inputExamples: [
    { location: 'San Francisco', units: 'fahrenheit' },
    { location: 'Tokyo', units: 'celsius' }
  ],
  // Defer loading - only load when searched
  deferLoading: true,
  // Allow programmatic calling
  allowedCallers: ['code_execution'],
  handler: async (input) => {
    // Your implementation
    return { temp: 72, conditions: 'Sunny' };
  }
};

registry.register(weatherTool);

// 3. Create client with any provider via Vercel AI SDK

// Use with OpenAI GPT-5
const openaiClient = new Client({
  adapter: new VercelAIAdapter(openai('gpt-5')),
  enableToolSearch: true,
  enableProgrammaticCalling: true
}, registry);

// Or use with Anthropic Claude Sonnet 4.5
const claudeClient = new Client({
  adapter: new VercelAIAdapter(anthropic('claude-sonnet-4-5')),
  enableToolSearch: true,
  enableProgrammaticCalling: true
}, registry);

// Or use with Google Gemini
// import { google } from '@ai-sdk/google';
// const geminiClient = new Client({
//   adapter: new VercelAIAdapter(google('gemini-2.0-flash-exp')),
//   enableToolSearch: true,
//   enableProgrammaticCalling: true
// }, registry);

// 4. Chat!
const response = await openaiClient.ask("What's the weather in San Francisco?");
console.log(response);

Why Vercel AI SDK?

Benefits:

  • One Interface: Work with all major providers (OpenAI, Anthropic, Google, Mistral, etc.)
  • Easy Switching: Change providers by modifying one line of code
  • Latest Models: Support for GPT-5, Claude Sonnet 4.5, Gemini 2.0, and more
  • Advanced Features: Tool search, programmatic calling work across all providers
  • Type Safety: Full TypeScript support with excellent IDE integration
  • AI SDK 6 Ready: Compatible with the latest Vercel AI SDK v6.0

Architecture

┌─────────────────────────────────────────────────────────┐
│                 Your Application                        │
└─────────────────────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────┐
│              Unified Tool Interface                     │
│  • ToolRegistry (search, defer loading)                 │
│  • CodeExecutor (programmatic calling)                  │
│  • ToolDefinition (with examples)                       │
└─────────────────────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────┐
│            Vercel AI SDK Adapter                        │
│  Supports all Vercel AI SDK providers:                  │
│  • OpenAI (GPT-4, GPT-5)                                │
│  • Anthropic (Claude 3.5, Claude 4.5)                   │
│  • Google (Gemini)                                      │
│  • Mistral, Groq, Cohere, and more                      │
└─────────────────────────────────────────────────────────┘

How It Works

Tool Search Tool

For providers without native support, we implement client-side search:

  1. Tools marked with deferLoading: true are registered but not loaded
  2. A special tool_search tool is automatically added
  3. When LLM needs capabilities, it searches using the tool_search tool
  4. Only relevant tools are loaded into context
  5. Large token savings (85%+ reduction, per Anthropic's testing)

Search Strategies:

  • smart: Intelligent relevance ranking using BM25 algorithm (recommended, default)
  • keyword: Fast keyword matching for exact terms
  • custom: Provide your own search function
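To make the strategies above concrete, here is a minimal, self-contained sketch of what a keyword-style relevance score might look like. The names (`ToolMeta`, `keywordScore`, `searchTools`) are illustrative, not the library's internals.

```typescript
// Illustrative keyword-style search, similar in spirit to the
// 'keyword' strategy: count query terms found in name/description.
interface ToolMeta {
  name: string;
  description: string;
}

// Score a tool by how many query terms appear in its name or description.
function keywordScore(query: string, tool: ToolMeta): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const haystack = `${tool.name} ${tool.description}`.toLowerCase();
  return terms.filter((t) => haystack.includes(t)).length;
}

// Return the top-N tools by score, dropping zero-score matches.
function searchTools(query: string, tools: ToolMeta[], maxResults = 5): ToolMeta[] {
  return tools
    .map((tool) => ({ tool, score: keywordScore(query, tool) }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults)
    .map((r) => r.tool);
}
```

The `smart` strategy replaces this naive term count with BM25 ranking, which additionally weights rare terms and normalizes for description length.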

Programmatic Tool Calling

For providers without native support, we use sandboxed code execution:

  1. Tools marked with allowedCallers: ['code_execution'] can be called from code
  2. LLM writes code to orchestrate multiple tool calls
  3. Code runs in sandbox (VM, Docker, or cloud service)
  4. Only final results enter LLM context, not intermediate data
  5. Supports parallel execution, loops, conditionals

Example:

Instead of this (traditional):

→ LLM: get_team_members("engineering")
← API: [20 members...]
→ LLM: get_expenses("emp_1", "Q3")
← API: [50 line items...]
... 19 more calls ...
→ LLM: Manual analysis of 1000+ line items

You get this (programmatic):

→ LLM: Writes code to orchestrate all calls
← Code runs in sandbox
← Only final results: [2 people who exceeded budget]
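The programmatic flow above can be sketched as the kind of code the LLM would emit inside the sandbox. `get_team_members` and `get_expenses` stand in for real tool bindings; here they are mocked so the control flow is runnable on its own.

```typescript
// Sketch of orchestration code the model might write in the sandbox.
// The tool bindings are mocked for illustration.
type Expense = { amount: number };

const get_team_members = async (team: string): Promise<string[]> =>
  ['emp_1', 'emp_2', 'emp_3'];

const get_expenses = async (emp: string, quarter: string): Promise<Expense[]> =>
  emp === 'emp_2' ? [{ amount: 9000 }] : [{ amount: 1200 }];

async function findOverBudget(team: string, quarter: string, budget: number) {
  const members = await get_team_members(team);
  // Fetch all expense reports in parallel; none of this raw data
  // ever enters the LLM's context.
  const reports = await Promise.all(
    members.map(async (emp) => ({
      emp,
      total: (await get_expenses(emp, quarter)).reduce((s, e) => s + e.amount, 0),
    }))
  );
  // Only the final, filtered result is returned to the model.
  return reports.filter((r) => r.total > budget).map((r) => r.emp);
}
```

Only the array returned by `findOverBudget` reaches the model's context; the per-employee line items stay in the sandbox.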

Tool Use Examples

For providers without native support, examples are injected into descriptions:

{
  name: "create_ticket",
  description: "Create a support ticket.

Examples:
1. {\"title\": \"Login broken\", \"priority\": \"critical\", ...}
2. {\"title\": \"Feature request\", \"labels\": [\"enhancement\"]}",
  // ...
}

The LLM learns proper usage patterns from the examples.
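A minimal sketch of that client-side emulation, assuming a simple injection scheme (the exact formatting the library uses may differ): fold `inputExamples` into the description string before the tool definition is sent to a provider without native example support.

```typescript
// Illustrative emulation: append numbered JSON examples to the
// tool description. Names here are hypothetical.
interface ExampleTool {
  name: string;
  description: string;
  inputExamples?: unknown[];
}

function injectExamples(tool: ExampleTool): ExampleTool {
  if (!tool.inputExamples?.length) return tool;
  const lines = tool.inputExamples
    .map((ex, i) => `${i + 1}. ${JSON.stringify(ex)}`)
    .join('\n');
  // Return a copy so the registered definition stays untouched.
  return { ...tool, description: `${tool.description}\n\nExamples:\n${lines}` };
}
```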

Provider Support

All providers supported through Vercel AI SDK:

| Provider  | Tool Search            | Code Execution         | Examples               | Latest Models     |
|-----------|------------------------|------------------------|------------------------|-------------------|
| OpenAI    | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | GPT-5, GPT-4o     |
| Anthropic | ✅ (native + emulated) | ✅ (native + emulated) | ✅ (native + emulated) | Claude Sonnet 4.5 |
| Google    | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Gemini 2.0        |
| Mistral   | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Latest            |
| Groq      | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Latest            |
| Cohere    | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Latest            |

Note: Anthropic models have native support for these features. For other providers, features are emulated client-side.

Configuration

Search Configuration

const registry = new ToolRegistry({
  strategy: 'smart',       // 'smart' (default), 'keyword', or 'custom'
  maxResults: 10,          // Max tools to return per search
  threshold: 0.0,          // Minimum relevance score (0-100)
  customSearchFn: async (query, tools) => {
    // Your custom search logic (only needed if strategy is 'custom')
    return tools.filter((t) => t.description.toLowerCase().includes(query.toLowerCase()));
  }
});

Strategy Guide:

  • smart: Best for most cases - understands relevance and context
  • keyword: Fast exact matching - use when you know exact tool names
  • custom: Advanced - provide your own search algorithm

Code Executor Configuration

const client = new Client({
  adapter: new VercelAIAdapter(openai('gpt-5')),
  enableProgrammaticCalling: true,
  executorConfig: {
    timeout: 30000,        // 30 seconds
    memoryLimit: '256mb',
    environment: {         // Environment variables
      NODE_ENV: 'production'
    }
  }
});

API Reference

ToolDefinition

interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: JSONSchema | ZodSchema;
  inputExamples?: any[];           // Tool Use Examples
  deferLoading?: boolean;          // For Tool Search
  allowedCallers?: string[];       // For Programmatic Calling
  handler: (input: any) => Promise<any>;
}

ToolRegistry

class ToolRegistry {
  register(tool: ToolDefinition): void
  registerMany(tools: ToolDefinition[]): void
  search(query: string, maxResults?: number): Promise<ToolDefinition[]>
  get(name: string): ToolDefinition | undefined
  getLoadedTools(): ToolDefinition[]
  getStats(): { total: number; loaded: number; deferred: number }
}
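To illustrate the `getStats()` contract, here is a toy stand-in registry: tools flagged `deferLoading` count as "deferred" until a search loads them. This mirrors the documented API shape only; it is not the library's implementation.

```typescript
// Toy registry illustrating the documented loaded/deferred accounting.
interface MiniTool { name: string; description: string; deferLoading?: boolean }

class MiniRegistry {
  private tools = new Map<string, MiniTool>();
  private loaded = new Set<string>();

  register(tool: MiniTool): void {
    this.tools.set(tool.name, tool);
    // Tools without deferLoading are available (loaded) immediately.
    if (!tool.deferLoading) this.loaded.add(tool.name);
  }

  // A matching search promotes deferred tools into the loaded set.
  search(query: string): MiniTool[] {
    const hits = [...this.tools.values()].filter((t) =>
      t.description.toLowerCase().includes(query.toLowerCase())
    );
    hits.forEach((t) => this.loaded.add(t.name));
    return hits;
  }

  getStats() {
    return {
      total: this.tools.size,
      loaded: this.loaded.size,
      deferred: this.tools.size - this.loaded.size,
    };
  }
}
```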

Client

class Client {
  constructor(config: ClientConfig, registry?: ToolRegistry)
  chat(request: ChatRequest): Promise<ChatResponse>
  ask(prompt: string, systemPrompt?: string): Promise<string>
  getRegistry(): ToolRegistry
}

When to Use Each Feature

Tool Search Tool

Use when:

  • Your tool definitions consume >10K tokens
  • You're experiencing tool selection accuracy issues
  • You're building MCP-powered systems with multiple servers
  • You have 10+ tools available

Skip when:

  • Small tool library (<10 tools)
  • All tools used frequently
  • Tool definitions are compact

Programmatic Tool Calling

Use when:

  • Processing large datasets where you only need aggregates
  • Running multi-step workflows with 3+ dependent tool calls
  • Filtering, sorting, or transforming tool results
  • Handling tasks where intermediate data shouldn't influence reasoning
  • Running parallel operations across many items

Skip when:

  • Making simple single-tool invocations
  • Working on tasks where LLM should see all intermediate results
  • Running quick lookups with small responses

Tool Use Examples

Use when:

  • Complex nested structures where valid JSON doesn't imply correct usage
  • Tools with many optional parameters
  • APIs with domain-specific conventions
  • Similar tools where examples clarify which to use

Skip when:

  • Simple single-parameter tools with obvious usage
  • Standard formats (URLs, emails) that LLM already understands
  • Validation concerns better handled by JSON Schema

Sandboxing Options

The default VM executor is NOT secure for untrusted code. For production:

  1. Docker (recommended for local): Full isolation, requires Docker installed
  2. E2B: Cloud sandbox service, easy setup, scalable
  3. Modal: Serverless containers
  4. Custom: Implement CodeExecutor interface
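A hedged sketch of option 4. The README doesn't show the `CodeExecutor` interface, so the shape below is an assumption; `node:vm` is used only for brevity and, as noted above, is NOT a real security boundary for untrusted code.

```typescript
import vm from 'node:vm';

// Assumed executor interface (illustrative, not the library's actual type).
interface CodeExecutor {
  execute(code: string, env?: Record<string, unknown>): Promise<unknown>;
}

// Minimal VM-backed executor: fresh context per run, hard timeout.
class VmExecutor implements CodeExecutor {
  constructor(private timeoutMs = 30_000) {}

  async execute(code: string, env: Record<string, unknown> = {}): Promise<unknown> {
    // The context holds only what the caller explicitly passes in.
    const context = vm.createContext({ ...env });
    return vm.runInContext(code, context, { timeout: this.timeoutMs });
  }
}
```

A Docker- or E2B-backed executor would implement the same interface but ship the code to an isolated container instead of an in-process VM context.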

Changelog

v0.1.3

AI SDK 6 Support & Latest Models

  • AI SDK 6: Full support for Vercel AI SDK v6.0
  • Latest Models: Support for GPT-5, Claude Sonnet 4.5, Gemini 2.0
  • Critical Fix: Changed tool definitions from parameters to inputSchema (AI SDK 6 requirement)
  • Simplified: Removed direct OpenAI adapter - use Vercel AI SDK for all providers
  • Improved: Better Zod schema conversion for complex types
  • Compatibility: Works with both AI SDK 5.x and 6.x

v0.1.2

Security & Compatibility Updates

  • Security Fix: Updated the Vercel AI SDK adapter to support the latest stable `ai` release
  • Security Fix: Resolved all npm audit vulnerabilities
  • Bug Fix: Removed circular dependency in package.json
  • Breaking Change Support: Full compatibility with the `ai` package's breaking changes

v0.1.1

  • Initial release with OpenAI and Vercel AI adapters
  • Tool search and deferred loading
  • Programmatic code execution

Roadmap

  • [x] Core library with Vercel AI SDK adapter
  • [x] AI SDK 6 support
  • [x] Latest model support (GPT-5, Claude Sonnet 4.5)
  • [ ] Docker-based executor
  • [ ] E2B integration
  • [ ] Streaming support
  • [ ] Async tool execution
  • [ ] LangChain/LlamaIndex integration

Contributing

Contributions welcome! Please see CONTRIBUTING.md.

License

MIT

Credits

This library implements features described in Anthropic's blog post: Introducing advanced tool use on the Claude Developer Platform

The implementation is provider-agnostic and works with any LLM that supports function calling through Vercel AI SDK.