llm-advanced-tools v0.1.4
Provider-agnostic advanced tool use library for LLMs
LLM Advanced Tools - Provider-Agnostic Tool Use Library
A TypeScript library that brings advanced tool use features to all major LLM providers through Vercel AI SDK (OpenAI, Anthropic, Google, and more).
Features
🔍 Tool Search Tool
Dynamically discover and load tools on-demand instead of loading everything upfront.
Benefits:
- Reduces token usage by deferring tool loading
- Improves accuracy with large tool sets
- Scales to hundreds or thousands of tools
- Anthropic reports an 85%+ token reduction in their testing
🚀 Programmatic Tool Calling
Orchestrate tools through code execution rather than individual API calls.
Benefits:
- Keeps intermediate results out of the LLM context
- Executes tools in parallel
- Enables better control flow with loops, conditionals, and data transformations
- Anthropic reports a 37%+ token reduction on complex tasks in their testing
📝 Tool Use Examples
Provide sample invocations to improve tool call accuracy.
Benefits:
- Shows proper usage patterns
- Clarifies format conventions and optional parameters
- Anthropic reports an 18%+ accuracy improvement on complex parameters in their testing
Installation
```bash
npm install llm-advanced-tools
```

Quick Start

```typescript
import { Client, ToolRegistry, VercelAIAdapter, ToolDefinition } from 'llm-advanced-tools';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// 1. Create a tool registry
const registry = new ToolRegistry({
  strategy: 'smart', // 'smart', 'keyword', or 'custom'
  maxResults: 5
});

// 2. Define tools with advanced features
const weatherTool: ToolDefinition = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  inputSchema: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
      units: {
        type: 'string',
        enum: ['celsius', 'fahrenheit'],
        description: 'Temperature units'
      }
    },
    required: ['location']
  },
  // Tool Use Examples - improve accuracy
  inputExamples: [
    { location: 'San Francisco', units: 'fahrenheit' },
    { location: 'Tokyo', units: 'celsius' }
  ],
  // Defer loading - only load when searched
  deferLoading: true,
  // Allow programmatic calling
  allowedCallers: ['code_execution'],
  handler: async (input) => {
    // Your implementation
    return { temp: 72, conditions: 'Sunny' };
  }
};

registry.register(weatherTool);

// 3. Create a client with any provider via Vercel AI SDK
// Use with OpenAI GPT-5
const openaiClient = new Client({
  adapter: new VercelAIAdapter(openai('gpt-5')),
  enableToolSearch: true,
  enableProgrammaticCalling: true
}, registry);

// Or use with Anthropic Claude Sonnet 4.5
const claudeClient = new Client({
  adapter: new VercelAIAdapter(anthropic('claude-sonnet-4-5')),
  enableToolSearch: true,
  enableProgrammaticCalling: true
}, registry);

// Or use with Google Gemini
// const geminiClient = new Client({
//   adapter: new VercelAIAdapter(google('gemini-2.0-flash-exp')),
//   enableToolSearch: true,
//   enableProgrammaticCalling: true
// }, registry);

// 4. Chat!
const response = await openaiClient.ask("What's the weather in San Francisco?");
console.log(response);
```

Why Vercel AI SDK?
Benefits:
- ✅ One Interface: Work with all major providers (OpenAI, Anthropic, Google, Mistral, etc.)
- ✅ Easy Switching: Change providers by modifying one line of code
- ✅ Latest Models: Support for GPT-5, Claude Sonnet 4.5, Gemini 2.0, and more
- ✅ Advanced Features: Tool search, programmatic calling work across all providers
- ✅ Type Safety: Full TypeScript support with excellent IDE integration
- ✅ AI SDK 6 Ready: Compatible with the latest Vercel AI SDK v6.0
Architecture
```
┌─────────────────────────────────────────────────────────┐
│                    Your Application                     │
└─────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────┐
│                 Unified Tool Interface                  │
│   • ToolRegistry (search, defer loading)                │
│   • CodeExecutor (programmatic calling)                 │
│   • ToolDefinition (with examples)                      │
└─────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────┐
│                 Vercel AI SDK Adapter                   │
│   Supports all Vercel AI SDK providers:                 │
│   • OpenAI (GPT-4, GPT-5)                               │
│   • Anthropic (Claude 3.5, Claude 4.5)                  │
│   • Google (Gemini)                                     │
│   • Mistral, Groq, Cohere, and more                     │
└─────────────────────────────────────────────────────────┘
```

How It Works
Tool Search Tool
For providers without native support, we implement client-side search:

1. Tools marked with `deferLoading: true` are registered but not loaded
2. A special `tool_search` tool is automatically added
3. When the LLM needs a capability, it searches for it with `tool_search`
4. Only the relevant tools are loaded into context
5. The result is massive token savings (85%+ reduction)
Search Strategies:
- smart: Intelligent relevance ranking using BM25 algorithm (recommended, default)
- keyword: Fast keyword matching for exact terms
- custom: Provide your own search function
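As a rough illustration of the `keyword` strategy, searching deferred tool definitions can be as simple as scoring term overlap against each tool's name and description. This is a self-contained sketch, not the library's internals; the `MiniTool` type and `searchTools` function are illustrative only:

```typescript
// Minimal sketch of client-side keyword search over deferred tools.
interface MiniTool {
  name: string;
  description: string;
  deferLoading?: boolean;
}

const catalog: MiniTool[] = [
  { name: "get_weather", description: "Get current weather for a location", deferLoading: true },
  { name: "create_ticket", description: "Create a support ticket", deferLoading: true },
  { name: "send_email", description: "Send an email to a recipient", deferLoading: true },
];

// Score each deferred tool by how many query terms appear in its name/description,
// then return the best matches. Only these would be loaded into context.
function searchTools(query: string, maxResults = 5): MiniTool[] {
  const terms = query.toLowerCase().split(/\s+/);
  return catalog
    .filter((t) => t.deferLoading)
    .map((t) => {
      const haystack = `${t.name} ${t.description}`.toLowerCase();
      const score = terms.filter((term) => haystack.includes(term)).length;
      return { tool: t, score };
    })
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults)
    .map((r) => r.tool);
}

console.log(searchTools("current weather").map((t) => t.name)); // ["get_weather"]
```

The library's `smart` strategy replaces the naive overlap count with BM25-style relevance ranking, but the load-on-demand flow is the same.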
Programmatic Tool Calling
For providers without native support, we use sandboxed code execution:

1. Tools marked with `allowedCallers: ['code_execution']` can be called from code
2. The LLM writes code to orchestrate multiple tool calls
3. The code runs in a sandbox (VM, Docker, or cloud service)
4. Only final results enter the LLM context, not intermediate data
5. Parallel execution, loops, and conditionals are all supported
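The kind of code the LLM writes in the sandbox might look like the sketch below. The tool functions (`getTeamMembers`, `getExpenses`) are mocks standing in for real tool calls; only the final filtered list would be returned to the model:

```typescript
// Mock tool: in the sandbox this would invoke the real get_team_members tool.
async function getTeamMembers(team: string): Promise<string[]> {
  return ["alice", "bob", "carol"];
}

// Mock tool: per-employee expense line items (would be a real tool call).
async function getExpenses(emp: string): Promise<number[]> {
  const data: Record<string, number[]> = {
    alice: [1200, 300],
    bob: [4000, 2500],
    carol: [150],
  };
  return data[emp] ?? [];
}

// Orchestration the LLM writes: fan out in parallel, aggregate, filter.
// Intermediate line items never enter the LLM context.
async function overBudget(team: string, budget: number): Promise<string[]> {
  const members = await getTeamMembers(team);
  const totals = await Promise.all(
    members.map(async (m) => ({
      name: m,
      total: (await getExpenses(m)).reduce((a, b) => a + b, 0),
    }))
  );
  return totals.filter((t) => t.total > budget).map((t) => t.name);
}

overBudget("engineering", 2000).then((r) => console.log(r)); // ["bob"]
```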
Example:

Instead of this (traditional):

```
→ LLM: get_team_members("engineering")
← API: [20 members...]
→ LLM: get_expenses("emp_1", "Q3")
← API: [50 line items...]
   ... 19 more calls ...
→ LLM: Manual analysis of 1000+ line items
```

You get this (programmatic):

```
→ LLM: Writes code to orchestrate all calls
← Code runs in sandbox
← Only final results: [2 people who exceeded budget]
```

Tool Use Examples
For providers without native support, examples are injected into descriptions:
```json
{
  "name": "create_ticket",
  "description": "Create a support ticket.\nExamples:\n1. {\"title\": \"Login broken\", \"priority\": \"critical\", ...}\n2. {\"title\": \"Feature request\", \"labels\": [\"enhancement\"]}"
}
```

The LLM learns proper usage patterns from the examples.
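A minimal sketch of what that injection might look like. `injectExamples` is a hypothetical helper, not the library's API; it just serializes each `inputExamples` entry and appends it to the description:

```typescript
// Illustrative shape of a tool that carries usage examples.
interface ExampleTool {
  name: string;
  description: string;
  inputExamples?: unknown[];
}

// Hypothetical helper: fold inputExamples into the description string
// for providers that have no native example support.
function injectExamples(tool: ExampleTool): string {
  if (!tool.inputExamples?.length) return tool.description;
  const rendered = tool.inputExamples
    .map((ex, i) => `${i + 1}. ${JSON.stringify(ex)}`)
    .join("\n");
  return `${tool.description}\nExamples:\n${rendered}`;
}

const desc = injectExamples({
  name: "get_weather",
  description: "Get current weather for a location.",
  inputExamples: [{ location: "Tokyo", units: "celsius" }],
});
console.log(desc);
// Get current weather for a location.
// Examples:
// 1. {"location":"Tokyo","units":"celsius"}
```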
Provider Support
All providers supported through Vercel AI SDK:
| Provider | Tool Search | Code Execution | Examples | Latest Models |
|-----------|-------------|----------------|----------|---------------|
| OpenAI | ✅ (emulated) | ✅ (emulated) | ✅ (emulated) | GPT-5, GPT-4o |
| Anthropic | ✅ (native + emulated) | ✅ (native + emulated) | ✅ (native + emulated) | Claude Sonnet 4.5 |
| Google | ✅ (emulated) | ✅ (emulated) | ✅ (emulated) | Gemini 2.0 |
| Mistral | ✅ (emulated) | ✅ (emulated) | ✅ (emulated) | Latest |
| Groq | ✅ (emulated) | ✅ (emulated) | ✅ (emulated) | Latest |
| Cohere | ✅ (emulated) | ✅ (emulated) | ✅ (emulated) | Latest |
Note: Anthropic models have native support for these features. For other providers, features are emulated client-side.
Configuration
Search Configuration
```typescript
const registry = new ToolRegistry({
  strategy: 'smart',  // 'smart' (default), 'keyword', or 'custom'
  maxResults: 10,     // Max tools to return per search
  threshold: 0.0,     // Minimum relevance score (0-100)
  customSearchFn: async (query, tools) => {
    // Your custom search logic (only needed if strategy is 'custom')
    return filteredTools;
  }
});
```

Strategy Guide:

- smart: Best for most cases - understands relevance and context
- keyword: Fast exact matching - use when you know exact tool names
- custom: Advanced - provide your own search algorithm
Code Executor Configuration
```typescript
const client = new Client({
  adapter: new VercelAIAdapter(openai('gpt-5')),
  enableProgrammaticCalling: true,
  executorConfig: {
    timeout: 30000,       // 30 seconds
    memoryLimit: '256mb',
    environment: {        // Environment variables
      NODE_ENV: 'production'
    }
  }
});
```

API Reference
ToolDefinition
```typescript
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: JSONSchema | ZodSchema;
  inputExamples?: any[];     // Tool Use Examples
  deferLoading?: boolean;    // For Tool Search
  allowedCallers?: string[]; // For Programmatic Calling
  handler: (input: any) => Promise<any>;
}
```

ToolRegistry
```typescript
class ToolRegistry {
  register(tool: ToolDefinition): void
  registerMany(tools: ToolDefinition[]): void
  search(query: string, maxResults?: number): Promise<ToolDefinition[]>
  get(name: string): ToolDefinition | undefined
  getLoadedTools(): ToolDefinition[]
  getStats(): { total: number; loaded: number; deferred: number }
}
```

Client
```typescript
class Client {
  constructor(config: ClientConfig, registry?: ToolRegistry)
  chat(request: ChatRequest): Promise<ChatResponse>
  ask(prompt: string, systemPrompt?: string): Promise<string>
  getRegistry(): ToolRegistry
}
```

When to Use Each Feature
Tool Search Tool
Use when:
- Tool definitions consuming >10K tokens
- Experiencing tool selection accuracy issues
- Building MCP-powered systems with multiple servers
- 10+ tools available
Skip when:
- Small tool library (<10 tools)
- All tools used frequently
- Tool definitions are compact
Programmatic Tool Calling
Use when:
- Processing large datasets where you only need aggregates
- Running multi-step workflows with 3+ dependent tool calls
- Filtering, sorting, or transforming tool results
- Handling tasks where intermediate data shouldn't influence reasoning
- Running parallel operations across many items
Skip when:
- Making simple single-tool invocations
- Working on tasks where LLM should see all intermediate results
- Running quick lookups with small responses
Tool Use Examples
Use when:
- Complex nested structures where valid JSON doesn't imply correct usage
- Tools with many optional parameters
- APIs with domain-specific conventions
- Similar tools where examples clarify which to use
Skip when:
- Simple single-parameter tools with obvious usage
- Standard formats (URLs, emails) that LLM already understands
- Validation concerns better handled by JSON Schema
Sandboxing Options
The default VM executor is NOT secure for untrusted code. For production:
- Docker (recommended for local): Full isolation, requires Docker installed
- E2B: Cloud sandbox service, easy setup, scalable
- Modal: Serverless containers
- Custom: Implement the `CodeExecutor` interface
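For the custom route, the sketch below shows one possible executor shape built on Node's `vm` module. The `execute(code)` signature and `ExecutorResult` type are assumptions for illustration; the real `CodeExecutor` interface is defined by the library:

```typescript
import * as vm from "node:vm";

// Assumed result shape: completion value plus captured console output.
interface ExecutorResult {
  output: unknown;
  logs: string[];
}

// Hypothetical custom executor sketch. NOTE: node:vm is NOT a security
// boundary - as stated above, use Docker/E2B/Modal for untrusted code.
class VmExecutor {
  constructor(private timeoutMs = 30_000) {}

  async execute(code: string, env: Record<string, unknown> = {}): Promise<ExecutorResult> {
    const logs: string[] = [];
    const sandbox = vm.createContext({
      ...env,
      // Capture console.log output instead of writing to the host console.
      console: { log: (...args: unknown[]) => logs.push(args.join(" ")) },
    });
    // Run the code with a wall-clock timeout; the completion value of the
    // last expression becomes the output.
    const output = vm.runInContext(code, sandbox, { timeout: this.timeoutMs });
    return { output, logs };
  }
}

new VmExecutor().execute("1 + 2").then((r) => console.log(r.output)); // 3
```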
Changelog
v0.1.3
AI SDK 6 Support & Latest Models
- ✅ AI SDK 6: Full support for Vercel AI SDK v6.0
- ✅ Latest Models: Support for GPT-5, Claude Sonnet 4.5, Gemini 2.0
- ✅ Critical Fix: Changed tool definitions from `parameters` to `inputSchema` (AI SDK 6 requirement)
- ✅ Simplified: Removed direct OpenAI adapter - use Vercel AI SDK for all providers
- ✅ Improved: Better Zod schema conversion for complex types
- ✅ Compatibility: Works with both AI SDK 5.x and 6.x
v0.1.2
Security & Compatibility Updates
- ✅ Security Fix: Updated the Vercel AI SDK adapter to support the latest stable `ai` package
- ✅ Security Fix: Resolved all npm audit vulnerabilities
- ✅ Bug Fix: Removed circular dependency in package.json
- ✅ Breaking Change Support: Full compatibility with the `ai` package's breaking changes
v0.1.1
- Initial release with OpenAI and Vercel AI adapters
- Tool search and deferred loading
- Programmatic code execution
Roadmap
- [x] Core library with Vercel AI SDK adapter
- [x] AI SDK 6 support
- [x] Latest model support (GPT-5, Claude Sonnet 4.5)
- [ ] Docker-based executor
- [ ] E2B integration
- [ ] Streaming support
- [ ] Async tool execution
- [ ] LangChain/LlamaIndex integration
Contributing
Contributions welcome! Please see CONTRIBUTING.md.
License
MIT
Credits
This library implements features described in Anthropic's blog post: Introducing advanced tool use on the Claude Developer Platform
The implementation is provider-agnostic and works with any LLM that supports function calling through Vercel AI SDK.
