# @quilltap/plugin-utils

Utility functions for Quilltap plugin development. This package provides runtime utilities that complement the type definitions in `@quilltap/plugin-types`.
## Installation

```bash
npm install @quilltap/plugin-utils @quilltap/plugin-types
```

## Features

### Tool Parsing

Parse tool calls from any LLM provider's response format into a standardized `ToolCallRequest[]`:
```ts
import {
  parseToolCalls,
  parseOpenAIToolCalls,
  parseAnthropicToolCalls,
  parseGoogleToolCalls,
} from '@quilltap/plugin-utils';

// Auto-detect format
const toolCalls = parseToolCalls(response, 'auto');

// Or use provider-specific parsers
const openaiCalls = parseOpenAIToolCalls(response);
const anthropicCalls = parseAnthropicToolCalls(response);
const googleCalls = parseGoogleToolCalls(response);
```

### Tool Format Conversion

Convert between the OpenAI, Anthropic, and Google tool formats:
```ts
import {
  convertToAnthropicFormat,
  convertToGoogleFormat,
  convertToolsTo,
} from '@quilltap/plugin-utils';

// Convert a single tool
const anthropicTool = convertToAnthropicFormat(universalTool);
const googleTool = convertToGoogleFormat(universalTool);

// Convert multiple tools
const anthropicTools = convertToolsTo(tools, 'anthropic');
```

### Logger Bridge

Create a logger that integrates with Quilltap's core logging system when running inside the host application, or falls back to console logging when running standalone:
```ts
import { createPluginLogger } from '@quilltap/plugin-utils';

// Create a logger for your plugin
const logger = createPluginLogger('qtap-plugin-my-provider');

// Use it like any standard logger
logger.debug('Initializing provider', { version: '1.0.0' });
logger.info('Provider ready');
logger.warn('Rate limit approaching', { remaining: 10 });
logger.error('API call failed', { endpoint: '/chat' }, error);

// Create child loggers with additional context
const childLogger = logger.child({ component: 'auth' });
childLogger.info('Validating API key');
```

When running inside Quilltap:

- Logs are routed to Quilltap's core logging system
- Logs appear in `logs/combined.log` and on the console
- Each log is tagged with `{ plugin: 'your-plugin-name', module: 'plugin' }`

When running standalone:

- Logs are written to the console with a `[plugin-name]` prefix
- The `LOG_LEVEL` and `QUILLTAP_LOG_LEVEL` environment variables are respected
## API Reference

### Tool Parsing

| Function | Description |
|----------|-------------|
| `parseToolCalls(response, format)` | Parse tool calls with auto-detection or an explicit format |
| `parseOpenAIToolCalls(response)` | Parse OpenAI/Grok-format tool calls |
| `parseAnthropicToolCalls(response)` | Parse Anthropic-format tool calls |
| `parseGoogleToolCalls(response)` | Parse Google Gemini-format tool calls |
| `detectToolCallFormat(response)` | Detect the format of a response |
| `hasToolCalls(response)` | Check whether a response contains tool calls |
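A typical guard-then-parse flow combines these helpers. A minimal sketch, which types the raw provider response as `unknown` and assumes `detectToolCallFormat` returns an identifier that `parseToolCalls` accepts:

```ts
import {
  detectToolCallFormat,
  hasToolCalls,
  parseToolCalls,
} from '@quilltap/plugin-utils';
import type { ToolCallRequest } from '@quilltap/plugin-types';

// `response` is whatever the provider SDK returned.
function extractToolCalls(response: unknown): ToolCallRequest[] {
  if (!hasToolCalls(response)) return [];
  // Assumed to return a format identifier such as 'openai',
  // 'anthropic', or 'google' that parseToolCalls also accepts.
  const format = detectToolCallFormat(response);
  return parseToolCalls(response, format);
}
```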
### Tool Conversion

| Function | Description |
|----------|-------------|
| `convertToAnthropicFormat(tool)` | Convert a universal tool to Anthropic format |
| `convertToGoogleFormat(tool)` | Convert a universal tool to Google format |
| `convertFromAnthropicFormat(tool)` | Convert an Anthropic tool to the universal format |
| `convertFromGoogleFormat(tool)` | Convert a Google tool to the universal format |
| `convertToolTo(tool, target)` | Convert a tool to any supported format |
| `convertToolsTo(tools, target)` | Convert multiple tools to any supported format |
| `applyDescriptionLimit(tool, maxBytes)` | Truncate a tool description that exceeds `maxBytes` |
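These compose naturally when a plugin targets several providers. A sketch, assuming `anthropicTool` is a tool definition taken from an Anthropic request and that `'google'` is a valid `target` identifier; the 1024-byte limit is illustrative:

```ts
import {
  convertFromAnthropicFormat,
  convertToolTo,
  applyDescriptionLimit,
} from '@quilltap/plugin-utils';

// Normalize a provider-specific tool into the universal shape...
const universal = convertFromAnthropicFormat(anthropicTool);
// ...clamp its description to a byte budget...
const trimmed = applyDescriptionLimit(universal, 1024);
// ...then emit it in whichever format the target provider expects.
const googleTool = convertToolTo(trimmed, 'google');
```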
### Logging

| Function | Description |
|----------|-------------|
| `createPluginLogger(name, minLevel?)` | Create a plugin logger with the core bridge |
| `hasCoreLogger()` | Check whether the plugin is running inside Quilltap |
| `getLogLevelFromEnv()` | Read the log level from the environment variables |
| `createConsoleLogger(prefix, minLevel?)` | Create a standalone console logger |
| `createNoopLogger()` | Create a no-op logger |
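`createPluginLogger` already does this bridging for you; the lower-level helpers remain useful on their own, for example to silence a plugin under test. A minimal sketch, with the prefix string and test detection chosen for illustration:

```ts
import {
  createPluginLogger,
  hasCoreLogger,
  getLogLevelFromEnv,
  createConsoleLogger,
  createNoopLogger,
} from '@quilltap/plugin-utils';

// Roughly what createPluginLogger does internally: bridge to the core
// logger when available, else log to the console at the env-configured level.
const logger = hasCoreLogger()
  ? createPluginLogger('qtap-plugin-my-provider')
  : createConsoleLogger('qtap-plugin-my-provider', getLogLevelFromEnv());

// In unit tests, swap in a silent logger instead.
const testLogger = process.env.NODE_ENV === 'test' ? createNoopLogger() : logger;
```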
## OpenAI-Compatible Provider Base Class

Create custom LLM providers for OpenAI-compatible APIs with minimal code:
```ts
import { OpenAICompatibleProvider } from '@quilltap/plugin-utils';

// Create a provider for any OpenAI-compatible API
export class MyLLMProvider extends OpenAICompatibleProvider {
  constructor() {
    super({
      baseUrl: 'https://api.my-llm-service.com/v1',
      providerName: 'MyLLM',
      requireApiKey: true,
      attachmentErrorMessage: 'MyLLM does not support file attachments',
    });
  }
}
```

This gives you a complete `LLMProvider` implementation with:
- Streaming and non-streaming chat completions
- API key validation
- Model listing
- Proper error handling and logging
Configuration Options:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `baseUrl` | string | (required) | API endpoint URL, including the version path |
| `providerName` | string | `'OpenAICompatible'` | Name used in log messages |
| `requireApiKey` | boolean | `false` | Whether an API key is mandatory |
| `attachmentErrorMessage` | string | (default message) | Error shown for attachment failures |
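A subclass like `MyLLMProvider` above is then used like any other `LLMProvider`. Hypothetical usage, with the model name and environment variable invented for the example and the `sendMessage` signature taken from the complete example at the end of this README:

```ts
const provider = new MyLLMProvider();

const response = await provider.sendMessage(
  {
    model: 'my-model-v1',
    messages: [{ role: 'user', content: 'Hello!' }],
  },
  process.env.MY_LLM_API_KEY ?? '', // required because requireApiKey is true
);

console.log(response.content);
```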
Note: Requires `openai` as a peer dependency:

```bash
npm install openai
```
## System Prompt Plugin Utilities

Create system prompt plugins that provide character prompt templates from `.md` files:
```ts
import { createSystemPromptPlugin } from '@quilltap/plugin-utils';
import type { SystemPromptData } from '@quilltap/plugin-types';
import { readdirSync, readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';

function loadPrompts(): SystemPromptData[] {
  const promptsDir = join(dirname(__filename), 'prompts');
  return readdirSync(promptsDir)
    .filter(f => f.endsWith('.md'))
    .map(file => {
      // Filename convention used here: <modelHint>_<category>.md
      const name = file.replace(/\.md$/i, '');
      const parts = name.split('_');
      const category = parts.pop()!;
      const modelHint = parts.join('_');
      return {
        name,
        content: readFileSync(join(promptsDir, file), 'utf-8'),
        modelHint,
        category,
      };
    });
}

export const plugin = createSystemPromptPlugin({
  metadata: {
    pluginId: 'my-prompts',
    displayName: 'My System Prompts',
    version: '1.0.0',
  },
  prompts: loadPrompts(),
});
```

| Function | Description |
|----------|-------------|
| `createSystemPromptPlugin(options)` | Create a system prompt plugin with validation |
| `validateSystemPromptPlugin(plugin)` | Validate a complete system prompt plugin |
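`createSystemPromptPlugin` validates its input for you; `validateSystemPromptPlugin` is useful when the plugin object is assembled some other way, e.g. as a smoke test. A sketch; the import path is hypothetical and the exact failure contract (throw vs. result object) is not specified here:

```ts
import { validateSystemPromptPlugin } from '@quilltap/plugin-utils';
import { plugin } from './my-prompts';

// Validate the assembled plugin, e.g. in CI before publishing.
validateSystemPromptPlugin(plugin);
```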
## Example: Complete Plugin Provider
```ts
import { createPluginLogger, parseOpenAIToolCalls } from '@quilltap/plugin-utils';
import type { LLMProvider, LLMParams, LLMResponse, ToolCallRequest } from '@quilltap/plugin-types';
import OpenAI from 'openai';

const logger = createPluginLogger('qtap-plugin-my-provider');

export class MyProvider implements LLMProvider {
  private client: OpenAI;

  constructor(apiKey: string) {
    this.client = new OpenAI({ apiKey });
    logger.debug('Provider initialized');
  }

  async sendMessage(params: LLMParams, apiKey: string): Promise<LLMResponse> {
    logger.debug('Sending message', { model: params.model, messageCount: params.messages.length });

    try {
      const response = await this.client.chat.completions.create({
        model: params.model,
        messages: params.messages,
        tools: params.tools,
      });

      // Parse tool calls using the utility
      const toolCalls = parseOpenAIToolCalls(response);

      logger.info('Received response', {
        hasToolCalls: toolCalls.length > 0,
        tokens: response.usage?.total_tokens,
      });

      return {
        content: response.choices[0].message.content || '',
        toolCalls,
        usage: {
          promptTokens: response.usage?.prompt_tokens || 0,
          completionTokens: response.usage?.completion_tokens || 0,
          totalTokens: response.usage?.total_tokens || 0,
        },
      };
    } catch (error) {
      logger.error('Failed to send message', { model: params.model }, error as Error);
      throw error;
    }
  }
}
```

## License

MIT - Foundry-9 LLC
