`@getfoil/foil-js` v0.8.11
# Foil JavaScript SDK
JavaScript/Node.js SDK for monitoring and logging AI agent invocations with Foil. Supports native distributed tracing with ctx.llmCall() and OpenTelemetry auto-instrumentation.
## Installation

```sh
npm install @getfoil/foil-js
```

## Wizard

Automatically instrument your project:

```sh
npx @getfoil/foil-js wizard
```

Use `--agent-name <name>` to set the agent name and `--dry-run` to preview changes.
## Quick Start: Manual Tracing (Recommended)
Works with any LLM provider — OpenAI, Anthropic, local models, custom APIs.
```js
import { Foil } from '@getfoil/foil-js';
import OpenAI from 'openai';

const openai = new OpenAI();
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
});

const result = await foil.trace(async (ctx) => {
  const response = await ctx.llmCall('gpt-4o', async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Hello, world!' }],
    });
  });
  return response.choices[0].message.content;
}, { name: 'greeting' });
```

## Quick Start: Auto-Instrumentation
Zero-code tracing for OpenAI, Anthropic, Azure OpenAI, Cohere, Google Generative AI, AWS Bedrock, and LlamaIndex.
```js
import OpenAI from 'openai';
import { Foil } from '@getfoil/foil-js';

// Pass instrumentModules to enable auto-instrumentation
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
  instrumentModules: { openAI: OpenAI },
});

// Now all OpenAI calls are automatically traced
const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

Don't combine `instrumentModules` with `ctx.llmCall()` for the same provider — it creates duplicate spans.
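Manual tracing from the first quick start works the same way with non-OpenAI providers. As a sketch with the Anthropic SDK (the model name is illustrative, and the response handling follows Anthropic's `messages.create` API; adapt both to your provider):

```js
import { Foil } from '@getfoil/foil-js';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
});

const result = await foil.trace(async (ctx) => {
  // ctx.llmCall records the call under the given model name and
  // returns whatever the wrapped callback resolves to
  const message = await ctx.llmCall('claude-3-5-sonnet-latest', async () => {
    return await anthropic.messages.create({
      model: 'claude-3-5-sonnet-latest',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, world!' }],
    });
  });
  return message.content[0].text;
}, { name: 'greeting' });
```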
## Trace Options

`foil.trace(fn, options)` accepts the following options:
```js
await foil.trace(async (ctx) => {
  // ...
}, {
  name: 'chat-turn',                  // Name for the root agent span
  input: userMessage,                 // Input to record on the agent span (e.g. user message)
  sessionId: 'session-abc',           // Session ID for conversation tracking
  userId: 'user-123',                 // End user identifier
  userProperties: { plan: 'pro' },    // Additional user attributes
  properties: { custom: 'metadata' }, // Custom properties on the span
  traceId: 'custom-trace-id',         // Custom trace ID (auto-generated if omitted)
  timeout: 300000,                    // Timeout in ms (default: 5 min, 0 to disable)
});
```

Always pass `input` so the agent span captures the user message. Without it, the trace will show an empty input.
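To group multiple turns of one conversation, reuse the same `sessionId` across `trace()` calls. A minimal sketch, assuming `foil` and `openai` instances configured as in the quick start (the `handleTurn` wrapper is illustrative, not part of the SDK):

```js
import { randomUUID } from 'node:crypto';

const sessionId = randomUUID(); // one id per conversation

async function handleTurn(userMessage) {
  return foil.trace(async (ctx) => {
    const response = await ctx.llmCall('gpt-4o', async () =>
      openai.chat.completions.create({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: userMessage }],
      }),
    );
    return response.choices[0].message.content;
  }, { name: 'chat-turn', input: userMessage, sessionId, userId: 'user-123' });
}

await handleTurn('Hi!');
await handleTurn('What did I just say?'); // same sessionId links both traces
```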
## Agentic Tool-Calling Loop (Manual Tracing)

Use `ctx.executeTools()` when the LLM decides which tools to call. It reads `tool_calls` from the OpenAI response, executes each tool, traces them as TOOL spans, and returns formatted messages for the next call.
```js
const tools = [{
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location'],
    },
  },
}];

const toolMap = {
  get_weather: async (args) => fetchWeather(args.location),
};

await foil.trace(async (ctx) => {
  const messages = [{ role: 'user', content: 'What is the weather in Paris?' }];
  let response = await ctx.llmCall('gpt-4o', async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4o', messages, tools,
    });
  });
  while (response.choices[0].message.tool_calls) {
    const toolMessages = await ctx.executeTools(response, toolMap);
    messages.push(response.choices[0].message, ...toolMessages);
    response = await ctx.llmCall('gpt-4o', async () => {
      return await openai.chat.completions.create({
        model: 'gpt-4o', messages, tools,
      });
    });
  }
  return response.choices[0].message.content;
}, { name: 'weather-agent', input: 'What is the weather in Paris?' });
```

This produces:
```
Trace: weather-agent
├── llm (gpt-4o) — requests tool calls
│   └── tool (get_weather) — Paris
└── llm (gpt-4o) — synthesizes final answer
```

## Agentic Tool-Calling Loop (Auto-Instrumentation)
When using instrumentModules, LLM calls are traced automatically — do NOT wrap them with ctx.llmCall(). But tool executions are NOT automatically traced, so you must use ctx.executeTools() or ctx.tool() to trace them.
```js
import OpenAI from 'openai';
import { Foil } from '@getfoil/foil-js';

const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
  instrumentModules: { openAI: OpenAI },
});
const openai = new OpenAI();

const toolMap = {
  get_weather: async (args) => fetchWeather(args.location),
};

await foil.trace(async (ctx) => {
  const messages = [{ role: 'user', content: userMessage }];
  // LLM call — auto-traced by instrumentModules, no ctx.llmCall() needed
  let response = await openai.chat.completions.create({
    model: 'gpt-4o', messages, tools,
  });
  // Tool executions — must be explicitly traced
  while (response.choices[0].message.tool_calls) {
    const toolMessages = await ctx.executeTools(response, toolMap);
    messages.push(response.choices[0].message, ...toolMessages);
    response = await openai.chat.completions.create({
      model: 'gpt-4o', messages, tools,
    });
  }
  return response.choices[0].message.content;
}, { name: 'chat-turn', input: userMessage });
```

For code-driven tools (not LLM `tool_calls`), use `ctx.tool()`:
```js
await foil.trace(async (ctx) => {
  const config = await ctx.tool('load-config', async () => loadConfig());
  // ...
}, { name: 'pipeline', input: query });
```

## Span Kinds
```js
import { SpanKind } from '@getfoil/foil-js';

SpanKind.AGENT      // Root agent span (automatic for trace())
SpanKind.LLM        // Language model calls
SpanKind.TOOL       // Tool/function executions
SpanKind.CHAIN      // Chain of operations
SpanKind.RETRIEVER  // RAG retrieval operations
SpanKind.EMBEDDING  // Embedding model calls
SpanKind.CUSTOM     // Custom operation types
```

## Convenience Methods
```js
await foil.trace(async (ctx) => {
  // LLM call
  const response = await ctx.llmCall('gpt-4o', async () => { /* ... */ });

  // LLM-driven tool execution (recommended for agentic use)
  const toolMessages = await ctx.executeTools(response, toolMap);

  // Code-driven tool execution (for fixed pipeline steps)
  const data = await ctx.tool('fetch-config', async () => loadConfig());

  // Retriever (RAG)
  const docs = await ctx.retriever('vector-db', async () => vectorStore.search(query));

  // Embedding
  const embeddings = await ctx.embedding('text-embedding-3-small', async () => createEmbeddings(texts));

  // Signals & feedback
  await ctx.recordSignal('response_length', response.length);
  await ctx.recordFeedback(true);
  await ctx.recordRating(4.5);
});
```

## OpenTelemetry Integration
For full control over the OpenTelemetry pipeline:
```js
const { Foil } = require('@getfoil/foil-js/otel');

Foil.init({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-ai-agent',
});
```

Or use `FoilSpanProcessor` for manual OTEL setup:
```js
const { FoilSpanProcessor } = require('@getfoil/foil-js/otel');
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new FoilSpanProcessor({
  apiKey: process.env.FOIL_API_KEY,
  maxBatchSize: 100,
  scheduledDelayMs: 5000,
}));
provider.register();
```

## Shutdown
Always shut down gracefully to flush pending spans:
```js
process.on('SIGTERM', async () => {
  await foil.shutdown();
  process.exit(0);
});

// Or flush manually
await foil.flush();
```

## Debug Mode
```js
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
  debug: true,
});
```

Or set the `FOIL_DEBUG=true` environment variable.
## Configuration Options
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| apiKey | string | Yes | - | Your Foil API key |
| agentName | string | No | 'default-agent' | Agent identifier |
| instrumentModules | object | No | - | Module map for auto-instrumentation (e.g., { openAI: OpenAI }) |
| defaultModel | string | No | - | Default model name for spans |
| debug | boolean | No | false | Enable debug logging |
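Putting the table together, a constructor call that sets every option (the values here are illustrative):

```js
import { Foil } from '@getfoil/foil-js';
import OpenAI from 'openai';

const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,       // required
  agentName: 'support-bot',               // defaults to 'default-agent'
  instrumentModules: { openAI: OpenAI },  // enables auto-instrumentation
  defaultModel: 'gpt-4o',                 // default model name for spans
  debug: false,                           // or set FOIL_DEBUG=true
});
```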
## Links

- Full Documentation — API reference, signals, multimodal content, semantic search, experiments, and more
- `examples/` — Complete working examples

## License

MIT
