@getfoil/foil-js

v0.8.11

Foil SDK for monitoring and logging AI model invocations

Foil JavaScript SDK

JavaScript/Node.js SDK for monitoring and logging AI agent invocations with Foil. Supports native distributed tracing with ctx.llmCall() and OpenTelemetry auto-instrumentation.

Installation

npm install @getfoil/foil-js

Wizard

Automatically instrument your project:

npx @getfoil/foil-js wizard

Use --agent-name <name> to set the agent name, and --dry-run to preview changes without applying them.

Quick Start: Manual Tracing (Recommended)

Works with any LLM provider — OpenAI, Anthropic, local models, custom APIs.

import { Foil } from '@getfoil/foil-js';
import OpenAI from 'openai';

const openai = new OpenAI();
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
});

const result = await foil.trace(async (ctx) => {
  const response = await ctx.llmCall('gpt-4o', async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Hello, world!' }],
    });
  });

  return response.choices[0].message.content;
}, { name: 'greeting' });

Quick Start: Auto-Instrumentation

Zero-code tracing for OpenAI, Anthropic, Azure OpenAI, Cohere, Google Generative AI, AWS Bedrock, and LlamaIndex.

import OpenAI from 'openai';
import { Foil } from '@getfoil/foil-js';

// Pass instrumentModules to enable auto-instrumentation
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
  instrumentModules: { openAI: OpenAI },
});

// Now all OpenAI calls are automatically traced
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Don't combine instrumentModules with ctx.llmCall() for the same provider — it creates duplicate spans.

Trace Options

foil.trace(fn, options) accepts the following options:

await foil.trace(async (ctx) => {
  // ...
}, {
  name: 'chat-turn',                    // Name for the root agent span
  input: userMessage,                   // Input to record on the agent span (e.g. user message)
  sessionId: 'session-abc',             // Session ID for conversation tracking
  userId: 'user-123',                   // End user identifier
  userProperties: { plan: 'pro' },      // Additional user attributes
  properties: { custom: 'metadata' },   // Custom properties on the span
  traceId: 'custom-trace-id',           // Custom trace ID (auto-generated if omitted)
  timeout: 300000,                      // Timeout in ms (default: 5 min, 0 to disable)
});

Always pass input so the agent span captures the user message. Without it, the trace will show an empty input.

Agentic Tool-Calling Loop (Manual Tracing)

Use ctx.executeTools() when the LLM decides which tools to call. It reads tool_calls from the OpenAI response, executes each tool, traces them as TOOL spans, and returns formatted messages for the next call.

const tools = [{
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location'],
    },
  },
}];

const toolMap = {
  get_weather: async (args) => fetchWeather(args.location),
};

await foil.trace(async (ctx) => {
  const messages = [{ role: 'user', content: 'What is the weather in Paris?' }];

  let response = await ctx.llmCall('gpt-4o', async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4o', messages, tools,
    });
  });

  while (response.choices[0].message.tool_calls) {
    const toolMessages = await ctx.executeTools(response, toolMap);
    messages.push(response.choices[0].message, ...toolMessages);

    response = await ctx.llmCall('gpt-4o', async () => {
      return await openai.chat.completions.create({
        model: 'gpt-4o', messages, tools,
      });
    });
  }

  return response.choices[0].message.content;
}, { name: 'weather-agent', input: 'What is the weather in Paris?' });

This produces:

Trace: weather-agent
├── llm (gpt-4o) — requests tool calls
│   └── tool (get_weather) — Paris
└── llm (gpt-4o) — synthesizes final answer
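
To make the loop above concrete, the behavior that the docs describe for ctx.executeTools() can be sketched in plain JavaScript: read tool_calls from an OpenAI-style response, run the matching handler from toolMap, and return role: 'tool' messages for the next LLM call. This is a hypothetical simplification for illustration (executeToolsSketch is not the SDK's actual implementation, and it omits the TOOL span tracing the real method performs).

```javascript
// Hypothetical sketch of the executeTools() behavior described above.
// Not the SDK implementation: no spans are created here.
async function executeToolsSketch(response, toolMap) {
  const toolCalls = response.choices[0].message.tool_calls ?? [];
  const messages = [];

  for (const call of toolCalls) {
    const handler = toolMap[call.function.name];
    if (!handler) {
      throw new Error(`No handler registered for tool: ${call.function.name}`);
    }

    // OpenAI returns tool arguments as a JSON string.
    const args = JSON.parse(call.function.arguments);
    const result = await handler(args);

    // Format the result as a `tool` message keyed by the call ID,
    // ready to be appended to `messages` for the next LLM call.
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: typeof result === 'string' ? result : JSON.stringify(result),
    });
  }

  return messages;
}
```

The important detail is the tool_call_id: each result message must echo the ID of the call that produced it, or the model will reject the follow-up request.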

Agentic Tool-Calling Loop (Auto-Instrumentation)

When using instrumentModules, LLM calls are traced automatically, so do NOT wrap them with ctx.llmCall(). Tool executions, however, are NOT automatically traced; use ctx.executeTools() or ctx.tool() to trace them.

import OpenAI from 'openai';
import { Foil } from '@getfoil/foil-js';

const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
  instrumentModules: { openAI: OpenAI },
});

const openai = new OpenAI();

const toolMap = {
  get_weather: async (args) => fetchWeather(args.location),
};

await foil.trace(async (ctx) => {
  const messages = [{ role: 'user', content: userMessage }];

  // LLM call — auto-traced by instrumentModules, no ctx.llmCall() needed
  let response = await openai.chat.completions.create({
    model: 'gpt-4o', messages, tools,
  });

  // Tool executions — must be explicitly traced
  while (response.choices[0].message.tool_calls) {
    const toolMessages = await ctx.executeTools(response, toolMap);
    messages.push(response.choices[0].message, ...toolMessages);

    response = await openai.chat.completions.create({
      model: 'gpt-4o', messages, tools,
    });
  }

  return response.choices[0].message.content;
}, { name: 'chat-turn', input: userMessage });

For code-driven tools (not LLM tool_calls), use ctx.tool():

await foil.trace(async (ctx) => {
  const config = await ctx.tool('load-config', async () => loadConfig());
  // ...
}, { name: 'pipeline', input: query });

Span Kinds

import { SpanKind } from '@getfoil/foil-js';

SpanKind.AGENT      // Root agent span (automatic for trace())
SpanKind.LLM        // Language model calls
SpanKind.TOOL       // Tool/function executions
SpanKind.CHAIN      // Chain of operations
SpanKind.RETRIEVER  // RAG retrieval operations
SpanKind.EMBEDDING  // Embedding model calls
SpanKind.CUSTOM     // Custom operation types

Convenience Methods

await foil.trace(async (ctx) => {
  // LLM call
  const response = await ctx.llmCall('gpt-4o', async () => { /* ... */ });

  // LLM-driven tool execution (recommended for agentic use)
  const toolMessages = await ctx.executeTools(response, toolMap);

  // Code-driven tool execution (for fixed pipeline steps)
  const data = await ctx.tool('fetch-config', async () => loadConfig());

  // Retriever (RAG)
  const docs = await ctx.retriever('vector-db', async () => vectorStore.search(query));

  // Embedding
  const embeddings = await ctx.embedding('text-embedding-3-small', async () => createEmbeddings(texts));

  // Signals & feedback
  await ctx.recordSignal('response_length', response.length);
  await ctx.recordFeedback(true);
  await ctx.recordRating(4.5);
});

OpenTelemetry Integration

For full control over the OpenTelemetry pipeline:

const { Foil } = require('@getfoil/foil-js/otel');

Foil.init({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-ai-agent',
});

Or use FoilSpanProcessor for manual OTEL setup:

const { FoilSpanProcessor } = require('@getfoil/foil-js/otel');
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new FoilSpanProcessor({
  apiKey: process.env.FOIL_API_KEY,
  maxBatchSize: 100,
  scheduledDelayMs: 5000,
}));
provider.register();

Shutdown

Always shut down gracefully to flush pending spans:

process.on('SIGTERM', async () => {
  await foil.shutdown();
  process.exit(0);
});

// Or flush manually
await foil.flush();

Debug Mode

const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent',
  debug: true,
});

Or set the FOIL_DEBUG=true environment variable.

Configuration Options

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| apiKey | string | Yes | - | Your Foil API key |
| agentName | string | No | 'default-agent' | Agent identifier |
| instrumentModules | object | No | - | Module map for auto-instrumentation (e.g., { openAI: OpenAI }) |
| defaultModel | string | No | - | Default model name for spans |
| debug | boolean | No | false | Enable debug logging |

Links

  • Full Documentation — API reference, signals, multimodal content, semantic search, experiments, and more
  • examples/ — Complete working examples

License

MIT