
fn-ai v0.6.0

fn-ai

AI agent toolkit with pluggable provider, tool, and session store layers.

Minimal surface, strict TypeScript, Bun-first. Built on top of the Vercel AI SDK but keeps it entirely internal — you describe providers with baseUrl + apiKey, define tools with TypeBox, persist sessions to memory / file / SQLite / your own backend.

bun add fn-ai
# or
npm install fn-ai

Quick start

import { createFn } from 'fn-ai';

const fn = createFn({
  providers: {
    anthropic: { type: 'anthropic-messages' }, // reads ANTHROPIC_API_KEY
  },
  models: {
    default: { provider: 'anthropic', model: 'claude-sonnet-4-6' },
  },
  defaults: { model: 'default' },
});

const r = await fn.prompt('What is 2 + 2?');
console.log(r.text);

Concepts

| Term | Meaning |
| ------------- | ---------------------------------------------------------------------------------- |
| Fn | Your app's AI instance. Holds presets (providers, models, tools, toolsets, store). |
| Provider | A connection config — type + apiKey + optional baseUrl. |
| Model | A reference to a specific modelId under a provider. |
| Tool | A named function the model can call, with TypeBox-validated args. |
| Agent | A stateful conversation. Runs the agent loop (model ↔ tool ↔ model) internally. |
| Session Store | Pluggable persistence for agent messages. Memory / file / SQLite / custom. |

Providers

Three protocol types are supported. Multiple providers can share the same type with different endpoints.

createFn({
  providers: {
    anthropic: { type: 'anthropic-messages' },
    openai: { type: 'openai-responses' }, // new Responses API
    legacy: { type: 'openai-completions' }, // old Chat Completions
    ollama: { type: 'openai-completions', baseUrl: 'http://localhost:11434/v1', apiKey: 'ollama' },
    deepseek: { type: 'openai-completions', baseUrl: 'https://api.deepseek.com/v1' },
  },
  models: {
    default: { provider: 'anthropic', model: 'claude-sonnet-4-6' },
    fast: { provider: 'anthropic', model: 'claude-haiku-4-5-20251001' },
    gpt: { provider: 'openai', model: 'gpt-4o' },
    local: { provider: 'ollama', model: 'llama3' },
  },
});

API keys fall back to environment variables:

  • anthropic-messages → ANTHROPIC_API_KEY
  • openai-responses / openai-completions → OPENAI_API_KEY

System prompts

Every place that accepts system (createFn({ defaults }), fn.prompt(), agents: { ... }, inline agent/session options) takes string | SystemPart[].

Use the array form when you need per-segment cache control (Anthropic prompt caching) or provider-specific overrides:

fn.agent({
  system: [
    { text: longPreamble, cache: { type: 'ephemeral', ttl: '1h' } }, // cache 1h
    { text: runbook,      cache: true },                              // short cache (5m)
    { text: dynamicFooter },                                          // never cached
  ],
}).session();

Each SystemPart becomes its own system message at the provider layer. cache compiles to providerOptions.anthropic.cacheControl — ignored silently by providers that don't support prompt caching. For anything exotic, drop down to the providerOptions escape hatch:

{ text: '...', providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } } }

A plain string still works everywhere — nothing changes for existing code.
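Conceptually, the cache shorthand lowers to providerOptions.anthropic.cacheControl as described above. A minimal sketch of that mapping — compileCache is hypothetical, not part of fn-ai's API, and the "true means short ephemeral cache" rule is taken from the docs above:

```typescript
// Illustrative only: how the `cache` shorthand could lower to provider options.
type CacheOpt = boolean | { type: 'ephemeral'; ttl?: string };

interface SystemPart {
  text: string;
  cache?: CacheOpt;
  providerOptions?: Record<string, unknown>;
}

function compileCache(part: SystemPart): Record<string, unknown> | undefined {
  if (part.providerOptions) return part.providerOptions; // explicit options win
  if (!part.cache) return undefined;                     // never cached
  const cacheControl =
    part.cache === true
      ? { type: 'ephemeral' }                            // short cache (5m default)
      : { type: part.cache.type, ...(part.cache.ttl ? { ttl: part.cache.ttl } : {}) };
  return { anthropic: { cacheControl } };
}
```

Providers that don't understand anthropic.cacheControl simply ignore it, which is why the shorthand is safe to leave in place when you switch models.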

Single-shot: fn.prompt()

No state, no tool loop — one model call.

const r = await fn.prompt('Write a haiku', { model: 'fast', maxTokens: 128 });
r.text; // string
r.message; // AssistantMessage
r.usage; // { inputTokens, outputTokens }

Streaming: pass onEvent. The Promise still resolves to the full result.

await fn.prompt('Write a poem', {
  onEvent: (e) => {
    if (e.type === 'text-delta') process.stdout.write(e.delta);
  },
});

Multi-turn: fn.agent().session()

fn.agent(name?) returns an agent handle (no I/O, no model call). Call handle.session() to open a stateful Session that runs the agent loop.

// 1. Handle from a preset (registered via `createFn({ agents })`)
const researcher = fn.agent('researcher');

// 2. Or inline — anonymous handle
const adhoc = fn.agent({
  model: 'default',
  system: 'You are a senior developer.',
  tools: ['search', 'bash'], // or toolset: 'coding'
  maxTurns: 20,
});

// 3. Open a session. Inline options shallow-merge over the handle's preset.
const session = await researcher.session({ sessionId: 'chat-abc' });

// Each send is one user turn; the loop runs until the model stops calling tools.
const r1 = await session.send('Find all TODOs in src/');
const r2 = await session.send('Fix the most critical one');

// Runtime config via instance properties (not send options)
session.model = 'fast';
session.system = 'Be concise.';
session.tools = ['search'];

// Streaming
await session.send('Summarize', {
  onEvent: (e) => {
    if (e.type === 'text-delta') process.stdout.write(e.delta);
  },
});

// Inspect state
console.log(session.messages, session.sessionId);

session.close();

Recovery

Sessions bind to a sessionId. If a store is configured, messages are persisted automatically.

// New session (auto-generated id)
const s1 = await fn.agent().session();

// New session with explicit id (throws if id exists in store)
const s2 = await fn.agent().session({ sessionId: 'chat-abc' });

// Resume — throws if not found
const s3 = await fn.agent().session({ sessionId: 'chat-abc', mode: 'resume' });

// Open — resume if exists, otherwise create
const s4 = await fn.agent().session({ sessionId: 'chat-abc', mode: 'open' });

Persistence strategies

persist: 'auto'; // default — append/save after each send
persist: 'manual'; // call session.save() / session.flush() yourself
persist: 'off'; // in-memory only, even if a store is configured
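With persist: 'manual' you decide when writes happen. One pattern is batching: save every N sends and flush the remainder at shutdown. A sketch against a hypothetical object with a save() method (batchedSaver is illustrative, not a fn-ai export):

```typescript
// Sketch: under persist: 'manual', save only every `every` sends,
// and flush any pending writes on demand. `Saveable` stands in for a Session.
interface Saveable {
  save(): Promise<void>;
}

function batchedSaver(session: Saveable, every: number) {
  let pending = 0;
  return {
    async afterSend() {
      pending += 1;
      if (pending >= every) {
        await session.save();
        pending = 0;
      }
    },
    async flush() {
      if (pending > 0) {
        await session.save();
        pending = 0;
      }
    },
  };
}
```

Call afterSend() after each session.send() and flush() before session.close().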

Named presets (agents)

Register reusable agent configurations on createFn() and reference them by name. Inline options shallow-merge over preset fields (inline values win when not undefined; tools arrays replace rather than concatenate). Session-identity fields (sessionId / name / metadata) are not part of a preset.
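The merge rule just described can be sketched as a plain function — mergeAgentConfig is illustrative, not a fn-ai export:

```typescript
// Illustrative sketch of the preset/inline merge rule: inline values win
// unless undefined; the merge is shallow, so arrays like `tools` replace
// the preset's array rather than concatenating with it.
type AgentConfig = Record<string, unknown>;

function mergeAgentConfig(preset: AgentConfig, inline: AgentConfig): AgentConfig {
  const out: AgentConfig = { ...preset };
  for (const [key, value] of Object.entries(inline)) {
    if (value !== undefined) out[key] = value; // no deep merging of nested objects
  }
  return out;
}
```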

const fn = createFn({
  providers: { anthropic: { type: 'anthropic-messages' } },
  models: {
    fast: { provider: 'anthropic', model: 'claude-haiku-4-5-20251001' },
    smart: { provider: 'anthropic', model: 'claude-sonnet-4-6' },
  },
  tools: { search: searchTool, bash: bashTool },
  agents: {
    researcher: {
      model: 'smart',
      system: 'You are a careful researcher. Cite sources.',
      tools: ['search'],
      maxTurns: 10,
    },
    coder: {
      model: 'smart',
      tools: ['search', 'bash'],
      beforeToolCall: (ctx) => {
        if (/rm\s+-rf/.test(String((ctx.args as { command?: string }).command))) {
          return { block: true, reason: 'destructive command' };
        }
      },
    },
  },
  defaults: { model: 'fast', agent: 'researcher' },
});

// Load the preset as a handle
const researcher = fn.agent('researcher');

// Open a session — no overrides
await researcher.session();

// Open a session with per-call overrides
await researcher.session({ model: 'fast', maxTurns: 3, sessionId: 'run-42' });

// Anonymous inline handle
await fn.agent({ model: 'fast', system: 'Be punchy.' }).session();

// fn.agent() with no args uses defaults.agent
await fn.agent().session();

Managing sessions via a handle

Every handle also exposes the bound store's per-agent session CRUD — analogous to fn.store() but scoped to the store the handle is bound to:

const researcher = fn.agent('researcher');

await researcher.list({ limit: 10 });
await researcher.get('chat-abc');
await researcher.exists('chat-abc');
await researcher.update('chat-abc', { name: 'Renamed', metadata: { starred: true } });
await researcher.delete('chat-abc');

Tools

Tools use TypeBox for schema. Arguments are validated before execute is called.

import { defineTool, Type } from 'fn-ai';

const searchTool = defineTool({
  name: 'search',
  description: 'Search the web',
  parameters: Type.Object({
    query: Type.String({ description: 'Search query' }),
    limit: Type.Optional(Type.Number()),
  }),
  execute: async (_id, params, signal) => {
    const results = await search(params.query, { signal });
    return { content: [{ type: 'text', text: JSON.stringify(results) }] };
  },
});

Tool results can be text or images:

return {
  content: [
    { type: 'text', text: 'Here is the chart:' },
    { type: 'image', data: base64, mimeType: 'image/png' },
  ],
};

Built-in tools (fn-ai/tools)

Nine battle-tested tools (file I/O, search, shell, network) ship in the optional subpath — import only what you need. Register them on createFn({ tools }) and agents reference them by name.

import { createFn } from 'fn-ai';
import { createAllTools, tools } from 'fn-ai/tools';

// Option A — zero-config defaults (bound to process.cwd())
const fn = createFn({
  providers: { anthropic: { type: 'anthropic-messages' } },
  models: { default: { provider: 'anthropic', model: 'claude-sonnet-4-6' } },
  tools: {
    read: tools.read,
    grep: tools.grep,
    bash: tools.bash,
    webFetch: tools.webFetch,
  },
  toolsets: { coder: ['read', 'grep', 'bash', 'webFetch'] },
  defaults: { model: 'default' },
});
const agent = fn.agent({ toolset: 'coder' });

// Option B — factory with explicit cwd
const built = createAllTools('/srv/project');
const fn2 = createFn({
  providers: { anthropic: { type: 'anthropic-messages' } },
  models: { default: { provider: 'anthropic', model: 'claude-sonnet-4-6' } },
  tools: { ...built }, // { read, write, edit, grep, find, ls, bash, webFetch, webSearch }
  defaults: { model: 'default' },
});

| Tool | Description |
| --- | --- |
| read | Read a file (text with line numbers, images as image content, PDFs excluded). Supports offset/limit with continuation hints. |
| write | Overwrite a file. createDirs: true makes missing parents. |
| edit | Apply one or more { oldText, newText } replacements. Preserves BOM/CRLF. Accepts legacy { old_string, new_string }. |
| grep | Search file contents via ripgrep (downloaded on demand). Regex/literal, context, glob, gitignore-aware. |
| find | Find files by glob via fd (downloaded on demand). |
| ls | List a directory (non-recursive). |
| bash | Run a shell command. Output over 50KB spills to a temp file and the path is returned. |
| webFetch | Fetch a URL, convert HTML to Markdown (via turndown + linkedom). |
| webSearch | Web search via html.duckduckgo.com — no API key required. Use createBaiduOperations() as an alternate backend in networks where DuckDuckGo is unreachable. |

Presets on the tools namespace: fileTools, readOnlyTools, shellTools, webTools, all. Aggregate factories: createAllTools(cwd, opts?), createCodingTools(cwd, opts?), createReadOnlyTools(cwd, opts?).

Operations SPI — every tool exposes an operations option letting you swap the underlying I/O (file system, spawn, fetch, search source). Use it for SSH-backed agents, sandboxes, or tests.

Security — the built-in tools run with your current process's permissions. If you expose bash, edit, or write to a model you don't trust, wrap the agent with beforeToolCall to whitelist commands and paths.
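A minimal allow-list guard, written as a standalone beforeToolCall function. The allow-lists, the `command`/`path` argument names, and the tool names guarded are illustrative assumptions — match them to the tools you actually register:

```typescript
// Sketch of a beforeToolCall guard: permit only known-safe shell commands
// and writes inside a project directory; block everything else.
interface ToolCallCtx {
  toolName: string;
  args: Record<string, unknown>;
}
interface BlockResult {
  block: true;
  reason: string;
}

const ALLOWED_COMMANDS = ['ls', 'cat', 'git status']; // illustrative allow-list
const ALLOWED_PATH_PREFIX = '/srv/project/';           // illustrative project root

function guard(ctx: ToolCallCtx): BlockResult | undefined {
  if (ctx.toolName === 'bash') {
    const cmd = String(ctx.args.command ?? '');
    if (!ALLOWED_COMMANDS.some((a) => cmd === a || cmd.startsWith(a + ' '))) {
      return { block: true, reason: `command not allow-listed: ${cmd}` };
    }
  }
  if (ctx.toolName === 'write' || ctx.toolName === 'edit') {
    const path = String(ctx.args.path ?? '');
    if (!path.startsWith(ALLOWED_PATH_PREFIX)) {
      return { block: true, reason: `path outside project: ${path}` };
    }
  }
  return undefined; // allow
}
```

Pass it as beforeToolCall on the agent config; returning undefined lets the call proceed.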

Binary dependencies — grep and find use ripgrep/fd. On first use the tools probe your PATH, then download the matching platform archive from GitHub Releases into ~/.fn/bin/. Set FN_OFFLINE=1 to disable downloads.

Session stores

Register named stores at Fn creation; agents pick one by name.

import { createFn, memoryStore, fileStore, sqliteStore } from 'fn-ai';

const fn = createFn({
  // ...providers, models, etc.
  stores: {
    memory: memoryStore(), // in-memory, single process
    persistent: fileStore({ dir: '~/.fn/sessions', lock: true }),
    archive: sqliteStore({ path: '~/.fn/archive.db' }),
  },
  defaults: { store: 'persistent' }, // fallback when agent omits `store`
});

// Each agent picks its own store
await fn.agent({ store: 'memory' }).session(); // ephemeral chat
await fn.agent({ store: 'persistent' }).session(); // long-lived
await fn.agent({ store: customStore }).session(); // inline instance, bypassing names
await fn.agent({}).session(); // uses defaults.store
await fn.agent({ persist: 'off' }).session(); // never write, even if a store is bound

Built-in implementations:

  • memoryStore() — in-memory Map, single process, lost on exit
  • fileStore({ dir, lock }) — JSON files, optional proper-lockfile for cross-process safety
  • sqliteStore({ path }) — bun:sqlite with WAL mode for automatic write serialization

Custom backends implement the SessionStore interface:

import type { SessionStore } from 'fn-ai';

const redisStore: SessionStore = {
  async create(s) {
    /* ... */
  },
  async load(id) {
    /* ... */
  },
  async exists(id) {
    /* ... */
  },
  async append(id, messages) {
    /* ... */
  },
  async save(id, session) {
    /* ... */
  },
  async update(id, patch) {
    /* ... */
  },
  async list(filter) {
    /* ... */
  },
  async delete(id) {
    /* ... */
  },
};

Cross-session operations

fn.store(name?) returns a read-only management facade. Omit name to use defaults.store.

await fn.store('persistent').list({ limit: 20, orderBy: 'updatedAt', order: 'desc' });
await fn.store('persistent').get('chat-abc');
await fn.store('persistent').exists('chat-abc');
await fn.store('persistent').update('chat-abc', { name: 'Code review' });
await fn.store('persistent').delete('chat-abc');

// Default store
await fn.store().list();

The facade only exposes management operations — agent-internal methods (create/append/save/load) stay hidden.
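As an example of what the facade enables, here is a sketch of selecting stale sessions so they can be deleted via store.delete(id). The selection logic is pure; SessionMeta's shape (sessionId, updatedAt as epoch ms) is an assumption about what list() returns, not fn-ai's exact type:

```typescript
// Sketch: pick sessions untouched for more than `maxAgeMs`.
// SessionMeta's shape is assumed for illustration.
interface SessionMeta {
  sessionId: string;
  updatedAt: number; // epoch ms
}

function staleSessionIds(sessions: SessionMeta[], now: number, maxAgeMs: number): string[] {
  return sessions
    .filter((s) => now - s.updatedAt > maxAgeMs)
    .map((s) => s.sessionId);
}
```

Feed the result of fn.store().list() through this, then delete each returned id.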

Hooks

Inject logic around tool calls and model input:

const agent = fn.agent({
  beforeToolCall: async (ctx) => {
    if (ctx.toolName === 'bash' && /rm\s+-rf/.test(String((ctx.args as any).command))) {
      return { block: true, reason: 'Dangerous command' };
    }
  },
  afterToolCall: async (ctx) => {
    logger.info(`[${ctx.toolName}] ${ctx.durationMs}ms`);
  },
  transformContext: (messages) => {
    // Context window management — e.g. keep only last 50 messages
    return messages.slice(-50);
  },
});
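The slice(-50) above drops everything old, including any leading context you might want to pin. A variant that keeps the first message plus the most recent ones — illustrative only, with the message type simplified:

```typescript
// Sketch of a transformContext that pins the first message (e.g. task framing)
// while trimming the middle, keeping at most `max` messages total.
interface Msg {
  role: string;
  content: string;
}

function trimKeepFirst(messages: Msg[], max: number): Msg[] {
  if (messages.length <= max) return messages;
  return [messages[0], ...messages.slice(-(max - 1))];
}
```

Passed as transformContext: (messages) => trimKeepFirst(messages, 50), it behaves like the slice example but never loses the opening message.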

Concurrency

  • Same process, same sessionId: opening a second agent throws SessionInUseError.
  • Different processes, same sessionId: handled by the store implementation (fileStore uses proper-lockfile, sqliteStore uses WAL mode).

Error types

All errors extend FnError with a code field. Key classes:

  • ModelError and subclasses: ModelAuthError, ModelRateLimitError, ModelTimeoutError, ModelAbortError, ModelRequestError, ModelResponseError
  • ToolError and subclasses: ToolNotFoundError, ToolValidationError, ToolExecutionError, ToolBlockedError
  • SessionExistsError, SessionNotFoundError, SessionInUseError, StoreError
  • AgentConfigError, MaxTurnsExceededError

import { ModelRateLimitError, MaxTurnsExceededError } from 'fn-ai';

try {
  await session.send('…');
} catch (err) {
  if (err instanceof ModelRateLimitError) await wait(1000);
  else if (err instanceof MaxTurnsExceededError) console.warn('loop limit hit');
  else throw err;
}
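A common follow-on is wrapping sends in a bounded retry with backoff. A generic sketch — the isRetryable predicate stands in for checks like err instanceof ModelRateLimitError, and withRetry is not a fn-ai export:

```typescript
// Sketch: retry a call up to `attempts` times while the error is retryable,
// with simple exponential backoff between attempts.
async function withRetry<T>(
  call: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await call();
    } catch (err) {
      if (i + 1 >= attempts || !isRetryable(err)) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i)); // 250ms, 500ms, …
    }
  }
}
```

Usage: `withRetry(() => session.send('…'), (e) => e instanceof ModelRateLimitError)`.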

Requirements

  • Bun 1.1+ or Node 20+
  • TypeScript 5+

License

MIT