@timemachine-sdk/sdk

v0.1.1

Time Machine SDK for AI Agent Observability — capture, debug, and replay AI agent executions


Why Time Machine?

AI agents are hard to debug. When a 15-step agent workflow fails at step 12, you don't want to re-run the entire thing. Time Machine captures every execution step — LLM calls, tool invocations, decisions — so you can inspect what happened, fork from any step, and replay with modifications.

  • 3 lines to integrate — drop-in for any TypeScript agent
  • Zero-overhead in production — fail-open design, async batching, never crashes your app
  • LangChain native — automatic capture via callback handler, zero manual instrumentation

Install

npm install @timemachine-sdk/sdk

Quick Start

import { TimeMachine } from '@timemachine-sdk/sdk';

const tm = new TimeMachine({ apiKey: 'tm_...' });
const execution = await tm.startExecution({ name: 'my-agent-run' });

// Record an LLM call
const step = execution.step('llm_call', { model: 'gpt-4o', prompt: 'Analyze this data...' });
const result = await yourLLMCall();
await step.complete({ output: { text: result }, tokensIn: 150, tokensOut: 300 });

// Record a tool use
const toolStep = execution.step('tool_use', { tool: 'web_search', query: 'latest news' });
const searchResults = await webSearch('latest news');
await toolStep.complete({ output: { results: searchResults } });

await execution.complete();
// View at https://app.timemachine.dev

LangChain Integration

Automatically capture every LLM call, tool invocation, and agent decision — zero manual instrumentation.

import { TimeMachine } from '@timemachine-sdk/sdk';
import { createLangChainHandler } from '@timemachine-sdk/sdk/adapters';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor } from 'langchain/agents';

const tm = new TimeMachine({ apiKey: 'tm_...' });

// One-liner: creates execution + callback handler together
const { handler, execution } = await createLangChainHandler(tm, {
  name: 'research-agent',
  metadata: { model: 'gpt-4o' },
});

const agent = new AgentExecutor({ /* your agent config */ });

// Pass the handler — everything is captured automatically
await agent.invoke(
  { input: 'Research the latest AI papers' },
  { callbacks: [handler] }
);

await execution.complete();

What gets captured automatically:

  • LLM calls with token counts, costs, and model info
  • Tool invocations with inputs and outputs
  • Agent decisions and reasoning
  • Retriever queries and returned documents
  • Errors at any step

Generic Wrapper

Wrap any agent framework with manual step recording:

import { TimeMachine } from '@timemachine-sdk/sdk';

const tm = new TimeMachine({ apiKey: 'tm_...' });

async function runAgent(query: string) {
  const execution = await tm.startExecution({
    name: 'custom-agent',
    metadata: { query },
  });

  try {
    // Record each step of your agent loop
    const planStep = execution.step('decision', { action: 'plan' });
    const plan = await generatePlan(query);
    await planStep.complete({ output: { plan } });

    for (const task of plan.tasks) {
      const taskStep = execution.step('tool_use', { tool: task.tool, args: task.args });
      const result = await executeTool(task);
      await taskStep.complete({
        output: { result },
        tokensIn: result.usage?.input,
        tokensOut: result.usage?.output,
      });
    }

    await execution.complete();
  } catch (error) {
    await execution.fail(error as Error);
  }
}

Configuration

TimeMachine constructor options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | (required) | Your project API key (tm_...). Get one at app.timemachine.dev. |
| baseUrl | string | https://api.timemachine.dev | API endpoint. Override for self-hosted or local development. |
| maxRetries | number | 3 | Max retry attempts for failed API calls. Uses exponential backoff. |
| debug | boolean | false | Log SDK activity to the console. Useful for development. |
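Taken together, a constructor call using these options might look like the following sketch (values are illustrative; apiKey is the only required option):

```typescript
import { TimeMachine } from '@timemachine-sdk/sdk';

const tm = new TimeMachine({
  apiKey: process.env.TIMEMACHINE_API_KEY ?? 'tm_...',
  baseUrl: 'https://api.timemachine.dev', // default; point at a self-hosted instance if needed
  maxRetries: 3,                          // failed API calls retry with exponential backoff
  debug: true,                            // log SDK activity while developing
});
```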

startExecution options

| Option | Type | Description |
|--------|------|-------------|
| name | string | Human-readable name for this execution run. |
| metadata | Record<string, unknown> | Arbitrary metadata (model, version, environment, etc.). |

Step types

| Type | Description |
|------|-------------|
| llm_call | LLM/chat model invocation |
| tool_use | Tool or function call |
| decision | Agent routing or planning decision |
| retrieval | RAG or document retrieval |
| human_input | Human-in-the-loop input |
| transform | Data transformation step |
| custom | Anything else |
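If you are routing dynamic values into execution.step, it can help to model the table above as a union type. This is a hypothetical sketch, not the SDK's actual type exports:

```typescript
// Illustrative only: mirrors the step-type table; the SDK may export its own types.
const STEP_TYPES = [
  'llm_call', 'tool_use', 'decision', 'retrieval',
  'human_input', 'transform', 'custom',
] as const;

type StepType = (typeof STEP_TYPES)[number];

// Narrowing guard for untrusted strings before passing them to execution.step
function isStepType(s: string): s is StepType {
  return (STEP_TYPES as readonly string[]).includes(s);
}
```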

Sub-path Exports

Import only what you need to keep your bundle small:

// Core SDK
import { TimeMachine, Execution, StepRecorder } from '@timemachine-sdk/sdk';

// LangChain adapter (only loads if @langchain/core is installed)
import { TimeMachineCallbackHandler, createLangChainHandler } from '@timemachine-sdk/sdk/adapters';

// Utility functions (cost calculation, token extraction)
import { calculateCost, MODEL_PRICING } from '@timemachine-sdk/sdk/utils';

Cost Tracking

The SDK includes a built-in pricing table for 30+ models (OpenAI, Anthropic, Google, Mistral, Cohere) and automatically calculates costs from token usage.

import { calculateCost, hasModelPricing } from '@timemachine-sdk/sdk/utils';

// Check if a model has known pricing
hasModelPricing('gpt-4o'); // true

// Calculate cost
calculateCost('gpt-4o', 1000, 500); // $0.00625
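The underlying per-token arithmetic is straightforward. A self-contained sketch of how such a lookup could work, using made-up per-million-token rates rather than the SDK's actual MODEL_PRICING table:

```typescript
// Hypothetical pricing table: per 1M tokens, in USD. Rates are illustrative only.
const PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
};

// Returns null for unknown models, mirroring a fail-open style.
function estimateCost(model: string, tokensIn: number, tokensOut: number): number | null {
  const p = PRICING[model];
  if (!p) return null;
  return (tokensIn / 1_000_000) * p.input + (tokensOut / 1_000_000) * p.output;
}
```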

Fail-Open Design

The SDK is designed to never crash your application. All API calls are wrapped with error handling — if Time Machine's API is down or your key is invalid, your agent keeps running. Errors are swallowed silently by default and logged to the console when debug: true is set.

Dashboard

View and debug your executions at app.timemachine.dev:

  • Execution timeline — step-by-step view of every agent run
  • Fork & replay — branch from any step and re-run with modifications
  • Visual diffs — compare original vs. replayed executions side by side
  • Cost analytics — track token usage and costs across runs

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MIT