# @timemachine-sdk/sdk

v0.1.1

Time Machine SDK for AI Agent Observability — capture, debug, and replay AI agent executions.
## Why Time Machine?
AI agents are hard to debug. When a 15-step agent workflow fails at step 12, you don't want to re-run the entire thing. Time Machine captures every execution step — LLM calls, tool invocations, decisions — so you can inspect what happened, fork from any step, and replay with modifications.
- 3 lines to integrate — drop-in for any TypeScript agent
- Zero overhead in production — fail-open design, async batching, never crashes your app
- LangChain native — automatic capture via callback handler, zero manual instrumentation
## Install

```bash
npm install @timemachine-sdk/sdk
```

## Quick Start
```typescript
import { TimeMachine } from '@timemachine-sdk/sdk';

const tm = new TimeMachine({ apiKey: 'tm_...' });
const execution = await tm.startExecution({ name: 'my-agent-run' });

// Record an LLM call
const step = execution.step('llm_call', { model: 'gpt-4o', prompt: 'Analyze this data...' });
const result = await yourLLMCall();
await step.complete({ output: { text: result }, tokensIn: 150, tokensOut: 300 });

// Record a tool use
const toolStep = execution.step('tool_use', { tool: 'web_search', query: 'latest news' });
const searchResults = await webSearch('latest news');
await toolStep.complete({ output: { results: searchResults } });

await execution.complete();
// View at https://app.timemachine.dev
```

## LangChain Integration
Automatically capture every LLM call, tool invocation, and agent decision — zero manual instrumentation.
```typescript
import { TimeMachine } from '@timemachine-sdk/sdk';
import { createLangChainHandler } from '@timemachine-sdk/sdk/adapters';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor } from 'langchain/agents';

const tm = new TimeMachine({ apiKey: 'tm_...' });

// One-liner: creates execution + callback handler together
const { handler, execution } = await createLangChainHandler(tm, {
  name: 'research-agent',
  metadata: { model: 'gpt-4o' },
});

const agent = new AgentExecutor({ /* your agent config */ });

// Pass the handler — everything is captured automatically
await agent.invoke(
  { input: 'Research the latest AI papers' },
  { callbacks: [handler] }
);

await execution.complete();
```

What gets captured automatically:
- LLM calls with token counts, costs, and model info
- Tool invocations with inputs and outputs
- Agent decisions and reasoning
- Retriever queries and returned documents
- Errors at any step
## Generic Wrapper
Wrap any agent framework with manual step recording:
```typescript
import { TimeMachine } from '@timemachine-sdk/sdk';

const tm = new TimeMachine({ apiKey: 'tm_...' });

async function runAgent(query: string) {
  const execution = await tm.startExecution({
    name: 'custom-agent',
    metadata: { query },
  });

  try {
    // Record each step of your agent loop
    const planStep = execution.step('decision', { action: 'plan' });
    const plan = await generatePlan(query);
    await planStep.complete({ output: { plan } });

    for (const task of plan.tasks) {
      const taskStep = execution.step('tool_use', { tool: task.tool, args: task.args });
      const result = await executeTool(task);
      await taskStep.complete({
        output: { result },
        tokensIn: result.usage?.input,
        tokensOut: result.usage?.output,
      });
    }

    await execution.complete();
  } catch (error) {
    await execution.fail(error as Error);
  }
}
```

## Configuration
### `TimeMachine` constructor options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `apiKey` | `string` | (required) | Your project API key (`tm_...`). Get one at app.timemachine.dev. |
| `baseUrl` | `string` | `https://api.timemachine.dev` | API endpoint. Override for self-hosted or local development. |
| `maxRetries` | `number` | `3` | Max retry attempts for failed API calls. Uses exponential backoff. |
| `debug` | `boolean` | `false` | Log SDK activity to the console. Useful for development. |
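The `maxRetries` option implies a retry loop with exponentially growing delays. The helper below is a minimal sketch of that backoff strategy in general, not the SDK's actual transport code; the function name and delay base are illustrative:

```typescript
// Illustrative retry helper: waits baseDelayMs * 2^attempt between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

With `maxRetries: 3` a call is attempted at most four times (the initial try plus three retries) before the last error is rethrown.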
### `startExecution` options
| Option | Type | Description |
|--------|------|-------------|
| `name` | `string` | Human-readable name for this execution run. |
| `metadata` | `Record<string, unknown>` | Arbitrary metadata (model, version, environment, etc.). |
### Step types
| Type | Description |
|------|-------------|
| `llm_call` | LLM/chat model invocation |
| `tool_use` | Tool or function call |
| `decision` | Agent routing or planning decision |
| `retrieval` | RAG or document retrieval |
| `human_input` | Human-in-the-loop input |
| `transform` | Data transformation step |
| `custom` | Anything else |
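In TypeScript, a closed set like this is naturally modeled as a string-literal union, so a typo in a step type fails at compile time. The sketch below is hypothetical; the SDK may or may not export such a type:

```typescript
// Hypothetical union of the step types listed above.
type StepType =
  | 'llm_call'
  | 'tool_use'
  | 'decision'
  | 'retrieval'
  | 'human_input'
  | 'transform'
  | 'custom';

const STEP_TYPES: readonly StepType[] = [
  'llm_call', 'tool_use', 'decision', 'retrieval',
  'human_input', 'transform', 'custom',
];

// Runtime guard, useful when step types arrive from untyped sources
// such as JSON payloads or user configuration.
function isStepType(value: string): value is StepType {
  return (STEP_TYPES as readonly string[]).includes(value);
}
```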
## Sub-path Exports
Import only what you need to keep your bundle small:
```typescript
// Core SDK
import { TimeMachine, Execution, StepRecorder } from '@timemachine-sdk/sdk';

// LangChain adapter (only loads if @langchain/core is installed)
import { TimeMachineCallbackHandler, createLangChainHandler } from '@timemachine-sdk/sdk/adapters';

// Utility functions (cost calculation, token extraction)
import { calculateCost, MODEL_PRICING } from '@timemachine-sdk/sdk/utils';
```

## Cost Tracking
The SDK includes a built-in pricing table for 30+ models (OpenAI, Anthropic, Google, Mistral, Cohere) and automatically calculates costs from token usage.
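Cost calculation is plain arithmetic over per-million-token rates. The sketch below shows the shape of that computation with a made-up model name and placeholder rates; the SDK's own `MODEL_PRICING` table is the source of truth for real prices:

```typescript
// Illustrative pricing table: the model name and rates are placeholders,
// NOT real prices for any provider.
const PRICING_USD_PER_MILLION: Record<string, { input: number; output: number }> = {
  'example-model': { input: 2.5, output: 7.5 },
};

function estimateCost(model: string, tokensIn: number, tokensOut: number): number | undefined {
  const rates = PRICING_USD_PER_MILLION[model];
  if (!rates) return undefined; // unknown model: no estimate rather than a wrong one
  return (tokensIn / 1_000_000) * rates.input + (tokensOut / 1_000_000) * rates.output;
}
```

Returning `undefined` for unknown models mirrors the fail-open philosophy: a missing price should never abort a run.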
```typescript
import { calculateCost, hasModelPricing } from '@timemachine-sdk/sdk/utils';

// Check if a model has known pricing
hasModelPricing('gpt-4o'); // true

// Calculate cost
calculateCost('gpt-4o', 1000, 500); // $0.00625
```

## Fail-Open Design
The SDK is designed never to crash your application. All API calls are wrapped with error handling — if Time Machine's API is down or your key is invalid, your agent keeps running. Errors are suppressed silently by default and logged to the console only when `debug: true` is set.
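The fail-open behavior amounts to wrapping every outbound call so a failure degrades to a no-op. A minimal sketch of that wrapper (the function name is illustrative; `debug` mirrors the constructor option above):

```typescript
// Illustrative fail-open wrapper: errors never propagate to the caller.
async function failOpen<T>(
  op: () => Promise<T>,
  debug = false,
): Promise<T | undefined> {
  try {
    return await op();
  } catch (err) {
    if (debug) console.error('[timemachine] suppressed error:', err);
    return undefined; // the caller's agent keeps running regardless
  }
}
```

The trade-off is that telemetry can be lost without any visible signal; enabling `debug: true` during development surfaces the suppressed errors.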
## Dashboard
View and debug your executions at app.timemachine.dev:
- Execution timeline — step-by-step view of every agent run
- Fork & replay — branch from any step and re-run with modifications
- Visual diffs — compare original vs. replayed executions side by side
- Cost analytics — track token usage and costs across runs
## Contributing
See CONTRIBUTING.md for development setup and guidelines.
