@mode-7/tracelm
TypeScript SDK for TraceLM - LLM observability and logging.
Installation
npm install @mode-7/tracelm
Quick Start
import { TraceLM } from "@mode-7/tracelm";
const tracelm = new TraceLM({
apiKey: process.env.TRACELM_API_KEY!,
applicationId: process.env.TRACELM_APP_ID!,
});
// Simple logging
await tracelm.info("User signed up");
await tracelm.error("Something went wrong");
// LLM observability
await tracelm.agent("Chat completion", {
model: "gpt-4",
provider: "openai",
messages: [
{ role: "user", content: "Hello" },
{ role: "assistant", content: "Hi!" },
],
});
Features
- Simple logging - debug, info, warn, error, fatal methods
- LLM observability - Track model usage, tokens, costs, and conversations
- Automatic enrichment - Token estimation, cost calculation, span extraction
- Context helpers - withUser, withTrace, withSession, withMetadata
- Batching - Optional event batching for high-volume applications
- TypeScript - Fully typed with inline documentation
Configuration
const tracelm = new TraceLM({
// Required
apiKey: "your-api-key",
applicationId: "your-app-id",
// Optional
baseUrl: "https://api.tracelm.com", // Custom API URL
environment: "production", // Environment name
timeout: 10000, // Request timeout (ms)
throwOnError: false, // Throw on API errors
batching: false, // Enable event batching
batchInterval: 1000, // Batch flush interval (ms)
batchSize: 100, // Max batch size
});
Logging Methods
Simple Logs
await tracelm.debug("Detailed debug info");
await tracelm.info("User action completed");
await tracelm.warn("Rate limit approaching");
await tracelm.error("Operation failed");
await tracelm.fatal("Critical system failure");
With Options
await tracelm.info("User signed up", {
user: { id: "user_123", email: "[email protected]" },
metadata: { plan: "pro" },
trace_id: "signup-flow-abc",
});
Error Logging
try {
await riskyOperation();
} catch (err) {
await tracelm.error(err); // Accepts Error objects
}
LLM Events
Basic Usage
await tracelm.agent("Chat completion", {
model: "gpt-4",
provider: "openai",
messages: [
{ role: "system", content: "You are helpful." },
{ role: "user", content: "Hello" },
{ role: "assistant", content: "Hi there!" },
],
});
With Full Metrics
const startTime = Date.now();
const response = await openai.chat.completions.create({...});
await tracelm.agent("Chat completion", {
model: response.model,
provider: "openai",
input_tokens: response.usage?.prompt_tokens,
output_tokens: response.usage?.completion_tokens,
latency_ms: Date.now() - startTime,
messages: messages,
output: response.choices[0].message.content,
});
Tool Calls (Spans Auto-Extracted)
await tracelm.agent("Tool-assisted response", {
model: "gpt-4",
provider: "openai",
messages: [
{ role: "user", content: "What's the weather in London?" },
{
role: "assistant",
content: null,
tool_calls: [
{
id: "call_1",
type: "function",
function: { name: "get_weather", arguments: '{"city":"London"}' },
},
],
},
{ role: "tool", content: '{"temp":18}', tool_call_id: "call_1" },
{ role: "assistant", content: "It's 18°C in London." },
],
});
Context Helpers
Chain context methods to attach default values to all events:
const logger = tracelm
.withUser({ id: "user_123" })
.withTrace("request-abc")
.withSession("session-xyz")
.withMetadata({ version: "1.2.3" });
// All events include the attached context
await logger.info("User action");
await logger.agent("LLM call", { model: "gpt-4", provider: "openai" });
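Because the helpers chain and each returns a logger you can keep reusing, a common pattern is deriving a request-scoped logger inside each handler. A minimal sketch assuming an Express-style server; the route and the x-user-id header are placeholders for whatever your application actually uses:
import express from "express";
import { randomUUID } from "node:crypto";
import { TraceLM } from "@mode-7/tracelm";
const app = express();
const tracelm = new TraceLM({
  apiKey: process.env.TRACELM_API_KEY!,
  applicationId: process.env.TRACELM_APP_ID!,
});
app.post("/chat", async (req, res) => {
  // Request-scoped logger: every event below shares the same user and trace id.
  const logger = tracelm
    .withUser({ id: String(req.headers["x-user-id"] ?? "anonymous") })
    .withTrace(randomUUID());
  await logger.info("Chat request received");
  // ... call your model, then record it with logger.agent(...) ...
  res.sendStatus(200);
});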
Batching
For high-volume applications, enable batching to reduce API calls:
const tracelm = new TraceLM({
apiKey: "...",
applicationId: "...",
batching: true,
batchSize: 50,
batchInterval: 2000,
});
// Events are batched automatically
tracelm.info("Event 1");
tracelm.info("Event 2");
// Flush before shutdown
await tracelm.flush();
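With batching enabled, events still sitting in the buffer may never be sent if the process exits before the next flush, so it is worth calling flush from your shutdown path. A minimal sketch for a Node.js process, assuming flush resolves once pending events have been delivered:
// Flush buffered events before the process is terminated.
process.on("SIGTERM", async () => {
  await tracelm.flush();
  process.exit(0);
});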
TypeScript Types
All types are exported for use in your application:
import type {
TraceLMEvent,
TraceLMLLM,
LLMMessage,
TraceLMUser,
LogLevel,
} from "@mode-7/tracelm";
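The exported types can also annotate your own helpers. A small sketch; buildPrompt is a hypothetical function, and the LLMMessage shape is assumed to match the role/content objects used in the examples above:
import type { LLMMessage } from "@mode-7/tracelm";
// Hypothetical helper: builds the message array passed to tracelm.agent().
function buildPrompt(userInput: string): LLMMessage[] {
  return [
    { role: "system", content: "You are helpful." },
    { role: "user", content: userInput },
  ];
}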
Automatic Backend Computation
TraceLM automatically computes these fields server-side if not provided:
- Token counts - Estimated via tiktoken from messages/output
- Cost - Calculated from provider pricing tables
- Previews - Extracted from message content
- Spans - Auto-extracted from messages with tool calls
- Security scans - PII detection, injection detection
- Bot detection - From request context
This means you can send minimal payloads and let TraceLM handle the rest:
// Minimal - TraceLM computes tokens, cost, previews
await tracelm.agent("Chat", {
model: "gpt-4",
provider: "openai",
messages: [...],
});
// Full - Use your own values
await tracelm.agent("Chat", {
model: "gpt-4",
provider: "openai",
input_tokens: 150,
output_tokens: 89,
cost: 0.0045,
latency_ms: 1200,
messages: [...],
});
License
MIT
