threadline-sdk
The persistent context layer for AI agents.
Every agent you build starts from zero. Threadline changes that.
Install
npm install threadline-sdk

Usage
import { Threadline } from "threadline-sdk"
const tl = new Threadline({ apiKey: process.env.THREADLINE_KEY })
// Before your AI call — inject user context
const { injectedPrompt, cacheHint } = await tl.inject(userId, "You are a helpful assistant.")
// After your AI call — update user context
await tl.update({ userId, userMessage, agentResponse })

That's it. Your agent now remembers every user, across every session.
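To make the flow concrete, here is one full conversation turn with the OpenAI Node SDK. A minimal sketch: the chat() wrapper, model name, and base prompt are placeholders, not part of Threadline's API.

```ts
import OpenAI from "openai"
import { Threadline } from "threadline-sdk"

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment
const tl = new Threadline({ apiKey: process.env.THREADLINE_KEY })

// Hypothetical handler: one conversation turn for one user.
async function chat(userId: string, userMessage: string): Promise<string> {
  // 1. Pull this user's stored context into the system prompt.
  const { injectedPrompt } = await tl.inject(userId, "You are a helpful assistant.")

  // 2. Call the model with the context-enriched system message.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: injectedPrompt },
      { role: "user", content: userMessage },
    ],
  })
  const agentResponse = completion.choices[0]?.message.content ?? ""

  // 3. Store whatever this turn revealed, ready for the next session.
  await tl.update({ userId, userMessage, agentResponse })

  return agentResponse
}
```

The same two calls bracket any provider call; swap the OpenAI client for Anthropic, Gemini, or Mistral and nothing else changes.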
Why Threadline?
Think of it like OAuth — but for context. You don't build auth from scratch. You shouldn't build memory from scratch either.
- One SDK, any LLM (OpenAI, Anthropic, Gemini, Mistral) or framework (Vercel AI SDK, LangChain)
- User-owned context — users can view and delete their data
- < 50ms context retrieval via Redis
- Privacy-first by design
API
tl.inject(userId, basePrompt)
Returns { injectedPrompt, cacheHint? }. Use injectedPrompt as your system message. When cacheHint.recommended is true, merge cacheHint.openaiParam into the request body (the extra_body escape hatch in the OpenAI Python SDK) on supported OpenAI models to opt into 24h prompt cache retention, which cuts cached input token cost by roughly 50%.
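A sketch of what that merge can look like over raw HTTP, assuming cacheHint.openaiParam is a flat object of extra request-body fields (its exact shape is an assumption here):

```ts
const { injectedPrompt, cacheHint } = await tl.inject(userId, "You are a helpful assistant.")

const body: Record<string, unknown> = {
  model: "gpt-4o",
  messages: [
    { role: "system", content: injectedPrompt },
    { role: "user", content: userMessage },
  ],
}

// Assumption: openaiParam is a flat object of extra body fields that the
// OpenAI API accepts alongside the normal chat-completion parameters.
if (cacheHint?.recommended) {
  Object.assign(body, cacheHint.openaiParam)
}

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body),
})
const completion = await res.json()
```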
tl.update({ userId, userMessage, agentResponse })
Extracts and stores context updates from a conversation turn.
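Since update() takes the finished agentResponse, streaming callers need to accumulate the chunks before calling it. A sketch, continuing the OpenAI setup from the Usage example:

```ts
const { injectedPrompt } = await tl.inject(userId, "You are a helpful assistant.")

const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: injectedPrompt },
    { role: "user", content: userMessage },
  ],
  stream: true,
})

// Accumulate the streamed tokens so update() sees the complete turn.
let agentResponse = ""
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? ""
  agentResponse += delta
  // ...forward delta to the client here...
}

await tl.update({ userId, userMessage, agentResponse })
```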
Get started
- Sign up at threadline.to
- Create an agent and get your API key
- Drop in the two lines above
