@langinsight/ai-sdk
LangInsight callback handler for the Vercel AI SDK. Sends input/output traces to LangInsight in parallel and supports feedback (thumbs up/down) via `score()`.
Usage
generateText
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { LangInsight } from "@langinsight/ai-sdk";

const { LANGINSIGHT_ENDPOINT, LANGINSIGHT_API_KEY } = process.env;

const handler = new LangInsight.CallbackHandler({
  apiKey: LANGINSIGHT_API_KEY,
  endpoint: LANGINSIGHT_ENDPOINT,
  metadata: { userId: "admin", sessionId: "admin" },
  onSuccess: (input, output) => {
    // Retrieve the traceId and persist it to your DB here
    console.log("Trace sent:", output.id);
  },
});

const prompt = "Hello!";
const result = await generateText({
  model: openai("gpt-4o"),
  prompt,
});

await handler.report(result, { input: prompt });
```

streamText
```ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { LangInsight } from "@langinsight/ai-sdk";

const { LANGINSIGHT_ENDPOINT, LANGINSIGHT_API_KEY } = process.env;

const handler = new LangInsight.CallbackHandler({
  apiKey: LANGINSIGHT_API_KEY,
  endpoint: LANGINSIGHT_ENDPOINT,
  metadata: { userId: "admin", sessionId: "admin" },
  onSuccess: (input, output) => console.log("Trace sent:", output.id),
  onFailure: (err) => console.error("Trace failed:", err),
});

const prompt = "Hello!";
const result = streamText({
  model: openai("gpt-4o"),
  prompt,
  onFinish: handler.onFinish({ input: prompt }),
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

Conversation loop (generating dummy traces with Ollama)
An example in which two Ollama models converse with each other repeatedly, sending each turn's input/output to the Core API (trace) via the plugin.
```sh
cd plugins/typescript/ai-sdk
LANGINSIGHT_ENDPOINT=http://localhost:3000 LANGINSIGHT_API_KEY=xxx bun run examples/conversation-loop.ts
# Optional: TURNS=5 (default 5), SEED_PROMPT=<first user message>
```

On each turn, `handler.report(result)` is called and the trace is sent to the Core API.
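The core of such a loop looks roughly like the sketch below. This is a minimal, simplified sketch rather than the actual `examples/conversation-loop.ts`: the real script runs two Ollama models against each other, whereas here a single OpenAI model and illustrative metadata values stand in for brevity.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // stand-in model; the real example talks to Ollama
import { LangInsight } from "@langinsight/ai-sdk";

const { LANGINSIGHT_ENDPOINT, LANGINSIGHT_API_KEY, TURNS = "5", SEED_PROMPT = "Hello!" } = process.env;

const handler = new LangInsight.CallbackHandler({
  apiKey: LANGINSIGHT_API_KEY,
  endpoint: LANGINSIGHT_ENDPOINT,
  // Illustrative metadata values
  metadata: { userId: "example-user", sessionId: "conversation-loop" },
});

let message = SEED_PROMPT;
for (let turn = 0; turn < Number(TURNS); turn++) {
  // Generate the next reply for this turn
  const result = await generateText({ model: openai("gpt-4o"), prompt: message });
  // Send this turn's input/output as a trace to the Core API
  await handler.report(result, { input: message });
  // The reply becomes the next turn's input
  message = result.text;
}
```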
Feedback (score)
Get the trace ID from `handler.result` and pass its `output.id` to `score()` to submit feedback. Persisting it to a DB and similar work can also be done in the `onSuccess` callback.
```ts
const handler = new LangInsight.CallbackHandler({
  ...options,
});

// After report() or onFinish has completed
const { input, output } = await handler.result;

await handler.score({ traceId: output.id, value: 1 }); // 👍
await handler.score({ traceId: output.id, value: -1 }); // 👎
```

Options
| Option             | Type                                  | Required | Description |
| ------------------ | ------------------------------------- | -------- | ----------- |
| apiKey             | string                                | Yes      | LangInsight API key |
| endpoint           | string                                | Yes      | LangInsight API endpoint (e.g. https://api.langinsight.example.com) |
| metadata           | object                                | Yes      | Metadata attached to traces |
| metadata.userId    | string                                | Yes      | User ID |
| metadata.sessionId | string                                | Yes      | Session ID (e.g. conversation ID) |
| metadata.modelName | string                                | No       | Model name (defaults to the value from the result's response when not set) |
| metadata.*         | any                                   | No       | Additional key-value pairs |
| onSuccess          | (input: Trace, output: Trace) => void | No       | Callback invoked with both input and output traces when trace submission succeeds. Use output.id to get the traceId and persist it to a DB. |
| onFailure          | (error: Error) => void                | No       | Callback invoked with the error when trace submission fails |
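As a sketch of the onSuccess/onFailure hooks in practice: `saveTraceId` below is a hypothetical persistence helper (not part of this package), and the metadata values are illustrative.

```ts
import { LangInsight } from "@langinsight/ai-sdk";

// Hypothetical helper standing in for your own DB layer
declare function saveTraceId(sessionId: string, traceId: string): Promise<void>;

const handler = new LangInsight.CallbackHandler({
  apiKey: process.env.LANGINSIGHT_API_KEY,
  endpoint: process.env.LANGINSIGHT_ENDPOINT,
  // userId/sessionId are required; extra keys (e.g. appVersion) ride along as metadata.*
  metadata: { userId: "user-123", sessionId: "conv-456", appVersion: "1.0.0" },
  onSuccess: (input, output) => {
    // output.id is the traceId you can later pass to score()
    void saveTraceId("conv-456", output.id);
  },
  onFailure: (err) => {
    console.error("Failed to send trace to LangInsight:", err);
  },
});
```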
API
- `report(result, { input })`: Sends traces for a `generateText` result. Both input and output traces are passed to `onSuccess` on success.
- `onFinish({ input })`: Returns a function suitable for `streamText`'s `onFinish`. Sends traces when the stream finishes; both traces are passed to `onSuccess`.
- `score({ traceId, value })`: Submits feedback (1 = thumbs up, -1 = thumbs down) for the given trace. Use `(await handler.result).output.id` as `traceId`.
- `result`: A promise that resolves with `{ input: Trace, output: Trace }` when traces are sent successfully.
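Combining `onFinish`, `result`, and `score()` in a single streaming flow might look like the sketch below. It uses only the calls documented above; the metadata values and the thumbs-up value are illustrative.

```ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { LangInsight } from "@langinsight/ai-sdk";

const handler = new LangInsight.CallbackHandler({
  apiKey: process.env.LANGINSIGHT_API_KEY,
  endpoint: process.env.LANGINSIGHT_ENDPOINT,
  metadata: { userId: "user-123", sessionId: "conv-456" },
});

const prompt = "Hello!";
const result = streamText({
  model: openai("gpt-4o"),
  prompt,
  // Traces are sent when the stream finishes
  onFinish: handler.onFinish({ input: prompt }),
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Wait for both traces, then attach user feedback to the output trace
const { output } = await handler.result;
await handler.score({ traceId: output.id, value: 1 }); // thumbs up
```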
