# @countly/ai-sdk-core
Shared transport, data model, cost calculation, and event builders for the Countly AI SDK.
This is the foundation package. You usually don't install it directly — every adapter pulls it in automatically:
- `@countly/ai-sdk-openai`
- `@countly/ai-sdk-anthropic`
- `@countly/ai-sdk-mastra`
- `@countly/ai-sdk-vercel`
- `@countly/ai-sdk-google-genai`
- `@countly/ai-sdk-langchain`
- `@countly/ai-sdk-cohere`
- `@countly/ai-sdk-llamaindex`
## What it provides

- Unified data model (`RawExtractionResult`) — OpenTelemetry GenAI-aligned schema produced by every adapter
- HTTP transport — buffered `POST /i` with per-event `device_id`, retry with exponential backoff, dead-letter buffer (max 1000), `429 Retry-After` support
- Cost calculation — built-in pricing for 30+ models (OpenAI, Anthropic, Google, Cohere) with per-model override
- Event builders — `[CLY]_llm_interaction`, `[CLY]_llm_tool_used`, `[CLY]_llm_tool_usage_parameter`, `[CLY]_llm_interaction_feedback`
- Finish reason normalization — maps provider-specific values to the canonical set `stop | length | tool_calls | content_filter | error | other` (see the sketch below)
- Error categorization — `rate_limit | context_length | content_filter | timeout | auth_error | api_error`
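To make the normalization concrete, here is a rough sketch of the kind of lookup it implies. The canonical values are the ones listed above; the provider-specific inputs are sample finish/stop reasons from OpenAI and Anthropic, and the helper itself is illustrative, not an export of this package:

```ts
type CanonicalFinishReason =
  | "stop" | "length" | "tool_calls" | "content_filter" | "error" | "other";

// Sample provider values: OpenAI reports "stop" / "length" / "tool_calls",
// Anthropic reports "end_turn" / "max_tokens" / "tool_use".
const FINISH_REASON_MAP: Record<string, CanonicalFinishReason> = {
  stop: "stop",
  end_turn: "stop",
  length: "length",
  max_tokens: "length",
  tool_calls: "tool_calls",
  tool_use: "tool_calls",
  content_filter: "content_filter",
};

// Anything unrecognized falls back to "other".
function normalizeFinishReason(raw: string): CanonicalFinishReason {
  return FINISH_REASON_MAP[raw] ?? "other";
}
```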
## Direct usage
If you're building a custom integration not covered by an existing adapter:
```ts
import {
  resolveConfig,
  createTransport,
  buildAllEvents,
  generatePromptId,
} from "@countly/ai-sdk-core";

const config = resolveConfig({
  appKey: "YOUR_APP_KEY",
  url: "https://your-countly-server.com",
});

// Buffered transport that batches events to POST /i.
const transport = createTransport(config);

// A RawExtractionResult describing one LLM call.
const raw = {
  provider: "my-llm",
  model: "my-model",
  usage_input: 100,
  usage_output: 50,
  usage_total: 150,
  latency_total: 1200,
  status: "success" as const,
};

// Build the [CLY]_llm_* events and enqueue them for delivery.
const events = buildAllEvents(raw, config, {
  prompt_id: generatePromptId(),
  sdk_adapter: "custom",
});
transport.enqueue(events);
```

## Feedback events
Feedback is not auto-collected — wire it from your UI's thumbs-up/down handler:
```ts
import { buildFeedbackEvent, createTransport, resolveConfig } from "@countly/ai-sdk-core";

const config = resolveConfig({ appKey: "...", url: "..." });
const transport = createTransport(config);

function onFeedback(promptId: string, rating: string, userId: string) {
  // Link the feedback to the original interaction via its prompt_id.
  const event = buildFeedbackEvent({ prompt_id: promptId, rating });
  // Attribute the event to the user who clicked.
  event.deviceId = userId;
  transport.enqueue([event]);
}
```

The `prompt_id` links back to the `[CLY]_llm_interaction` event. Store it in your chat UI and read it back when the user clicks feedback.
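Putting the two snippets together, the round trip looks roughly like this; `answer`, `chatState`, and `currentUserId` are placeholders for your own UI code:

```ts
// Generate the id once and reuse it for every event about this interaction.
const promptId = generatePromptId();

transport.enqueue(
  buildAllEvents(raw, config, { prompt_id: promptId, sdk_adapter: "custom" })
);

// Keep the id next to the rendered message in your UI state (placeholder).
chatState.messages.push({ text: answer, promptId });

// Later, the feedback handler defined above reads it back. The rating value
// ("up") is just an example string from your own UI.
onFeedback(promptId, "up", currentUserId);
```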
## Full documentation
See the Countly AI SDK repository for the unified data model, observability levels (0/1/2), per-user attribution via AsyncLocalStorage, cost calculation, privacy controls, and Countly plugin integration (Drill, Funnels, Cohorts, APM, Crash Analytics).
## License
MIT
