# emberlm

v0.1.1
Official Node.js SDK for EmberLM. Run versioned prompts from your production code, evaluate LLM responses, and ship AI with confidence.
## Install
```sh
npm install emberlm
```

Requires Node.js 18+ (for native fetch).
## Quick start
Get an API key from Settings → API Keys in the dashboard. Keys start with `pk_live_`.
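Hardcoding a key is fine for experimentation, but in production you will typically read it from the environment. A minimal guard catches misconfigured keys early; the `EMBERLM_API_KEY` variable name is an illustrative convention, not something the SDK reads itself:

```js
// Read the key from the environment and sanity-check its shape.
// EMBERLM_API_KEY is an illustrative variable name, not one the SDK looks up.
function loadApiKey(env = process.env) {
  const key = env.EMBERLM_API_KEY;
  if (!key || !key.startsWith("pk_live_")) {
    throw new Error("EMBERLM_API_KEY is missing or does not start with pk_live_");
  }
  return key;
}
```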
```js
import { Client } from "emberlm";

const client = new Client({ apiKey: "pk_live_..." });

const result = await client.run("summarize_docs", {
  variables: { document: docText },
});

console.log(result.text);
console.log("passed_evals:", result.passed_evals);
console.log("confidence:", result.confidence);
console.log("cost_usd:", result.cost_usd);
console.log("latency_ms:", result.latency_ms);
```

## API
### `new Client(options)`
```js
new Client({
  apiKey: "pk_live_...",
  baseUrl: "https://emberlm.dev", // optional
  timeoutMs: 60_000,              // optional, defaults to 60s
  fetch: globalThis.fetch,        // optional, pass your own
});
```

### `client.listPrompts()`
Returns every prompt in the API key's workspace.
```js
const { prompts } = await client.listPrompts();
// prompts: Array<{ id, name, description, model, current_version, tags }>
```

### `client.getPrompt(name)`
Fetch a single prompt by name. Returns the prod-tagged version if one exists,
otherwise the latest.
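The prod-else-latest resolution can be pictured as a pure function over a prompt's version list. The `{ version, tag }` shape here is illustrative, not the SDK's internal representation:

```js
// Pick the version tagged "prod" if one exists, otherwise the highest version number.
// The { version, tag } shape is illustrative, not the SDK's internal type.
function resolveVersion(versions) {
  const prod = versions.find((v) => v.tag === "prod");
  if (prod) return prod;
  return versions.reduce((a, b) => (b.version > a.version ? b : a));
}
```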
```js
const prompt = await client.getPrompt("summarize_docs");
// { id, name, version, tag, system_prompt, user_prompt, model, ... }
```

### `client.run(promptName, options?)`
Run a saved prompt. Variables are substituted into `{{placeholders}}`. All workspace eval rules are applied to the output. The run is persisted and counts toward the workspace's analytics.
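The substitution works like simple mustache-style interpolation. A sketch of the semantics (the real substitution happens server-side when the SDK runs the prompt; `render` is a hypothetical helper, and the SDK's handling of unknown placeholders may differ):

```js
// Replace each {{name}} in the template with the matching variable.
// Unknown placeholders are left intact in this sketch.
function render(template, variables) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? String(variables[name]) : match
  );
}
```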
```js
const result = await client.run("classify_ticket", {
  variables: { body: ticket.body },
  model: "claude-haiku-4-5", // optional override
});
// result:
// {
//   run_id, prompt, version, model,
//   text,
//   input_tokens, output_tokens, total_tokens,
//   cost_usd, latency_ms,
//   passed_evals, confidence, evals: [...],
//   error,
// }
```

### `client.eval(options)`
Evaluate an arbitrary response against the workspace's active eval rules. Useful when the response was generated by a different SDK or model.
```js
const { passed, confidence, results } = await client.eval({
  response: llmResponse,
  prompt: userPrompt,                  // optional
  variables: { name: "Jane" },         // optional
  ruleIds: ["rule-id-1", "rule-id-2"], // optional, defaults to all active
});
```

## Errors
Any non-2xx response throws an `EmberLMError`:
```js
import { Client, EmberLMError } from "emberlm";

try {
  await client.run("missing_prompt");
} catch (err) {
  if (err instanceof EmberLMError) {
    console.error(err.status, err.message);
  }
}
```

| Status | Meaning |
|---|---|
| 401 | Missing or invalid API key |
| 403 | API key's plan does not permit the SDK |
| 404 | Prompt not found in the workspace |
| 429 | Monthly call limit reached, or rate limited |
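Since 429 covers both the monthly cap and rate limiting, transient 429s are often worth retrying with backoff, while other statuses should surface immediately. A sketch of such a wrapper (the `status` field comes from the error example above; the retry policy itself is ours, not part of the SDK):

```js
// Retry an async call when it fails with HTTP 429, backing off exponentially.
// `fn` is any async function, e.g. () => client.run("classify_ticket", { ... }).
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (err.status !== 429 || attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```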
## Rate limits
100 requests / minute per API key.
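To stay under the limit from the client side, calls can be gated through a small sliding-window counter. This limiter is an illustration, not part of the SDK:

```js
// Sliding-window limiter: allows at most `limit` calls per `windowMs`.
class RateLimiter {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a call may proceed right now, recording it if so.
  tryAcquire(now = Date.now()) {
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```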
## License
MIT
