# @firstflow/server
Node SDK for Firstflow Cloud: OpenAI + Claude instrumentation (PostHog `$ai_generation` via OpenTelemetry: `@opentelemetry/instrumentation-openai` + `@traceloop/instrumentation-anthropic`), conversation forwarding, browser JWT minting, and generic PostHog capture — all fire-and-forget on the LLM hot path.
Canonical analytics notes: see `C:\Users\ASUS\.claude\plans\very-nice-so-let-robust-hammock.md` (if present) and the v2 plan `tidy-seeking-feather.md`.
## Environment variables
| Variable | Purpose |
|----------|---------|
| `FIRSTFLOW_JWT_SECRET` | Required for `issueClientToken()`. HS256 signing secret for browser JWTs. MUST be the same value as the cloud’s `FIRSTFLOW_JWT_SECRET`. If this doesn’t match, all browser tokens will be rejected by the realtime gateway. |
| `FIRSTFLOW_POSTHOG_KEY` | PostHog project API key (`phc_...` from Project settings — not a personal API key). With `FIRSTFLOW_POSTHOG_HOST`, enables OTel → PostHog LLM spans and `posthog-node` for track / identify. |
| `FIRSTFLOW_POSTHOG_HOST` | Regional ingest host, e.g. `https://us.i.posthog.com` or `https://eu.i.posthog.com` (must match your project region). |
| `FIRSTFLOW_CAPTURE_LLM_CONTENT` | `true` / `false` (default `false`). When `false`, prompts/completions are not sent to PostHog: the span redactor strips `gen_ai.input.*` / `gen_ai.output.*` (and legacy `gen_ai.prompt*` / `gen_ai.completion*`), and Anthropic’s `traceContent` + OpenAI’s `captureMessageContent` stay off — `$ai_input` / `$ai_output` in the UI stay empty by design. Set to `true` when you explicitly want message bodies in LLM analytics (PII risk). |
| `FIRSTFLOW_OTEL_DISTINCT_ID` | Optional. Sets OpenTelemetry resource attribute `posthog.distinct_id` for LLM exports (PostHog’s recommended hook). Defaults to `firstflow:<workspaceId>` from `new Firstflow({ workspaceId })`. |
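
For intuition, the redaction described for `FIRSTFLOW_CAPTURE_LLM_CONTENT=false` amounts to dropping every content-bearing GenAI attribute before export. A minimal sketch of that rule (hypothetical helper name; the real redactor lives inside the SDK):

```ts
import type { Attributes } from "@opentelemetry/api";

// Sketch only — illustrates the redaction rule, not the SDK's implementation.
// Drops prompt/completion payloads while keeping metadata such as model name
// and gen_ai.usage.* token counts.
const CONTENT_PREFIXES = [
  "gen_ai.input.",
  "gen_ai.output.",
  "gen_ai.prompt",     // legacy attribute family
  "gen_ai.completion", // legacy attribute family
];

function redactGenAiContent(attributes: Attributes): Attributes {
  const out: Attributes = {};
  for (const [key, value] of Object.entries(attributes)) {
    if (CONTENT_PREFIXES.some((prefix) => key.startsWith(prefix))) continue;
    out[key] = value; // e.g. gen_ai.usage.total_tokens survives
  }
  return out;
}
```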
## Quick start
```ts
import OpenAI from "openai";
import { Firstflow } from "@firstflow/server";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
  // baseUrl optional — defaults to production `https://api.firstflow.app` (`DEFAULT_FIRSTFLOW_BASE_URL`).
  // jwtSecret — pass explicitly or rely on FIRSTFLOW_JWT_SECRET env var (must match cloud).
});

const ai = ff.wrap(new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }));

const res = await ai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
  firstflow: { userId: "user_123", conversationId: "conv_1" },
});

await ff.shutdown();
```
## Browser token (same JSON shape as `@firstflow/react` `FirstflowTokenResponse`)

Requires the `FIRSTFLOW_JWT_SECRET` env var (or the `jwtSecret` constructor option) — must match the cloud’s secret.
```ts
const tokenJson = await ff.issueClientToken({
  userId: "user_123",
  traits: { plan: "pro" },
});
```
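
In practice you serve this from an authenticated endpoint your browser app can call. A sketch assuming Express; the route path and the hard-coded `userId` are illustrative only:

```ts
import express from "express";
import { Firstflow } from "@firstflow/server";

const app = express();
const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
});

// Hypothetical route — gate it behind your own auth/session middleware
// and resolve the real user id there.
app.post("/api/firstflow/token", async (_req, res) => {
  const tokenJson = await ff.issueClientToken({
    userId: "user_123", // replace with the authenticated user's id
    traits: { plan: "pro" },
  });
  res.json(tokenJson); // same JSON shape @firstflow/react's FirstflowTokenResponse expects
});

app.listen(3000);
```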
## Peer dependencies

- `openai` >= 4 — optional; install for OpenAI + PostHog `$ai_generation` (via `@opentelemetry/instrumentation-openai`).
- `@anthropic-ai/sdk` >= 0.36 — optional; install for Claude.

`wrap()` accepts either peer (or both). With `FIRSTFLOW_POSTHOG_KEY` / `FIRSTFLOW_POSTHOG_HOST`, Claude calls emit `$ai_generation` via Traceloop’s Anthropic instrumentation (same redaction rules as OpenAI when `FIRSTFLOW_CAPTURE_LLM_CONTENT` is not `true`).
## Claude (`messages.create`)
```ts
import Anthropic from "@anthropic-ai/sdk";
import { Firstflow } from "@firstflow/server";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
  // jwtSecret: process.env.FIRSTFLOW_JWT_SECRET! // optional — falls back to env var
});

const claude = ff.wrap(new Anthropic());

await claude.messages.create({
  model: "claude-3-5-haiku-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
  firstflow: { userId: "user_123", conversationId: "conv_1" },
});
```
## Sanitizer parity

`src/sanitize.ts` is a MIRROR of `sdk/packages/react/src/analytics/sanitize.ts`. Keep them in lockstep.
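
One cheap way to keep them in lockstep is a fixture test that runs both copies over identical inputs. A sketch assuming vitest and a shared `sanitize` export (the export name and relative paths are assumptions; adjust to the real modules):

```ts
import { describe, expect, it } from "vitest";
// Assumed export name and paths — verify against the real files.
import { sanitize as serverSanitize } from "../src/sanitize";
import { sanitize as reactSanitize } from "../../react/src/analytics/sanitize";

describe("sanitizer parity", () => {
  it("matches the react copy on shared fixtures", () => {
    const fixtures = [
      { plain: "hello", n: 42 },
      { nested: { deep: { value: "x" } }, list: [1, "two"] },
    ];
    for (const input of fixtures) {
      expect(serverSanitize(input)).toEqual(reactSanitize(input));
    }
  });
});
```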
## Manual PostHog smokes (operator)

From `sdk/packages/server/` (set secrets in your shell; do not commit them):
```powershell
$env:OPENAI_API_KEY = "sk-..."
$env:FIRSTFLOW_POSTHOG_KEY = "phc_..."
$env:FIRSTFLOW_POSTHOG_HOST = "https://us.i.posthog.com"
$env:FIRSTFLOW_CAPTURE_LLM_CONTENT = "false"   # then "true" to confirm prompt capture
npm run smoke:a

$env:FIRSTFLOW_CAPTURE_LLM_CONTENT = "true"
npm run smoke:a

npm run smoke:b
```

In PostHog → Live events: Smoke A — one `$ai_generation`, `distinct_id` `u_smoke`, `groups.workspace` `ws_smoke`, `gen_ai.usage.total_tokens` set; with capture off, prompt attributes empty/redacted; with capture on, user text visible. Smoke B — exactly one event for the full stream, latency spans the stream.
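
For reference, smoke B exercises a streamed call through the wrapped client. A sketch of what that looks like, reusing `ai` and `ff` from the quick start and assuming the wrapper preserves OpenAI's async-iterator streaming contract:

```ts
// One streamed completion should yield exactly one $ai_generation event
// whose latency covers the whole stream.
const stream = await ai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Stream a short poem" }],
  stream: true,
  firstflow: { userId: "u_smoke", conversationId: "conv_smoke" },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

await ff.shutdown(); // flush the span before the script exits
```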
## Anthropic operator smokes (Claude)
```powershell
$env:ANTHROPIC_API_KEY = "sk-ant-..."
$env:FIRSTFLOW_POSTHOG_KEY = "phc_..."
$env:FIRSTFLOW_POSTHOG_HOST = "https://us.i.posthog.com"
# optional: $env:ANTHROPIC_MODEL = "claude-3-5-haiku-20241022"
npm run smoke:anthropic
npm run smoke:anthropic-stream
```

These prove `wrap()` + `messages.create` + `firstflow` stripping + optional observe forwarding. With PostHog env vars set, check Live events (filter `$ai_generation`) on the Anthropic path (one event per non-stream call, one per full stream).
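
The streamed variant is stock Anthropic streaming through the wrapped client. A sketch, reusing `claude` from the section above and assuming the wrapper passes `stream: true` through unchanged:

```ts
// A single streamed Claude call should still emit exactly one
// $ai_generation event covering the full stream.
const stream = await claude.messages.create({
  model: "claude-3-5-haiku-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Stream a short poem" }],
  stream: true,
  firstflow: { userId: "user_123", conversationId: "conv_1" },
});

for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
```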
## PostHog: `$ai_generation` shows up but input / output messages are empty

By default `FIRSTFLOW_CAPTURE_LLM_CONTENT` is not `true`, so we do not ship prompt/completion payloads to PostHog (privacy). Turn bodies on explicitly:
```powershell
$env:FIRSTFLOW_CAPTURE_LLM_CONTENT = "true"
```

Then re-run your smoke or API process. Anthropic (Traceloop) puts `gen_ai.input.messages` / `gen_ai.output.messages` on spans when this is on, which PostHog maps to `$ai_input` / `$ai_output_choices`. OpenAI (`@opentelemetry/instrumentation-openai`) still emits much of the text as GenAI log records on the span rather than those attribute keys; PostHog’s UI may stay thinner for OpenAI than for Anthropic until their ingest maps those logs the same way.
You can also set the upstream `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true` (see the OpenAI instrumentation README); `@firstflow/server` already passes `captureMessageContent` from `FIRSTFLOW_CAPTURE_LLM_CONTENT`.
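
If you would rather toggle this in code than in the shell, set the variable before constructing the client. A sketch; it assumes the SDK reads the variable when instrumentation initializes, so it must be set before `new Firstflow()`:

```ts
import { Firstflow } from "@firstflow/server";

// Only enable where shipping message bodies to PostHog is acceptable (PII risk).
process.env.FIRSTFLOW_CAPTURE_LLM_CONTENT = "true";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
});
```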
## PostHog: “No LLM traces yet” or empty LLM dashboard
- Server env names — LLM export uses `FIRSTFLOW_POSTHOG_KEY` and `FIRSTFLOW_POSTHOG_HOST` (project `phc_...` key + regional ingest, e.g. `https://us.i.posthog.com`). `NEXT_PUBLIC_*` from the browser app is not read by `@firstflow/server`. Anthropic/OpenAI smokes do not require PostHog env — the model reply can succeed while export stays off; watch for `[@firstflow/server] PostHog LLM export: OFF` in the console (smoke scripts print this).
- Same process — Set those variables in the same PowerShell / process that runs `npm run smoke:*` or your API server.
- Where to look first — Activity → Live events, filtered to event name `$ai_generation`. The AI / LLM product views can stay empty until events arrive or processing catches up; Live events is the quickest check.
- Flush — Short scripts must call `await ff.shutdown()` (the smokes already do) so the OpenTelemetry `BatchSpanProcessor` exports spans to PostHog’s OTLP endpoint (`/i/v0/ai/otel`) before exit.
- Import order — If `@anthropic-ai/sdk` is imported before `new Firstflow()`, the SDK now calls Traceloop’s `manuallyInstrument` when the module was already in `require.cache`, so spans still emit; prefer `new Firstflow(...)` (or a tiny bootstrap file) before other LLM imports when you can — see the bootstrap sketch below.
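
A minimal sketch of that bootstrap pattern (file names and the SIGTERM hook are illustrative, not prescribed by the SDK):

```ts
// instrument.ts — construct Firstflow before any LLM SDK is imported,
// so instrumentation can patch the modules on first require.
import { Firstflow } from "@firstflow/server";

export const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
});

// Flush buffered spans on shutdown so nothing is lost on exit.
process.on("SIGTERM", () => {
  void ff.shutdown().finally(() => process.exit(0));
});
```

```ts
// server.ts — import the bootstrap first, then the LLM SDKs.
import { ff } from "./instrument";
import Anthropic from "@anthropic-ai/sdk";

const claude = ff.wrap(new Anthropic());
```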
## Smoke C (browser + real token SDK)

See `test_SDK` — set `FIRSTFLOW_USE_REAL_SERVER_SDK=true`, run `npm run dev`, open `/widget-sandbox?analytics=on`, exercise the NPS flow; confirm `ff_widget_shown` arrives with the expected `distinct_id` / `groups.workspace`.
## Publish (you run this)

Run `npm publish --tag alpha --access public` for `@firstflow/react` and `@firstflow/server` only after smokes A–C pass on your machine. This agent does not publish to npm.
