@firstflow/server

Node SDK for Firstflow Cloud: OpenAI + Claude instrumentation (PostHog $ai_generation via OpenTelemetry: @opentelemetry/instrumentation-openai + @traceloop/instrumentation-anthropic), conversation forwarding, browser JWT minting, and generic PostHog capture — all fire-and-forget on the LLM hot path.

Canonical analytics notes: see C:\Users\ASUS\.claude\plans\very-nice-so-let-robust-hammock.md (if present) and the v2 plan tidy-seeking-feather.md.

Environment variables

| Variable | Purpose |
|----------|---------|
| FIRSTFLOW_JWT_SECRET | Required for issueClientToken(). HS256 signing secret for browser JWTs. MUST be the same value as the cloud's FIRSTFLOW_JWT_SECRET. If this doesn't match, all browser tokens will be rejected by the realtime gateway. |
| FIRSTFLOW_POSTHOG_KEY | PostHog project API key (phc_... from Project settings — not a personal API key). With FIRSTFLOW_POSTHOG_HOST, enables OTel → PostHog LLM spans and posthog-node for track / identify. |
| FIRSTFLOW_POSTHOG_HOST | Regional ingest host, e.g. https://us.i.posthog.com or https://eu.i.posthog.com (must match your project region). |
| FIRSTFLOW_CAPTURE_LLM_CONTENT | true / false (default false). When false, prompts/completions are not sent to PostHog: the span redactor strips gen_ai.input.* / gen_ai.output.* (and legacy gen_ai.prompt* / gen_ai.completion*), and Anthropic's traceContent + OpenAI's captureMessageContent stay off — $ai_input / $ai_output in the UI stay empty by design. Set to true when you explicitly want message bodies in LLM analytics (PII risk). |
| FIRSTFLOW_OTEL_DISTINCT_ID | Optional. Sets the OpenTelemetry resource attribute posthog.distinct_id for LLM exports (PostHog's recommended hook). Defaults to firstflow:<workspaceId> from new Firstflow({ workspaceId }). |
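
The table above maps features to variables; a startup guard can restate those rules in code and surface misconfiguration early. A hypothetical sketch (checkFirstflowEnv is not part of this SDK, it only encodes the table):

// Hypothetical startup check — not exported by @firstflow/server.
function checkFirstflowEnv(): void {
  // Without FIRSTFLOW_JWT_SECRET, issueClientToken() cannot sign browser JWTs.
  if (!process.env.FIRSTFLOW_JWT_SECRET) {
    console.warn("FIRSTFLOW_JWT_SECRET unset: issueClientToken() will fail");
  }
  // LLM export needs both the project key and the matching regional host.
  const key = process.env.FIRSTFLOW_POSTHOG_KEY;
  const host = process.env.FIRSTFLOW_POSTHOG_HOST;
  if (Boolean(key) !== Boolean(host)) {
    console.warn("Set both FIRSTFLOW_POSTHOG_KEY and FIRSTFLOW_POSTHOG_HOST to enable PostHog LLM export");
  }
  // Content capture is opt-in; anything other than "true" keeps prompts redacted.
  if (process.env.FIRSTFLOW_CAPTURE_LLM_CONTENT === "true") {
    console.warn("FIRSTFLOW_CAPTURE_LLM_CONTENT=true: prompts/completions will reach PostHog (PII risk)");
  }
}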

Quick start

import OpenAI from "openai";
import { Firstflow } from "@firstflow/server";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
  // baseUrl optional — defaults to production `https://api.firstflow.app` (`DEFAULT_FIRSTFLOW_BASE_URL`).
  // jwtSecret — pass explicitly or rely on FIRSTFLOW_JWT_SECRET env var (must match cloud).
});

const ai = ff.wrap(new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }));

const res = await ai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
  firstflow: { userId: "user_123", conversationId: "conv_1" },
});

await ff.shutdown();

Browser token (same JSON shape as @firstflow/react FirstflowTokenResponse)

Requires the FIRSTFLOW_JWT_SECRET env var (or the jwtSecret constructor option) — must match the cloud's secret.

const tokenJson = await ff.issueClientToken({
  userId: "user_123",
  traits: { plan: "pro" },
});
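
The response already matches the JSON shape @firstflow/react expects, so the remaining wiring is an HTTP endpoint the browser can call. A hedged sketch using Express (the route path and framework are assumptions, not part of this SDK):

import express from "express";
import { Firstflow } from "@firstflow/server";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
});

const app = express();

// Hypothetical route: adapt the path and auth to your app. The JSON body is
// the FirstflowTokenResponse shape @firstflow/react expects.
app.post("/api/firstflow/token", async (_req, res) => {
  // Assumption: your auth layer supplies a trusted user id; never trust one
  // sent by the browser.
  const tokenJson = await ff.issueClientToken({
    userId: "user_123",
    traits: { plan: "pro" },
  });
  res.json(tokenJson);
});

app.listen(3000);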

Peer dependencies

  • openai >=4 (optional): install for OpenAI + PostHog $ai_generation (via @opentelemetry/instrumentation-openai).
  • @anthropic-ai/sdk >=0.36 (optional): install for Claude. wrap() accepts either peer (or both; see the sketch below). With FIRSTFLOW_POSTHOG_KEY / FIRSTFLOW_POSTHOG_HOST, Claude calls emit $ai_generation via Traceloop's Anthropic instrumentation (same redaction rules as OpenAI when FIRSTFLOW_CAPTURE_LLM_CONTENT is not true).
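
For example, a single process can wrap both clients with one Firstflow instance (a sketch assuming both peers are installed; both constructors read their usual API-key env vars):

import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import { Firstflow } from "@firstflow/server";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
});

// wrap() accepts either client; both share the same analytics pipeline.
const ai = ff.wrap(new OpenAI());
const claude = ff.wrap(new Anthropic());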

Claude (messages.create)

import Anthropic from "@anthropic-ai/sdk";
import { Firstflow } from "@firstflow/server";

const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
  // jwtSecret: process.env.FIRSTFLOW_JWT_SECRET!  // optional — falls back to env var
});
const claude = ff.wrap(new Anthropic());

await claude.messages.create({
  model: "claude-3-5-haiku-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
  firstflow: { userId: "user_123", conversationId: "conv_1" },
});
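
Streaming goes through the same wrapper; the stream smokes below check that a full stream produces exactly one $ai_generation. A minimal streaming sketch (assuming the wrapper passes the Anthropic stream through unchanged; smoke:anthropic-stream is the authoritative check for this path):

const stream = await claude.messages.create({
  model: "claude-3-5-haiku-20241022",
  max_tokens: 1024,
  stream: true,
  messages: [{ role: "user", content: "Hello" }],
  firstflow: { userId: "user_123", conversationId: "conv_1" },
});

// Print text deltas as they arrive; one $ai_generation is expected per
// completed stream.
for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}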

Sanitizer parity

src/sanitize.ts is a MIRROR of sdk/packages/react/src/analytics/sanitize.ts. Keep them in lockstep.
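
One cheap way to keep them in lockstep is a byte-for-byte parity test in CI. A hypothetical check, assuming it runs from the repository root:

import { readFileSync } from "node:fs";
import assert from "node:assert";

// Hypothetical parity check: both sanitizers must stay byte-identical.
const server = readFileSync("sdk/packages/server/src/sanitize.ts", "utf8");
const react = readFileSync("sdk/packages/react/src/analytics/sanitize.ts", "utf8");
assert.strictEqual(server, react, "sanitize.ts drifted between server and react packages");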

Manual PostHog smokes (operator)

From sdk/packages/server/ (set secrets in your shell; do not commit them):

$env:OPENAI_API_KEY = "sk-..."
$env:FIRSTFLOW_POSTHOG_KEY = "phc_..."
$env:FIRSTFLOW_POSTHOG_HOST = "https://us.i.posthog.com"
$env:FIRSTFLOW_CAPTURE_LLM_CONTENT = "false"   # then "true" to confirm prompt capture
npm run smoke:a
$env:FIRSTFLOW_CAPTURE_LLM_CONTENT = "true"
npm run smoke:a
npm run smoke:b

In PostHog → Live events, expect:

  • Smoke A — one $ai_generation event with distinct_id u_smoke, groups.workspace ws_smoke, and gen_ai.usage.total_tokens set. With capture off, prompt attributes are empty/redacted; with capture on, the user text is visible.
  • Smoke B — exactly one event for the full stream, with latency spanning the stream.

Anthropic operator smokes (Claude)

$env:ANTHROPIC_API_KEY = "sk-ant-..."
$env:FIRSTFLOW_POSTHOG_KEY = "phc_..."
$env:FIRSTFLOW_POSTHOG_HOST = "https://us.i.posthog.com"
# optional: $env:ANTHROPIC_MODEL = "claude-3-5-haiku-20241022"
npm run smoke:anthropic
npm run smoke:anthropic-stream

These prove wrap() + messages.create + firstflow stripping + optional observe forwarding. With PostHog env vars set, check Live events (filter $ai_generation) on the Anthropic path (one event per non-stream call, one per full stream).

PostHog: $ai_generation shows up but input / output messages are empty

By default FIRSTFLOW_CAPTURE_LLM_CONTENT is not true, so prompt/completion payloads are not shipped to PostHog (privacy default). Turn message bodies on explicitly:

$env:FIRSTFLOW_CAPTURE_LLM_CONTENT = "true"

Then re-run your smoke or API process. Anthropic (Traceloop) puts gen_ai.input.messages / gen_ai.output.messages on spans when this is on, which PostHog maps to $ai_input / $ai_output_choices. OpenAI (@opentelemetry/instrumentation-openai) still emits much of the text as GenAI log records correlated with the span rather than as those span attributes, so PostHog's UI may stay thinner for OpenAI than for Anthropic until their ingest maps those logs the same way.

You can also set upstream OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true (see OpenAI instrumentation README); @firstflow/server already passes captureMessageContent from FIRSTFLOW_CAPTURE_LLM_CONTENT.

PostHog: “No LLM traces yet” or empty LLM dashboard

  1. Server env names — LLM export uses FIRSTFLOW_POSTHOG_KEY and FIRSTFLOW_POSTHOG_HOST (project phc_... key + regional ingest, e.g. https://us.i.posthog.com). NEXT_PUBLIC_* from the browser app is not read by @firstflow/server. Anthropic/OpenAI smokes do not require PostHog env — the model reply can succeed while export stays off; watch for [@firstflow/server] PostHog LLM export: OFF in the console (smoke scripts print this).
  2. Same process — Set those variables in the same PowerShell / process that runs npm run smoke:* or your API server.
  3. Where to look first — Activity → Live events; filter event name $ai_generation. The AI / LLM product views can stay empty until events arrive or processing catches up; Live events is the quickest check.
  4. Flush — Short scripts must call await ff.shutdown() (the smokes already do) so the OpenTelemetry BatchSpanProcessor exports spans to PostHog’s OTLP endpoint (/i/v0/ai/otel) before exit.
  5. Import order — If @anthropic-ai/sdk is imported before new Firstflow(), the SDK now calls Traceloop's manuallyInstrument when the module was already in require.cache, so spans still emit; still, prefer constructing new Firstflow(...) (or a tiny bootstrap file) before other LLM imports when you can (see the sketch below).
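
Points 4 and 5 suggest a tiny bootstrap module that is imported first and flushed last. A sketch (file names and layout are illustrative):

// bootstrap.ts — construct Firstflow before any LLM SDK is imported so the
// OpenAI/Anthropic modules are patched on first load.
import { Firstflow } from "@firstflow/server";

export const ff = new Firstflow({
  apiKey: process.env.FIRSTFLOW_SERVER_SECRET!,
  workspaceId: "ws_acme",
});

// main.ts — import the bootstrap first, LLM SDKs after.
import { ff } from "./bootstrap";
import OpenAI from "openai";

const ai = ff.wrap(new OpenAI());

try {
  await ai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  });
} finally {
  // Flush the BatchSpanProcessor so short-lived scripts do not drop spans.
  await ff.shutdown();
}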

Smoke C (browser + real token SDK): see test_SDK. Set FIRSTFLOW_USE_REAL_SERVER_SDK=true, run npm run dev, open /widget-sandbox?analytics=on, and exercise NPS; confirm ff_widget_shown arrives with the expected distinct_id / groups.workspace.


Publish (you run this)

Run npm publish --tag alpha --access public for @firstflow/react and @firstflow/server, and only after smokes A–C pass on your machine. This agent does not publish to npm.