
@juristechllc/fireworks-serialized-provider

v0.1.1


@juristechllc/fireworks-serialized-provider (P0c)

Otto-owned provider that wraps @ai-sdk/openai-compatible and enforces strict tool-use serialization on the LanguageModelV3 stream:

At most one committed tool-call is allowed per upstream stream. If the model emits a second concurrent tool-input-start or a distinct second tool-call, the transform suppresses the extras and emits a synthetic finish part with finishReason: "tool-calls", closing the stream so the OpenCode agent loop must execute the first tool, append the tool-result, and re-issue the completion before any further tool can fire.
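The clipping behavior can be summarized in a short sketch. The part type names ("tool-input-start", "tool-call", "finish") and the finishReason value come straight from the description above; the field names (id, toolCallId) and the loose Part type are assumptions and may not match the exact LanguageModelV3 part shapes.

// Minimal sketch of the clipping transform, not the package's actual code.
type Part = { type: string; [key: string]: unknown }; // loose stand-in for stream parts

export function createClippingTransform(): TransformStream<Part, Part> {
  let committedId: string | null = null; // id of the first tool-input/tool-call
  let closed = false;                    // true once the synthetic finish is emitted

  return new TransformStream<Part, Part>({
    transform(part, controller) {
      if (closed) return; // drop everything after the synthetic finish

      if (part.type === "tool-input-start" || part.type === "tool-call") {
        const id = (part.toolCallId ?? part.id) as string;
        if (committedId === null) {
          committedId = id; // first tool: let it through
        } else if (id !== committedId) {
          // Second concurrent tool input or distinct second tool-call:
          // suppress it and emit a synthetic finish so the agent loop has to
          // execute the first tool before any further tool can fire.
          controller.enqueue({ type: "finish", finishReason: "tool-calls" });
          closed = true;
          return;
        }
      }

      controller.enqueue(part); // text, the first tool's input deltas, usage, etc.
    },
  });
}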

Why this exists

From session ses_262cf3fb8ffeIPcZkaGvrgld2Z (Kimi K2.5 on Fireworks.ai):

  • 170 / 196 (87 %) tool calls were issued while a prior call was still unresolved.
  • 99 % of ai.response.started events began while a previous tool call was pending.
  • 85 tool results were orphaned — they arrived after the model had already moved on to subsequent decisions.

This is a fire-and-forget control-flow bug at the model/provider interface. The Anthropic SDK enforces tool-use/tool-result ordering natively; OpenAI-compatible adapters (Fireworks) do not. This wrapper imposes the same invariant at the provider layer, uniformly for every model served through Fireworks' OpenAI-compatible endpoint — so the fix is model-agnostic.
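To make the invariant concrete (and to show how numbers like the ones above can be measured), a transcript check could walk the tool events and count calls issued while an earlier call still has no matching result. The event shape below is illustrative, not OpenCode's actual session schema.

// Illustrative only: count "fire-and-forget" violations in a flattened
// transcript of tool events. The event shape is hypothetical.
type ToolEvent =
  | { kind: "tool-call"; id: string }
  | { kind: "tool-result"; id: string };

function countUnserializedCalls(events: ToolEvent[]): number {
  const pending = new Set<string>(); // tool calls with no result yet
  let violations = 0;
  for (const ev of events) {
    if (ev.kind === "tool-call") {
      if (pending.size > 0) violations += 1; // issued while a prior call is unresolved
      pending.add(ev.id);
    } else {
      pending.delete(ev.id);
    }
  }
  return violations;
}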

Design constraints honored

  • Does not modify upstream OpenCode. Keeps the engine updatable from the open-source project, per the Otto-OpenWork architecture rule.
  • Does not touch the otto-chrome bridge, extension, cookies, or the 5-minute kill switch. Browser-control routing is unchanged.
  • Does not live in the UI. Serialization runs inside the OpenCode engine process, where it belongs. The Solid UI and human interaction are untouched.
  • Feature-reversible. Reverting the "npm" field in opencode.jsonc back to "@ai-sdk/openai-compatible" restores pre-P0c behavior.
  • Pure and testable. The serialization transform is a pure function of the chunk sequence. Tests run against mock LanguageModelV3 streams with no network access and no mocking of the base SDK beyond a small module-loader hook.

Open-source references

  • Vercel AI SDK stopWhen / stepCountIs commit vercel/ai@9315076 — documents the "stop after N tool calls" pattern at the agent loop. This package enforces it one level lower (provider stream), so it applies uniformly to every agent built on OpenCode.
  • sst/opencode#2720 — "tool_use blocks found without corresponding tool_result blocks": same failure mode this wrapper prevents.
  • cline/cline#9092 — the matching failure on a different agent.
  • BerriAI/litellm#23507 — normalizing duplicate/concurrent tool_use emissions from Kimi K2 on non-Anthropic endpoints.
  • browser-use#4651 / PR #4654 — perception-side retry of the same class of bug.

Installation

This package is published on the public npm registry. OpenCode's provider loader installs it automatically when a workspace's opencode.jsonc references it via the "npm" field:

{
  "provider": {
    "fireworks-ai": {
      "npm": "@juristechllc/fireworks-serialized-provider",
      "options": {
        "baseURL": "https://api.fireworks.ai/inference/v1"
      },
      "models": { "accounts/fireworks/models/kimi-k2p5": {} }
    }
  }
}

Otto-OpenWork's workspace-init step (apps/server/src/workspace-init.ts) performs this rewrite automatically on every workspace open (a follow-up PR to this package), so manual installation is not required.
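At its core the rewrite just points the provider's "npm" field at this package. A rough sketch of that step follows; it is not the actual workspace-init.ts code, and since opencode.jsonc is JSONC a real implementation needs a comment-tolerant parser rather than plain JSON.parse.

// Hypothetical sketch of the workspace-init rewrite. Assumes the config has
// already been parsed into an object.
interface OpencodeConfig {
  provider?: Record<string, { npm?: string; [key: string]: unknown }>;
}

function pointFireworksAtSerializedProvider(config: OpencodeConfig): OpencodeConfig {
  const fireworks = config.provider?.["fireworks-ai"];
  if (!fireworks) return config; // nothing to rewrite
  return {
    ...config,
    provider: {
      ...config.provider,
      "fireworks-ai": {
        ...fireworks,
        npm: "@juristechllc/fireworks-serialized-provider",
      },
    },
  };
}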

For ad-hoc (non-OpenCode) use in Node:

npm install @juristechllc/fireworks-serialized-provider @ai-sdk/openai-compatible

API

import { createFireworksSerialized } from "@juristechllc/fireworks-serialized-provider";

const sdk = createFireworksSerialized({
  name: "fireworks-ai",
  baseURL: "https://api.fireworks.ai/inference/v1",
  apiKey: process.env.FIREWORKS_API_KEY,
  // Any other @ai-sdk/openai-compatible options (headers, fetch, ...) pass
  // through untouched.
});

const model = sdk.languageModel("accounts/fireworks/models/kimi-k2p5");
// `model` conforms to LanguageModelV3; doStream() output is clipped to one
// tool-call per stream.

The factory export is named createFireworksSerialized so the OpenCode provider loader's discovery heuristic (Object.keys(mod).find(k => k.startsWith("create"))) picks it up; this matches the pattern documented in sst/opencode provider.ts.
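For a quick end-to-end check outside OpenCode, the wrapped model can be handed to the AI SDK like any other provider model. A sketch, assuming the ai package's generateText is available and FIREWORKS_API_KEY is set; exact option names depend on the installed AI SDK version.

import { generateText } from "ai";
import { createFireworksSerialized } from "@juristechllc/fireworks-serialized-provider";

// Sketch only: assumes the `ai` package is installed alongside the provider.
const sdk = createFireworksSerialized({
  name: "fireworks-ai",
  baseURL: "https://api.fireworks.ai/inference/v1",
  apiKey: process.env.FIREWORKS_API_KEY,
});

const { text } = await generateText({
  model: sdk.languageModel("accounts/fireworks/models/kimi-k2p5"),
  prompt: "Summarize the last deployment log.",
  // With tools attached, each response surfaces at most one tool call; the
  // agent loop resolves it and re-issues the completion for the next one.
});

console.log(text);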

Tests

cd OpenWork/starter/apps/fireworks-serialized-provider
node --test __tests__/*.test.mjs

16 tests total: 12 cover the pure transform logic, 4 cover the factory via a mocked @ai-sdk/openai-compatible module loaded through a custom Node ESM loader hook.
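For reference, a pure-transform test can stay entirely in-memory. The sketch below is hypothetical: the real suite's export names, file paths, and stream-part shapes may differ.

// __tests__/example.sketch.test.mjs — illustrative only.
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical pure function: takes the upstream chunk sequence and returns
// the clipped sequence the provider would emit.
import { serializeToolCalls } from "../src/serialize.js";

test("a second tool-call is suppressed and replaced by a finish part", () => {
  const upstream = [
    { type: "tool-call", toolCallId: "a", toolName: "read_file", input: "{}" },
    { type: "tool-call", toolCallId: "b", toolName: "write_file", input: "{}" },
    { type: "finish", finishReason: "stop" },
  ];

  const out = serializeToolCalls(upstream);

  assert.equal(out.filter((p) => p.type === "tool-call").length, 1);
  assert.deepEqual(out.at(-1), { type: "finish", finishReason: "tool-calls" });
});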

Publishing

Releases are cut via the Publish fireworks-serialized-provider GitHub Actions workflow, triggered manually from the Actions tab with a version input. The workflow runs pnpm install, executes the test suite, verifies that the package.json version matches the input, and publishes to the public npm registry using NPM_TOKEN.