
@archships/dim-agent-sdk v0.0.36

An agent-first TypeScript SDK with canonical multi-provider contracts, session/tool loop, hook-based plugins, runtime gateways, and builtin local coding tools.

Install

Recommended runtime: Node.js >=18.

npm install @archships/dim-agent-sdk

Quick start

import { createAgent, createModel, createOpenAIAdapter } from '@archships/dim-agent-sdk'

const model = createModel(
  createOpenAIAdapter({
    apiKey: process.env.OPENAI_API_KEY,
    baseUrl: 'https://api.openai.com/v1',
    defaultModel: 'gpt-4o-mini',
  }),
)

const agent = createAgent({
  model,
  cwd: process.cwd(),
})

const session = await agent.createSession({
  systemPrompt: 'You are a coding agent. Use tools when needed.',
})

const itemId = session.send(
  'Create hello.txt in the current directory and write hello world into it.',
)

for await (const event of session.receive()) {
  if (event.itemId !== itemId) continue
  if (event.type === 'text_delta') process.stdout.write(event.delta)
  if (event.type === 'done') console.log(event.message.content)
}

When a session has tools available, the runtime also injects an internal tool-call JSON contract into the effective system prompt. Models are told to emit one complete JSON object per tool call. If a streamed tool call still arrives with malformed pre-execution JSON, the SDK now surfaces it as a failed recoverable tool_result, stores that failed result in session history when the tool name is known, and keeps the run alive instead of failing the whole item immediately.
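A host loop can treat those recoverable failures separately from ordinary results. The sketch below uses a local type that only mirrors the documented event names; field names beyond `type` are illustrative assumptions, not the SDK's actual event contract:

```typescript
// Minimal stand-in for the tool_result session event described above.
// Fields other than `type` are illustrative assumptions.
type ToolResultEvent = {
  type: 'tool_result'
  toolName: string
  recoverable: boolean
  output: string
}

// Decide how a host might react to a tool result event: a recoverable
// failure (e.g. malformed pre-execution JSON) keeps the run alive, so the
// host can log it and let the model try again instead of failing the item.
function classifyToolResult(event: ToolResultEvent): 'retryable-failure' | 'result' {
  return event.recoverable ? 'retryable-failure' : 'result'
}
```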

Provider debug logging

Builtin provider adapters support optional adapter-level request logging. Pass debug.logFilePath when constructing the adapter to write JSONL records for normalized model requests, AI SDK call options, final HTTP requests, and normalized provider errors. This feature is off by default, and secret-bearing fields such as authorization / apiKey are redacted unless you explicitly set redactSecrets: false.

import path from 'node:path'
import { createModel, createOpenAIAdapter } from '@archships/dim-agent-sdk'

const model = createModel(
  createOpenAIAdapter({
    apiKey: process.env.OPENAI_API_KEY,
    baseUrl: 'https://api.openai.com/v1',
    defaultModel: 'gpt-4o-mini',
    debug: {
      logFilePath: path.join(process.cwd(), 'logs/provider-debug.jsonl'),
    },
  }),
)
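Because the debug log is JSONL (one JSON record per line), offline inspection only needs a line-by-line parse. This small helper assumes nothing about the record shape beyond each non-empty line being valid JSON:

```typescript
// Parse a JSONL string (e.g. the contents of provider-debug.jsonl)
// into an array of records, skipping blank lines.
function parseJsonl(text: string): unknown[] {
  return text
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
}
```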

Manual

Included core capabilities

  • Canonical content / message / tool / model / state contracts
  • Canonical message content aligned to AI SDK V3 prompt parts: text and file
  • createAgent() -> Agent -> Session
  • Queue-first session flow: send(), sendBatch(), steer(), receive(), getQueueStatus()
  • Long-lived host hygiene: Session.dispose() releases session-scoped runtime registrations and controller-owned resources when a session is no longer needed
  • Session events: text_delta, optional thinking_delta, tool_call_start, tool_call_args_delta, tool_call_end, tool_call, plugin_event, recoverable and regular tool_result, done, error
  • Provider adapters: openai-compatible, openai-responses, anthropic, gemini, zenmux, aihubmix, aihubmix-responses, moonshotai, deepseek, xai, xai-responses
  • Builtin tools: read, write, edit, exec
  • Hook-first plugin integration
  • Host-layer ordered subagents with SubagentOrchestrator, InProcessSessionSubagentExecutor, ProcessSessionSubagentExecutor, and SubagentProcessRegistry
  • Runtime gateways: file system, git, exec, network, model
  • Namespaced plugin session state with snapshot restore support
  • In-memory and file-based persistence

Hook support

Supported public hooks in the current runtime:

  • run.start
  • tool.beforeExecute
  • tool.afterExecute
  • context.compact.before
  • notify.message
  • run.stop
  • run.end
  • session.error

Reserved / experimental hook names that are typed but not wired into the runtime yet:

  • subagent.stop

Current failure policy:

  • Sync middleware is blocking and fail-fast
  • Observers are best-effort
  • mode: 'async' is observer-only
  • timeoutMs applies per hook handler
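The per-handler `timeoutMs` policy can be pictured as a race between the handler and a timer. This is an illustrative sketch of the policy, not the SDK's internal implementation:

```typescript
// Run one hook handler with its own timeout, mirroring the
// "timeoutMs applies per hook handler" policy described above.
async function runWithTimeout<T>(
  handler: () => Promise<T>,
  timeoutMs: number,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout>
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`hook handler timed out after ${timeoutMs}ms`)),
      timeoutMs,
    )
  })
  try {
    return await Promise.race([handler(), timeout])
  } finally {
    clearTimeout(timer!)
  }
}
```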

Official plugin packages

| Package | Support level | Notes |
| --- | --- | --- |
| @archships/dim-plugin-auto-compact | supported | Official auto compaction plugin; requires compaction.ownerPluginId: 'auto-compact' |
| @archships/dim-plugin-grep-glob | supported | Registers grep and glob filesystem tools |
| @archships/dim-plugin-mcp-client | supported | Controller-driven MCP client for session-scoped server connections and tool injection; use @archships/dim-agent-sdk >= 0.0.23 and @archships/dim-plugin-api >= 0.0.9; host-only prompt/context/tool injection remains compatible |
| @archships/dim-plugin-skills | supported | Metadata-first file-backed skills plugin with always-on catalog metadata, model-callable activation tools, and a session controller |
| @archships/dim-plugin-plan-mode | supported | Session-scoped planning guardrail with host-data-backed drafts, restricted exec, and plan_read / plan_write; use @archships/dim-agent-sdk >= 0.0.23 and @archships/dim-plugin-api >= 0.0.9 |
| @archships/dim-plugin-memory | experimental | Placeholder package; not part of the current supported surface yet |
| @archships/dim-plugin-web | experimental | Placeholder package; not part of the current supported surface yet |
| @archships/dim-plugin-scheduler | experimental | Placeholder package; not part of the current supported surface yet |
| @archships/dim-plugin-research-mode | experimental | Evidence-first research guardrail with a session controller, staged prompt injection, research state, and delegate_tasks guidance |

Compaction and state model

  • session.messages always keeps the full original history for UI and restore
  • systemPrompt is text-only; user messages can carry mixed text + file content
  • Canonical request projection is controlled by SDK compaction state: cursor, systemSegments, checkpoints
  • Session.getStatus() returns the canonical read-only session status snapshot used by hook runtime context
  • Session.getPlugin(pluginId) returns a session-scoped plugin controller when the plugin exposes one
  • Session.dispose() unregisters the session from compaction and plugin-state services, and also disposes session-scoped plugin controllers such as mcp-client
  • Plugins can persist their own namespaced session state through pluginState
  • If compaction.ownerPluginId is configured, only that plugin can write canonical compaction through plugin services
  • Session.compact() remains available as an app-level override
  • estimateByHeuristic({ messages, tools, model }) is exported for hosts and plugins that want the same default request-budget estimate the SDK uses when compaction.estimator is unset; file blocks use a fixed token budget instead of raw base64 length
  • If threshold compaction still cannot fit the next request, the runtime can do one bounded recovery pass that rewrites only the newest oversized result payloads (tool outputs and subagent parent-commit packages) into short overflow summaries before finally raising context_compaction_required
  • Hook handlers receive the same canonical status through context.status, without exposing full message history or other plugins' state
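As a rough mental model of a request-budget estimate like estimateByHeuristic, the toy function below charges roughly four characters per token for text and a fixed budget for file blocks (rather than counting raw base64 length). The constants and types here are illustrative, not the SDK's actual numbers or shapes:

```typescript
// Toy canonical content block, illustration only.
type Block = { kind: 'text'; text: string } | { kind: 'file' }

// Illustrative fixed token budget for a file block.
const FILE_BLOCK_TOKENS = 1024

// Toy request-budget estimate: ~4 characters per token for text,
// a fixed budget per file block instead of its raw base64 length.
function estimateTokens(blocks: Block[]): number {
  return blocks.reduce(
    (sum, b) =>
      sum + (b.kind === 'file' ? FILE_BLOCK_TOKENS : Math.ceil(b.text.length / 4)),
    0,
  )
}
```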

Provider notes

  • createOpenAIAdapter(): OpenAI-compatible Chat Completions style; maps reasoning_content / reasoning into thinking_delta and replays assistant thinking back upstream
  • createOpenAIResponsesAdapter(): official OpenAI Responses API; maps reasoning summaries into thinking_delta, sends full canonical history by default, and only reuses previousResponseId when usePreviousResponseId: true
  • createAnthropicAdapter(): maps Claude thinking blocks into thinking_delta; when overriding baseUrl, the adapter appends /v1 if it is missing; Anthropic-compatible routes now auto-inject explicit prompt caching unless cache.mode is set to 'off'
  • createGeminiAdapter(): maps Gemini thought parts into thinking_delta
  • createZenMuxAdapter(): ZenMux adapter; routes anthropic/* models through https://zenmux.ai/api/anthropic/v1/messages and sends every other model through the official ZenMux OpenAI-compatible endpoint at https://zenmux.ai/api/v1/chat/completions. baseUrl is kept for compatibility but ignored at runtime.
  • createAihubmixAdapter() / createAihubmixResponsesAdapter(): AIHubMix chat + responses adapters; the responses variant now sends full canonical history by default and only reuses previousResponseId when usePreviousResponseId: true. If you omit apiKey, you must inject the real upstream authentication inside custom fetch.
  • createMoonshotAIAdapter(): MoonshotAI language model adapter with thinking budget mapping
  • createDeepSeekAdapter(): DeepSeek chat adapter with reasoning -> thinking_delta
  • createXaiAdapter() / createXaiResponsesAdapter(): xAI chat + responses adapters; the responses variant now sends full canonical history by default and only reuses previousResponseId when usePreviousResponseId: true
  • Official adapters use realtime upstream streaming by default. Set streamMode: 'buffered' on the adapter, or on an individual request, to replay a completed response instead.
  • Session runtime fills missing maxOutputTokens with 4000 after model.request middleware runs. Direct model.stream() calls still pass through whatever the caller provides.
  • The SDK no longer injects implicit workspace context into model requests; if the model needs file or Git state, let it call tools explicitly.
  • Builtin providers are also available from public subpaths such as @archships/dim-agent-sdk/providers/openai and @archships/dim-agent-sdk/providers/xai-responses
  • Published packages ship these public entrypoints directly; consumers should not import dist/src/* or patch node_modules after install
  • Other deep internal imports are not part of the public API, even though package-local tests use path aliases against src/*
  • Custom providers can follow the same factory pattern via @archships/dim-agent-sdk/providers/core; implement stream() for realtime deltas, generate() for buffered results, or both

Anthropic-compatible prompt caching defaults:

  • cache.mode defaults to 'auto'
  • cache.ttl defaults to '5m'
  • cache: { mode: 'off' } disables SDK-managed cache_control injection
  • explicit providerOptions.anthropic.cacheControl on a message, content block, or tool definition is forwarded unchanged and disables auto injection for that request
For custom providers, the factory helper is exported from the provider core entrypoint:

import { createProviderFactory } from '@archships/dim-agent-sdk/providers/core'

Demo

Provider-free demos:

  • pnpm run demo:host: approval + notification host control-plane walkthrough
  • pnpm run demo:persistence: FileStateStore + session.save() + restoreSession() walkthrough
  • pnpm run demo:plan-mode: official session-scoped plugin controller + hostDataDir walkthrough
  • pnpm run demo:compaction: scripted compaction runtime walkthrough
  • pnpm run demo:auto-compact: scripted official auto compact plugin demo
  • pnpm run demo:subagents: scripted ordered subagent walkthrough for in-process and child-owned process modes

Provider-backed demos:

  • pnpm run demo:openai: builtin tools smoke demo
  • pnpm run demo:hooks: Hook v2 scenario runner

Repo demo files:

  • packages/dim-agent-sdk/demo/host-control-plane-scripted.ts
  • packages/dim-agent-sdk/demo/persistence-scripted.ts
  • packages/dim-agent-sdk/demo/plan-mode-plugin.ts
  • packages/dim-agent-sdk/demo/compaction-scripted.ts
  • packages/dim-agent-sdk/demo/auto-compact-plugin.ts
  • packages/dim-agent-sdk/demo/subagents-scripted.ts
  • packages/dim-agent-sdk/demo/openai-tools.ts
  • packages/dim-agent-sdk/demo/openai-hooks.ts

demo:hooks currently runs these provider-backed scenarios:

  • lifecycle
  • approval-deny
  • synthetic-result
  • notification-control
  • stop-finalize

Provider-backed demos use createOpenAIAdapter() against the OpenAI-compatible Chat API and require DIM_TEST_API_KEY, DIM_TEST_BASE_URL, and optional DIM_TEST_MODEL_ID.

Plan mode v2 notes:

  • plan drafts live under <hostDataDir>/plans/<sessionId>/plan.md
  • host applications drive plan mode through session.getPlugin('plan-mode')
  • plan_write is the only writable artifact exposed to the model while plan mode is active
  • enable() / disable() changes affect the next run, not the current in-flight run
  • agent.deleteSession(sessionId) removes both the persisted snapshot and the matching plan draft directory
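Resolving a plan draft location from the template above is a straightforward path join; hostDataDir and sessionId are whatever the host application configured, and this helper is just a sketch of the layout, not an SDK export:

```typescript
import path from 'node:path'

// Resolve <hostDataDir>/plans/<sessionId>/plan.md for a given session.
// path.posix keeps the separator deterministic for illustration.
function planDraftPath(hostDataDir: string, sessionId: string): string {
  return path.posix.join(hostDataDir, 'plans', sessionId, 'plan.md')
}
```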

Testing

Local workspace development expects Node.js 20.19+ or 22.12+, because repository verification uses Vite 7 + Vitest + Oxlint. Verification is split into several layers:

  • pnpm run test: full local regression, including deterministic test/e2e/*.e2e.test.ts
  • pnpm run test:e2e: deterministic end-to-end workflows for plan-mode, code-agent tool loops, auto-compact restore, and approval / permission boundaries
  • pnpm run test:plugins: focused plugin contract and integration tests
  • pnpm run test:smoke: env-gated provider smoke for provider-tools, provider-hooks, provider-plan-mode, and provider-subagent-process; excludes long-task
  • pnpm run test:smoke:providers: focused multi-provider builtin-tool smoke in test/smoke/provider-tools.smoke.ts
  • pnpm run test:smoke:long-task: default manual OpenAI-compatible long-task smoke; it currently aliases the guides case without subagents
  • pnpm run test:smoke:long-task:subagents: the same default guides case with host-owned delegation plus child-owned process subagents enabled through DIM_TEST_LONG_TASK_SUBAGENTS=1
  • pnpm run test:smoke:long-task:guides / pnpm run test:smoke:long-task:guides:subagents: explicit beginner-guide generation case for modelinfo-cli
  • pnpm run test:smoke:long-task:rust-port / pnpm run test:smoke:long-task:rust-port:subagents: review-oriented Rust port case for the same reference repo

The provider smoke layer reuses:

  • DIM_TEST_API_KEY
  • DIM_TEST_BASE_URL
  • DIM_TEST_BASE_URL_ANTHROPIC (optional; enables the Anthropic smoke lane)
  • DIM_TEST_MODEL_ID (optional)
  • DIM_TEST_LONG_TASK_CASE (optional; guides by default, also supports rust-port)
  • DIM_TEST_LONG_TASK_SUBAGENTS (optional; only 1 enables the subagent mode for the long-task smoke)

The focused multi-provider tools smoke loads repo-root .env.smoke automatically. Start from .env.smoke.example, fill only the providers you want to run, and use per-provider keys such as DIM_TEST_OPENAI_API_KEY, DIM_TEST_OPENAI_MODEL_ID, and optional DIM_TEST_OPENAI_BASE_URL. ZenMux is the exception: DIM_TEST_ZENMUX_BASE_URL is ignored because createZenMuxAdapter() now pins the official ZenMux endpoints, and you can optionally add DIM_TEST_ZENMUX_ANTHROPIC_MODEL_ID to create a second ZenMux smoke lane that exercises the Anthropic-compatible route. That smoke keeps its temporary workdirs under os.tmpdir()/dim-sdk-provider-smoke/<provider-id>/run-* and removes each run directory after the test finishes.
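A minimal .env.smoke filling in only the OpenAI lane might look like this (all values are placeholders; the variable names come from the paragraph above):

```shell
# Per-provider smoke credentials; fill only the lanes you want to run.
DIM_TEST_OPENAI_API_KEY=sk-placeholder
DIM_TEST_OPENAI_MODEL_ID=gpt-4o-mini
# DIM_TEST_OPENAI_BASE_URL=...            # optional base URL override
# DIM_TEST_ZENMUX_ANTHROPIC_MODEL_ID=...  # optional second ZenMux lane
```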

Smoke tests assert stable invariants such as tool calls, plugin_event, notifications, and on-disk side effects. They intentionally avoid exact natural-language output matching unless the smoke is explicitly about artifact structure.

The long-task smoke is tuned for diagnosis as much as pass/fail: it runs only through the OpenAI-compatible adapter, reuses a generic harness plus named cases, and preserves both the final workspace and debug logs for inspection. The default guides case generates beginner-friendly Markdown guides for every tracked file; the rust-port case asks the model to translate the same reference repo into a reviewable Rust cargo project.

Every run now prints and writes a unified summary with host rounds, model turns, tool-call count, total tokens, cache/token detail breakdowns when the provider exposes them, delegation/process-batch counts, and case-specific stats. It also writes a full-fidelity session timeline array to logs/long-task-session.json, alongside the staged debug logs in logs/long-task-smoke.jsonl, so parent and delegated child context can be replayed during debugging. Those staged diagnostics now include explicit compaction_notification and compaction_state_after_hook entries, which makes it much easier to see whether auto compact triggered and why it still failed to bring the request back under budget. Ordered subagents remain opt-in through pnpm run test:smoke:long-task:subagents or DIM_TEST_LONG_TASK_SUBAGENTS=1.