
@augment-adk/augment-adk · v0.1.12

Agent Development Kit for multi-agent orchestration over the Responses API via LlamaStack

Downloads: 1,442

Augment ADK

A lightweight TypeScript SDK for building multi-agent workflows over the Responses API via LlamaStack. Inspired by the OpenAI Agents JS SDK, it has zero external runtime dependencies.

Core concepts

  1. Agents: LLMs configured with instructions, tools, guardrails, and handoffs
  2. Handoffs: Delegating to other agents via typed agent graphs with validation
  3. Tools: Function tools, MCP tool integration, and hosted tool factories
  4. Guardrails: Configurable safety checks for input and output validation
  5. Human in the loop: Built-in approval store with MCP approval flow
  6. Sessions: Conversation history management across agent runs
  7. Tracing: Built-in tracking of agent runs for debugging and optimization
  8. Streaming: SSE normalization for real-time token streaming

Explore the examples/ directory to see the SDK in action.
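The "typed agent graphs with validation" idea behind handoffs can be illustrated generically. This sketch is not the SDK's actual API; the `AgentNode` type and `findInvalidHandoffs` helper are hypothetical names used only to show the concept of checking a graph's handoff targets before running it:

```typescript
// Hypothetical illustration of handoff validation over a typed agent
// graph -- not the Augment ADK API itself.
interface AgentNode {
  name: string;
  instructions: string;
  handoffs: string[]; // names of agents this agent may delegate to
}

type AgentGraph = Record<string, AgentNode>;

// Return every handoff edge whose target agent is missing from the graph.
function findInvalidHandoffs(graph: AgentGraph): string[] {
  const known = new Set(Object.keys(graph));
  const missing: string[] = [];
  for (const agent of Object.values(graph)) {
    for (const target of agent.handoffs) {
      if (!known.has(target)) missing.push(`${agent.name} -> ${target}`);
    }
  }
  return missing;
}

const graph: AgentGraph = {
  router: {
    name: 'Router',
    instructions: 'Route requests to a specialist.',
    handoffs: ['billing', 'support'],
  },
  billing: {
    name: 'Billing',
    instructions: 'Handle billing questions.',
    handoffs: [],
  },
};

// 'support' is referenced but never defined, so validation flags it.
console.log(findInvalidHandoffs(graph)); // ['Router -> support']
```

Validating the graph up front turns a broken delegation into an immediate error rather than a runtime failure mid-conversation.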

Get started

Supported environments

  • Node.js 18 or later
  • Any TypeScript runtime (Deno, Bun)

Installation

```shell
npm install @augment-adk/augment-adk
```

Run your first agent

```typescript
import { run, LlamaStackModel } from '@augment-adk/augment-adk';

const model = new LlamaStackModel({
  clientConfig: { baseUrl: 'http://localhost:8321' },
});

const result = await run('What is the capital of France?', {
  model,
  agents: {
    assistant: {
      name: 'Assistant',
      instructions: 'You are a helpful assistant.',
    },
  },
  defaultAgent: 'assistant',
  config: {
    model: 'meta-llama/Llama-3.1-8B-Instruct',
    baseUrl: 'http://localhost:8321',
    systemPrompt: 'You are a helpful assistant.',
    enableWebSearch: false,
    enableCodeInterpreter: false,
    vectorStoreIds: [],
    vectorStoreName: '',
    embeddingModel: '',
    embeddingDimension: 384,
    chunkingStrategy: 'auto',
    maxChunkSizeTokens: 800,
    chunkOverlapTokens: 400,
    skipTlsVerify: true,
    zdrMode: false,
    verboseStreamLogging: false,
  },
});

console.log(result.content);
```

Optional: Chat Completions for local development

For local testing with Ollama, vLLM, or other Chat Completions providers, install the optional adapter:

```shell
npm install @augment-adk/adk-chat-completions
```

```typescript
import { run } from '@augment-adk/augment-adk';
import { ChatCompletionsModel } from '@augment-adk/adk-chat-completions';

const model = new ChatCompletionsModel({
  clientConfig: {
    baseUrl: 'http://localhost:11434',
    token: process.env.API_KEY,
  },
});
```

See the chat-completions example for a complete walkthrough.

Examples

| Example | Description |
|---------|-------------|
| basic | Single-agent question answering |
| chat-completions | Chat Completions backend via optional @augment-adk/adk-chat-completions |
| multi-agent | Router + specialist agent graph with handoffs |
| mcp-tools | Function tools and hosted MCP tool integration |
| human-in-the-loop | Approval workflows for destructive operations |
| backstage-plugin | Integrating ADK into a Backstage backend plugin |

Packages

The SDK is organized as a monorepo with focused packages:

| Package | Description |
|---------|-------------|
| @augment-adk/augment-adk | Batteries-included entry point (core + LlamaStack) |
| @augment-adk/adk-core | Provider-agnostic core: agents, runner, tools, guardrails, approval, streaming, tracing |
| @augment-adk/adk-llamastack | LlamaStack Responses API model provider |
| @augment-adk/adk-chat-completions | Chat Completions adapter (optional, separate install) |

Most users should install @augment-adk/augment-adk. Advanced consumers can import individual packages for lighter bundles:

```typescript
import { run, Agent } from '@augment-adk/adk-core';
import { LlamaStackModel } from '@augment-adk/adk-llamastack';
```

Architecture

See ARCHITECTURE.md for design decisions, extension points, and how the run loop works. Start there if you want to add a new model provider, integrate with a different framework, or understand how the codebase is structured.

Development

```shell
# Install dependencies
pnpm install

# Build all packages (in dependency order)
pnpm -r build

# Run all tests
pnpm -r test

# Type-check all packages
pnpm -r typecheck

# Lint
pnpm -r lint
```

Acknowledgements

We'd like to acknowledge the excellent work of the open-source community, especially:

License

Apache-2.0 — see LICENSE.