@knocklabs/agent-toolkit

v0.5.1

A toolkit for working with Knock in Agent workflows.

Knock Agent Toolkit (Beta)

Getting started

The Knock Agent Toolkit enables popular agent frameworks, including the OpenAI SDK and Vercel's AI SDK, to integrate with Knock's APIs using tools (also known as function calling). It also allows you to integrate Knock into a Model Context Protocol (MCP) client such as Cursor, Windsurf, or Claude Code.

Using the Knock Agent Toolkit allows you to build powerful agent systems that can send cross-channel notifications to the humans who need to be in the loop. As a developer, it also helps you build Knock integrations and manage your Knock account.

You can read more in the documentation.

API reference

The Knock Agent Toolkit provides five main entry points:

  • @knocklabs/agent-toolkit/ai-sdk: Helpers for integrating with Vercel's AI SDK.
  • @knocklabs/agent-toolkit/langchain: Helpers for integrating with Langchain's JS SDK.
  • @knocklabs/agent-toolkit/mastra: Helpers for integrating with the Mastra framework.
  • @knocklabs/agent-toolkit/openai: Helpers for integrating with the OpenAI SDK.
  • @knocklabs/agent-toolkit/modelcontextprotocol: Low-level helpers for integrating with the Model Context Protocol (MCP).

Prerequisites

You'll need a Knock account and a service token to authenticate the toolkit's requests. The examples below use placeholder tokens such as kst_12345.

Available tools

The Agent Toolkit exposes a large subset of the Knock Management API and the Knock API that you might need to invoke via an agent. You can see the full list of tools in the source code.

Context

It's possible to pass additional context in the configuration of each entry point to help scope the calls the Agent Toolkit makes to Knock (a short sketch follows the list below). The available properties are:

  • environment: The slug of the Knock environment you wish to execute actions in by default, such as development.
  • userId: The user ID of the current user. When set, this will be the default passed to user tools.
  • tenantId: The ID of the current tenant. When set, this will be the default passed to any tool that accepts a tenant.
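
For example, a minimal sketch using the AI SDK entry point from the Usage section below (the service token and IDs are placeholders):

import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";

const toolkit = await createKnockToolkit({
  serviceToken: "kst_12345",
  permissions: {
    workflows: { read: true, run: true, manage: true },
  },
  // Context: default environment, user, and tenant applied to tool calls
  environment: "development",
  userId: "user_123",
  tenantId: "tenant_456",
});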

Usage

Model Context Protocol (MCP)

To start using the Knock MCP server locally, you must start it with a service token. You can run it using npx.

npx -y @knocklabs/agent-toolkit -p local-mcp --service-token kst_12345

By default, the MCP server will expose all tools to the LLM. To limit the tools available, use the --tools (-t) flag:

# Pass all tools
npx -y @knocklabs/agent-toolkit -p local-mcp --tools="*"

# A specific category
npx -y @knocklabs/agent-toolkit -p local-mcp --tools "workflows.*"

# Specific tools
npx -y @knocklabs/agent-toolkit -p local-mcp --tools "workflows.triggerWorkflow"

If you wish to enable workflows-as-tools within the MCP server, you must use the --workflows flag to pass in a list of approved workflow keys to expose. This ensures that you keep the number of tools exposed to your MCP client to a minimum.

npx -y @knocklabs/agent-toolkit -p local-mcp --workflows comment-created activate-account

It's also possible to pass environment, userId, and tenant values to the local MCP server to set defaults, as sketched below. Use the --help flag to view additional server options.
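
For instance (the flag names below are assumptions inferred from the context options above, so confirm them with --help before relying on them):

# Hypothetical flag names — verify with --help
npx -y @knocklabs/agent-toolkit -p local-mcp --service-token kst_12345 \
  --environment development --user-id user_123 --tenant tenant_456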

AI SDK

The Agent Toolkit provides a createKnockToolkit helper under the /ai-sdk path for easily integrating with the AI SDK and returning tools that are ready for use.

  1. Install the package:
npm install @knocklabs/agent-toolkit
  2. Import the createKnockToolkit helper, configure it, and use it in your LLM call:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { systemPrompt } from "@/lib/ai/prompts";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      workflows: { read: true, run: true, manage: true },
    },
  });

  const result = streamText({
    model: openai("gpt-4o"),
    system: systemPrompt,
    messages,
    tools: {
      // The tools given here are determined by the `permissions`
      // list in the configuration above. For instance, here we're
      // only exposing the workflow tools.
      ...toolkit.getAllTools(),
    },
  });

  return result.toDataStreamResponse();
}
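
Note that streamText runs a single generation step by default, so the model can call a Knock tool but won't automatically continue with the tool's result. Depending on your AI SDK version, you can opt in to multi-step tool use; for example (maxSteps is an AI SDK option, not part of the Knock toolkit):

  const result = streamText({
    model: openai("gpt-4o"),
    system: systemPrompt,
    messages,
    tools: { ...toolkit.getAllTools() },
    // Let the model take follow-up steps after receiving tool results
    maxSteps: 5,
  });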

OpenAI

The Agent Toolkit provides a createKnockToolkit helper under the /openai path for easily integrating with the OpenAI SDK and returning tools that are ready for use.

  1. Install the package:
npm install @knocklabs/agent-toolkit
  2. Import the createKnockToolkit helper, configure it, and use it in your LLM call:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/openai";
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  // Example prompt (placeholder)
  const messages = [
    { role: "user" as const, content: "Trigger the comment-created workflow." },
  ];

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    // The tools given here are determined by the `permissions`
    // list in the configuration above. For instance, here we're
    // only exposing the workflow tools.
    tools: toolkit.getAllTools(),
  });

  // Execute the tool calls from the assistant's message
  const message = completion.choices[0].message;
  const toolMessages = await Promise.all(
    (message.tool_calls ?? []).map((tc) => toolkit.handleToolCall(tc))
  );
}

main();
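
To feed the tool results back to the model for a final answer, append them to the message history and make a second completion call. A minimal sketch, assuming handleToolCall resolves to a chat tool message (verify against the toolkit's types):

  // Continuing inside main(), after executing the tool calls above:
  const followUp = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [...messages, message, ...toolMessages],
  });

  console.log(followUp.choices[0].message.content);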

Langchain

The Agent Toolkit provides a createKnockToolkit helper under the /langchain path for easily integrating with the Langchain JS SDK and returning tools that are ready for use.

  1. Install the package:
npm install @knocklabs/agent-toolkit
  2. Import the createKnockToolkit helper, configure it, and use it in your LLM call:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { LangChainAdapter } from "ai";

const systemPrompt = `You are a helpful assistant.`;

export const maxDuration = 30;

export async function POST(req: Request) {
  const { prompt } = await req.json();
  // Optional: resolve your auth context here (e.g. from your auth provider)
  // if you need to scope tool calls to the current user

  // Instantiate a new Knock toolkit
  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // (optional but recommended): Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

  const modelWithTools = model.bindTools(toolkit.getAllTools());

  const messages = [new SystemMessage(systemPrompt), new HumanMessage(prompt)];
  const aiMessage = await modelWithTools.invoke(messages);
  messages.push(aiMessage);

  for (const toolCall of aiMessage.tool_calls || []) {
    // Call the selected tool by its `name`
    const selectedTool = toolkit.getToolMap()[toolCall.name];
    const toolMessage = await selectedTool.invoke(toolCall);

    messages.push(toolMessage);
  }

  // To simplify the setup, this example uses the ai-sdk langchain adapter
  // to stream the results back to the /langchain page.
  // For more details, see: https://sdk.vercel.ai/providers/adapters/langchain
  const stream = await modelWithTools.stream(messages);
  return LangChainAdapter.toDataStreamResponse(stream);
}

Mastra

The Agent Toolkit provides a createKnockToolkit helper under the /mastra path for easily integrating with the Mastra framework and returning tools that are ready for use.

  1. Install the package:
npm install @knocklabs/agent-toolkit
  2. Import the createKnockToolkit helper, configure it, and use it when defining your agent:
import { anthropic } from "@ai-sdk/anthropic";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { createKnockToolkit } from "@knocklabs/agent-toolkit/mastra";

const toolkit = await createKnockToolkit({
  serviceToken: "knock_st_",
  permissions: {
    // (optional but recommended): Set the permissions of the tools to expose
    workflows: { read: true, run: true, manage: true },
  },
  userId: "10",
});

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `You are a helpful weather assistant that provides accurate weather information.`,
  model: anthropic("claude-3-5-sonnet-20241022"),
  tools: toolkit.getAllTools(),
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:../mastra.db", // path is relative to the .mastra/output directory
    }),
  }),
});
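
With the agent defined, you can then generate a response. A minimal sketch using Mastra's generate API (the prompt is a placeholder):

// Ask the agent a question; it can call the exposed Knock tools as needed
const response = await weatherAgent.generate("What's the weather in Paris?");
console.log(response.text);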