chunkr-ai-mcp

v0.1.0-alpha.14

The official MCP Server for the Chunkr API

Chunkr TypeScript MCP Server

This package is generated with Stainless.

Installation

Direct invocation

You can run the MCP Server directly via npx:

export CHUNKR_API_KEY="My API Key"
export CHUNKR_WEBHOOK_KEY="My Webhook Key"
npx -y chunkr-ai-mcp@latest

Via MCP Client

A partial list of existing clients is available at modelcontextprotocol.io. If you already have a client, consult its documentation to install the MCP server.

For clients with a configuration JSON, it might look something like this:

{
  "mcpServers": {
    "chunkr_ai_api": {
      "command": "npx",
      "args": ["-y", "chunkr-ai-mcp", "--client=claude", "--tools=all"],
      "env": {
        "CHUNKR_API_KEY": "My API Key",
        "CHUNKR_WEBHOOK_KEY": "My Webhook Key"
      }
    }
  }
}

Exposing endpoints to your MCP Client

There are two ways to expose endpoints as tools in the MCP server:

  1. Exposing one tool per endpoint, and filtering as necessary
  2. Exposing a set of tools to dynamically discover and invoke endpoints from the API

Filtering endpoints and tools

You can run the package on the command line to discover and filter the set of tools that are exposed by the MCP Server. This can be helpful for large APIs where including all endpoints at once is too much for your AI's context window.

You can filter by multiple aspects:

  • --tool includes a specific tool by name
  • --resource includes all tools under a specific resource, and can have wildcards, e.g. my.resource*
  • --operation includes just read (get/list) or just write operations
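For instance, a client configuration that exposes only the read operations on the tasks resource might look like this (a sketch; the tasks resource name is taken from the tool listing later in this README):

```json
{
  "mcpServers": {
    "chunkr_ai_api": {
      "command": "npx",
      "args": ["-y", "chunkr-ai-mcp", "--resource=tasks", "--operation=read"],
      "env": {
        "CHUNKR_API_KEY": "My API Key"
      }
    }
  }
}
```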

Dynamic tools

If you specify --tools=dynamic to the MCP server, instead of exposing one tool per endpoint in the API, it will expose the following tools:

  1. list_api_endpoints - Discovers available endpoints, with optional filtering by search query
  2. get_api_endpoint_schema - Gets detailed schema information for a specific endpoint
  3. invoke_api_endpoint - Executes any endpoint with the appropriate parameters

This allows you to make the full set of API endpoints available to your MCP client without loading all of their schemas into context at once. Instead, the LLM uses these three tools together to search for, look up, and invoke endpoints dynamically. However, because the schemas are accessed indirectly, the LLM can struggle to supply the correct properties somewhat more often than when tools are imported explicitly. You can therefore opt in to the explicit tools, the dynamic tools, or both.
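As a sketch, a client configuration that enables only the dynamic tools (same config shape as the earlier example) might look like:

```json
{
  "mcpServers": {
    "chunkr_ai_api": {
      "command": "npx",
      "args": ["-y", "chunkr-ai-mcp", "--tools=dynamic"],
      "env": {
        "CHUNKR_API_KEY": "My API Key"
      }
    }
  }
}
```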

See more information with --help.

All of these command-line options can be repeated and combined, and each has a corresponding exclusion version (e.g. --no-tool).

Use --list to see the list of available tools, or see below.

Specifying the MCP Client

Different clients have varying abilities to handle arbitrary tools and schemas.

You can specify the client you are using with the --client argument, and the MCP server will automatically serve tools and schemas that are more compatible with that client.

  • --client=<type>: Set all capabilities based on a known MCP client

    • Valid values: openai-agents, claude, claude-code, cursor
    • Example: --client=cursor

Additionally, if you have a client not on the above list, or the client has gotten better over time, you can manually enable or disable certain capabilities:

  • --capability=<name>: Specify individual client capabilities
    • Available capabilities:
      • top-level-unions: Enable support for top-level unions in tool schemas
      • valid-json: Enable JSON string parsing for arguments
      • refs: Enable support for $ref pointers in schemas
      • unions: Enable support for union types (anyOf) in schemas
      • formats: Enable support for format validations in schemas (e.g. date-time, email)
      • tool-name-length=N: Set maximum tool name length to N characters
    • Example: --capability=top-level-unions --capability=tool-name-length=40
    • Example: --capability=top-level-unions,tool-name-length=40

Examples

  1. Filter for read operations on tasks:
     --resource=tasks --operation=read
  2. Exclude specific tools while including others:
     --resource=tasks --no-tool=delete_tasks
  3. Configure for the Cursor client with a custom max tool name length:
     --client=cursor --capability=tool-name-length=40
  4. Complex filtering with multiple criteria:
     --resource=tasks,files --operation=read --no-tool=delete_tasks

Running remotely

Launching the server with --transport=http runs it as a remote server using the Streamable HTTP transport. The --port option sets the port it listens on, and the --socket option runs it on a Unix socket instead.
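For example, the remote server might be launched like this (a sketch using the flags described above, mirroring the direct-invocation example):

```shell
export CHUNKR_API_KEY="My API Key"
npx -y chunkr-ai-mcp@latest --transport=http --port=3000
```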

Authorization can be provided via the following headers:

  Header             Equivalent client option   Security scheme
  x-chunkr-api-key   apiKey                     api_key

A configuration JSON for this server might look like this, assuming the server is hosted at http://localhost:3000:

{
  "mcpServers": {
    "chunkr_ai_api": {
      "url": "http://localhost:3000",
      "headers": {
        "x-chunkr-api-key": "My API Key"
      }
    }
  }
}

The command-line arguments for filtering tools and specifying clients can also be used as query parameters in the URL. For example, to exclude specific tools while including others, use the URL:

http://localhost:3000?resource=tasks&resource=files&no_tool=delete_tasks

Or, to configure for the Cursor client, with a custom max tool name length, use the URL:

http://localhost:3000?client=cursor&capability=tool-name-length%3D40
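The %3D in that URL is simply the percent-encoded = inside the capability value. A small sketch of building such a URL with Node's standard URLSearchParams (the base URL is just the example host above):

```javascript
// Build the query string equivalent of:
//   --client=cursor --capability=tool-name-length=40
const params = new URLSearchParams();
params.set("client", "cursor");
params.set("capability", "tool-name-length=40");

// URLSearchParams percent-encodes the "=" inside the value as %3D
const url = `http://localhost:3000?${params.toString()}`;
console.log(url);
```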

Importing the tools and server individually

// Import the server, generated endpoints, or the init function
import { server, endpoints, init } from "chunkr-ai-mcp/server";

// Supporting imports used in the examples below (MCP SDK and zod)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// import a specific tool
import listTasks from "chunkr-ai-mcp/tools/tasks/list-tasks";

// initialize the server and all endpoints
init({ server, endpoints });

// manually start server
const transport = new StdioServerTransport();
await server.connect(transport);

// or initialize your own server with specific tools
const myServer = new McpServer(...);

// define your own endpoint
// define your own endpoint
const myCustomEndpoint = {
  tool: {
    name: 'my_custom_tool',
    description: 'My custom tool',
    inputSchema: zodToJsonSchema(z.object({ a_property: z.string() })),
  },
  handler: async (client: client, args: any) => {
    return { myResponse: 'Hello world!' };
  },
};

// initialize the server with your custom endpoints
init({ server: myServer, endpoints: [listTasks, myCustomEndpoint] });

Available Tools

The following tools are available in this MCP server.

Resource tasks:

  • list_tasks (read): Lists tasks for the authenticated user with cursor-based pagination and optional filtering by date range. Supports ascending or descending sort order and optional inclusion of chunks/base64 URLs.

  • delete_tasks (write): Delete a task by its ID.

    Requirements:

    • Task must have status Succeeded or Failed
  • cancel_tasks (read): Cancel a task that hasn't started processing yet:

    • For new tasks: Status will be updated to Cancelled
    • For updating tasks: Task will revert to the previous state

    Requirements:

    • Task must have status Starting
  • get_tasks (read): Retrieves the current state of a task.

    Returns task details such as processing status, configuration, output (when available), file metadata, and timestamps.

    Typical uses:

    • Poll a task during processing
    • Retrieve the final output once processing is complete
    • Access task metadata and configuration

Resource tasks.extract:

  • create_tasks_extract (write): Queues a document/parsed task for extraction and returns a TaskResponse with the assigned task_id, initial configuration, file metadata, and timestamps. The initial status is Starting.

    Creates an extract task and returns its metadata immediately.

  • get_tasks_extract (read): Retrieves the current state of an extract task.

    Returns task details such as processing status, configuration, output (when available), file metadata, and timestamps.

    Typical uses:

    • Poll a task during processing
    • Retrieve the final output once processing is complete
    • Access task metadata and configuration

Resource tasks.parse:

  • create_tasks_parse (write): Queues a document for processing and returns a TaskResponse with the assigned task_id, initial configuration, file metadata, and timestamps. The initial status is Starting.

    Creates a parse task and returns its metadata immediately.

  • get_tasks_parse (read): Retrieves the current state of a parse task.

    Returns task details such as processing status, configuration, output (when available), file metadata, and timestamps.

    Typical uses:

    • Poll a task during processing
    • Retrieve the final output once processing is complete
    • Access task metadata and configuration

Resource files:

  • create_files (write): Accepts multipart/form-data with fields:

    • file: binary (required)
    • file_metadata: string (optional, JSON string)
  • list_files (read): Lists files for the authenticated user with cursor-based pagination and optional filtering by date range.

  • delete_files (write): Delete file contents and scrub sensitive metadata. Minimal metadata is retained for audit and usage reporting per ZDR policy.

  • content_files (read): Streams the file bytes directly if authorized. The response will set the Content-Type header to the file's detected MIME type.

  • get_files (read): Returns metadata for a file owned by the authenticated user. The response includes a permanent ch://files/{file_id} URL, file name, content type, size, user-provided metadata, and timestamps.

    If the file is not found or the user is not authorized, the response will be 401 Unauthorized.

  • url_files (read): Returns a presigned download URL by default. If base64_urls=true, returns base64-encoded file content. Control expiry with expires_in (seconds).

Resource health:

  • check_health (read): Confirmation that the service can respond to requests

Resource webhooks:

  • url_webhooks (read): Get or create webhook for user and return dashboard URL

Resource file-types:

  • get_file_types (read): Returns a list of all file types supported by Chunkr, grouped by category. Each category contains a list of formats, where each format includes an extension paired with its corresponding MIME type.