
strumento

v1.2.0

Published

strumento is Partitura's built-in AI model router: the tool each agent uses to play their role.

Downloads

41

Readme

  • Model Routing: Route requests to different models based on your needs (e.g., gpt-5-mini for specialized tasks).
  • Multi-Provider Support: Supports multiple model providers, including OpenRouter, Anthropic, and any OpenAI-compatible API.
  • Request/Response Transformation: Customize requests and responses for different providers using transformers.
  • Dynamic Model Switching: Switch models on-the-fly within Claude Code's interface using the /model command.
  • CLI Model Management: Manage models and providers directly from the terminal with strumento model.
  • GitHub Actions Integration: Trigger Claude Code tasks in your GitHub workflows.
  • Plugin System: Extend functionality with custom transformers.

🚀 Getting Started

1. Installation

First, ensure you have Claude Code installed:

npm install -g @anthropic-ai/claude-code

Then, install Strumento:

npm install -g strumento-cli

2. Configuration

Create and configure your ~/.strumento/config.json file. For more details, you can refer to config.example.json.

The config.json file has several key sections:

  • PROXY_URL (optional): You can set a proxy for API requests, for example: "PROXY_URL": "http://127.0.0.1:7890".

  • LOG (optional): You can enable logging by setting it to true. When set to false, no log files will be created. Default is true.

  • LOG_LEVEL (optional): Set the logging level. Available options are: "fatal", "error", "warn", "info", "debug", "trace". Default is "debug".

  • Logging Systems: Strumento uses two separate logging systems:

    • Server-level logs: HTTP requests, API calls, and server events are logged using pino in the ~/.strumento/logs/ directory with filenames like strumento-*.log
    • Application-level logs: Routing decisions and business logic events are logged in ~/.strumento/strumento.log
  • APIKEY (optional): You can set a secret key to authenticate requests. When set, clients must provide this key in the Authorization header (e.g., Bearer your-secret-key) or the x-api-key header. Example: "APIKEY": "your-secret-key".

  • HOST (optional): You can set the host address for the server. If APIKEY is not set, the host will be forced to 127.0.0.1 for security reasons to prevent unauthorized access. Example: "HOST": "0.0.0.0".

  • NON_INTERACTIVE_MODE (optional): When set to true, enables compatibility with non-interactive environments like GitHub Actions, Docker containers, or other CI/CD systems. This sets appropriate environment variables (CI=true, FORCE_COLOR=0, etc.) and configures stdin handling to prevent the process from hanging in automated environments. Example: "NON_INTERACTIVE_MODE": true.

  • Providers: Used to configure different model providers.

  • Router: Used to set up routing rules. default specifies the default model, which will be used for all requests if no other route is configured.

  • API_TIMEOUT_MS: Specifies the timeout for API calls in milliseconds.
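The APIKEY check described above (Bearer token or x-api-key header) can be sketched as a small request guard. This is a hypothetical helper for illustration, not Strumento's actual code:

```javascript
// Hypothetical sketch of the APIKEY check described above: accept the key
// from either the Authorization (Bearer) header or the x-api-key header.
// Header names are assumed lowercased, as Node's http module provides them.
function isAuthorized(headers, apiKey) {
  if (!apiKey) return true; // no APIKEY configured: auth is disabled
  if (headers["authorization"] === `Bearer ${apiKey}`) return true;
  return headers["x-api-key"] === apiKey;
}

console.log(isAuthorized({ authorization: "Bearer your-secret-key" }, "your-secret-key")); // true
console.log(isAuthorized({ "x-api-key": "wrong" }, "your-secret-key")); // false
```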

Environment Variable Interpolation

Strumento supports environment variable interpolation for secure API key management. You can reference environment variables in your config.json using either $VAR_NAME or ${VAR_NAME} syntax:

{
  "OPENAI_API_KEY": "$OPENAI_API_KEY",
  "GEMINI_API_KEY": "${GEMINI_API_KEY}",
  "Providers": [
    {
      "name": "openai",
      "api_base_url": "https://api.openai.com/v1/chat/completions",
      "api_key": "$OPENAI_API_KEY",
      "models": ["gpt-5", "gpt-5-mini"]
    }
  ]
}

This allows you to keep sensitive API keys in environment variables instead of hardcoding them in configuration files. The interpolation works recursively through nested objects and arrays.
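As a rough illustration of the behavior described above (a sketch only, not Strumento's actual implementation), such recursive interpolation could be implemented like this:

```javascript
// Sketch of $VAR / ${VAR} interpolation over a config object.
// Walks strings, arrays, and nested objects recursively; unknown
// variables are replaced with "" here, though a real implementation
// might leave them untouched or raise an error.
function interpolate(value, env) {
  if (typeof value === "string") {
    return value
      .replace(/\$\{([A-Za-z0-9_]+)\}/g, (_, name) => env[name] ?? "")
      .replace(/\$([A-Za-z0-9_]+)/g, (_, name) => env[name] ?? "");
  }
  if (Array.isArray(value)) return value.map((v) => interpolate(v, env));
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, interpolate(v, env)])
    );
  }
  return value;
}

const config = {
  GEMINI_API_KEY: "${GEMINI_API_KEY}",
  Providers: [{ name: "openai", api_key: "$OPENAI_API_KEY" }],
};
const resolved = interpolate(config, {
  OPENAI_API_KEY: "sk-test",
  GEMINI_API_KEY: "g-test",
});
console.log(resolved.Providers[0].api_key); // "sk-test"
```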

Here is a comprehensive example:

{
  "APIKEY": "your-secret-key",
  "PROXY_URL": "http://127.0.0.1:7890",
  "LOG": true,
  "API_TIMEOUT_MS": 600000,
  "NON_INTERACTIVE_MODE": false,
  "Providers": [
    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
      "api_key": "sk-xxx",
      "models": [
        "google/gemini-2.5-pro-preview",
        "openai/gpt-5-mini",
        "openai/gpt-5-nano",
        "anthropic/claude-3.7-sonnet:thinking"
      ],
      "transformer": {
        "use": ["openrouter"]
      }
    },
    {
      "name": "deepseek",
      "api_base_url": "https://api.deepseek.com/chat/completions",
      "api_key": "sk-xxx",
      "models": ["deepseek-chat", "deepseek-reasoner"],
      "transformer": {
        "use": ["deepseek"],
        "deepseek-chat": {
          "use": ["tooluse"]
        }
      }
    },
    {
      "name": "ollama",
      "api_base_url": "http://localhost:11434/v1/chat/completions",
      "api_key": "ollama",
      "models": ["qwen2.5-coder:latest"]
    },
    {
      "name": "gemini",
      "api_base_url": "https://generativelanguage.googleapis.com/v1beta/models/",
      "api_key": "sk-xxx",
      "models": ["gemini-2.5-flash", "gemini-2.5-pro"],
      "transformer": {
        "use": ["gemini"]
      }
    },
    {
      "name": "volcengine",
      "api_base_url": "https://ark.cn-beijing.volces.com/api/v3/chat/completions",
      "api_key": "sk-xxx",
      "models": ["deepseek-v3-250324", "deepseek-r1-250528"],
      "transformer": {
        "use": ["deepseek"]
      }
    },
    {
      "name": "modelscope",
      "api_base_url": "https://api-inference.modelscope.cn/v1/chat/completions",
      "api_key": "",
      "models": ["Qwen/Qwen3-Coder-480B-A35B-Instruct", "Qwen/Qwen3-235B-A22B-Thinking-2507"],
      "transformer": {
        "use": [
          [
            "maxtoken",
            {
              "max_tokens": 65536
            }
          ],
          "enhancetool"
        ],
        "Qwen/Qwen3-235B-A22B-Thinking-2507": {
          "use": ["reasoning"]
        }
      }
    },
    {
      "name": "dashscope",
      "api_base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions",
      "api_key": "",
      "models": ["qwen3-coder-plus"],
      "transformer": {
        "use": [
          [
            "maxtoken",
            {
              "max_tokens": 65536
            }
          ],
          "enhancetool"
        ]
      }
    },
    {
      "name": "aihubmix",
      "api_base_url": "https://aihubmix.com/v1/chat/completions",
      "api_key": "sk-",
      "models": [
        "Z/glm-4.5",
        "claude-opus-4-20250514",
        "gemini-2.5-pro"
      ]
    }
  ],
  "Router": {
    "default": "openrouter,openai/gpt-5-mini",
    "woodwind": "openrouter,openai/gpt-5-mini",
    "percussion": "openrouter,openai/gpt-5-mini",
    "brass": "openrouter,openai/gpt-5-nano",
    "strings": "openrouter,x-ai/gpt-5-nano"
  }
}

3. Running Claude Code with the Router

Start Claude Code using the router:

strumento

Note: After modifying the configuration file, you need to restart the service for the changes to take effect:

strumento restart

4. CLI Model Management

For users who prefer terminal-based workflows, you can use the interactive CLI model selector:

strumento model

This command provides an interactive interface to:

  • View current configuration: See all configured models (default, woodwind, percussion, brass, strings)
  • Switch models: Quickly change which model is used for each router type
  • Add new models: Add models to existing providers
  • Create new providers: Set up complete provider configurations including:
    • Provider name and API endpoint
    • API key
    • Available models
    • Transformer configuration with support for OpenRouter

The CLI tool validates all inputs and provides helpful prompts to guide you through the configuration process, making it easy to manage complex setups without editing JSON files manually.

Providers

The Providers array is where you define the different model providers you want to use. Each provider object takes the following fields:

  • name: A unique name for the provider.
  • api_base_url: The full API endpoint for chat completions.
  • api_key: Your API key for the provider.
  • models: A list of model names available from this provider.
  • transformer (optional): Specifies transformers to process requests and responses.

Transformers

Transformers allow you to modify the request and response payloads to ensure compatibility with different provider APIs.

  • Global Transformer: Apply a transformer to all models from a provider. In this example, the openrouter transformer is applied to all models under the openrouter provider.
    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
      "api_key": "sk-xxx",
      "models": [
        "google/gemini-2.5-pro-preview",
        "openai/gpt-5-mini",
        "openai/gpt-5-nano"
      ],
      "transformer": { "use": ["openrouter"] }
    }

Available Built-in Transformers:

  • Anthropic: If you use only the Anthropic transformer, it preserves the original request and response parameters (you can use it to connect directly to an Anthropic endpoint).
  • openrouter: Adapts requests/responses for OpenRouter API. It can also accept a provider routing parameter to specify which underlying providers OpenRouter should use. For more details, refer to the OpenRouter documentation. See an example below:
      "transformer": {
        "use": ["openrouter"],
        "moonshotai/kimi-k2": {
          "use": [
            [
              "openrouter",
              {
                "provider": {
                  "only": ["moonshotai/fp8"]
                }
              }
            ]
          ]
        }
      }

Custom Transformers:

You can also create your own transformers and load them via the transformers field in config.json.

{
  "transformers": [
    {
      "path": "/User/xxx/.strumento/plugins/gemini-cli.js",
      "options": {
        "project": "xxx"
      }
    }
  ]
}

Router

The Router object defines which model to use for different scenarios:

  • default: The default model for general tasks.

  • woodwind: A model for frontend/UI tasks. This is typically a model with recent training data and a strong focus on design-heavy work.

  • percussion: A model for reasoning-heavy tasks: logical structures, backend work, architecture, and system design.

  • brass: A model for handling bugs: debugging, checking code for errors and unintended behaviour, and testing. This can be a smaller but faster model.

  • strings: Used for integrating the other agents' work into a unified, coherent system. Following Spalla, this role should be given a more expensive but still fast model that can handle logic and harder processes efficiently.

  • You can also switch models dynamically in the terminal tab on Partitura with the /model command: /model provider_name,model_name. Example: /model openrouter,x-ai/grok-code-fast-1
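Route values take the form "provider_name,model_name". Since model names can themselves contain slashes (e.g. openai/gpt-5-mini), splitting on the first comma is enough to separate the two parts, as this small sketch shows:

```javascript
// Sketch: split a route string of the form "provider,model" into its parts.
// Only the first comma is significant; the model name may contain "/".
function parseRoute(route) {
  const i = route.indexOf(",");
  if (i === -1) return { provider: route, model: "" }; // malformed route
  return { provider: route.slice(0, i), model: route.slice(i + 1) };
}

console.log(parseRoute("openrouter,openai/gpt-5-mini"));
// { provider: "openrouter", model: "openai/gpt-5-mini" }
```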

Here is an example of a custom-router.js based on custom-router.example.js:

// /User/xxx/.strumento/custom-router.js

/**
 * A custom router function to determine which model to use based on the request.
 *
 * @param {object} req - The request object from Claude Code, containing the request body.
 * @param {object} config - The application's config object.
 * @returns {Promise<string|null>} - A promise that resolves to the "provider,model_name" string, or null to use the default router.
 */
module.exports = async function router(req, config) {
  const userMessage = req.body.messages.find((m) => m.role === "user")?.content;

  if (userMessage && userMessage.includes("explain this code")) {
    // Use a powerful model for code explanation
    return "openrouter,openai/gpt-5-nano";
  }

  // Fallback to the default router configuration
  return null;
};
GitHub Actions

To use Strumento in your GitHub Actions workflows, start the router in a step before running Claude Code, as in the following workflow:

name: Claude Code

on:
  issue_comment:
    types: [created]
  # ... other triggers

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      # ... other conditions
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Prepare Environment
        run: |
          curl -fsSL https://bun.sh/install | bash
          mkdir -p $HOME/.strumento
          cat << 'EOF' > $HOME/.strumento/config.json
          {
            "LOG": true,
            "NON_INTERACTIVE_MODE": true,
            "OPENAI_API_KEY": "${{ secrets.OPENAI_API_KEY }}",
            "OPENAI_BASE_URL": "https://api.deepseek.com",
            "OPENAI_MODEL": "deepseek-chat"
          }
          EOF
        shell: bash

      - name: Start Strumento
        run: |
          nohup ~/.bun/bin/bunx @gabriel-feang/[email protected] start &
        shell: bash

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        env:
          ANTHROPIC_BASE_URL: http://localhost:3456
        with:
          anthropic_api_key: "any-string-is-ok"