
mcp-toolbox

v1.0.0 · 65 downloads

MCP (Model Context Protocol) server toolbox for [Squadbase](https://www.squadbase.dev).

MCP Toolbox

MCP (Model Context Protocol) server toolbox for Squadbase.

Each connector is published as a separate package, so you only install what you need.

Packages

| Package | Description |
|---------|-------------|
| @squadbase/mcp | Launcher CLI |
| @squadbase/mcp-databricks | Databricks connector |
| @squadbase/mcp-redshift | Amazon Redshift connector |
| @squadbase/mcp-athena | Amazon Athena connector |
| @squadbase/mcp-airtable | Airtable connector |

Usage

Direct Execution

npx -y @squadbase/mcp-databricks
npx -y @squadbase/mcp-redshift
npx -y @squadbase/mcp-athena
npx -y @squadbase/mcp-airtable

Via Launcher

npx -y @squadbase/mcp databricks
npx -y @squadbase/mcp redshift
npx -y @squadbase/mcp athena
npx -y @squadbase/mcp airtable

MCP Client Configuration

For MCP clients like Claude Desktop, add to your config:

{
  "mcpServers": {
    "databricks": {
      "command": "npx",
      "args": ["-y", "@squadbase/mcp", "databricks"],
      "env": {
        "DATABRICKS_HOST": "your-workspace.cloud.databricks.com",
        "DATABRICKS_TOKEN": "your-token",
        "DATABRICKS_HTTP_PATH": "/sql/1.0/warehouses/abc123"
      }
    },
    "redshift": {
      "command": "npx",
      "args": ["-y", "@squadbase/mcp", "redshift"],
      "env": {
        "REDSHIFT_AWS_ACCESS_KEY_ID": "your-access-key-id",
        "REDSHIFT_AWS_SECRET_ACCESS_KEY": "your-secret-access-key",
        "REDSHIFT_AWS_REGION": "ap-northeast-1",
        "REDSHIFT_DATABASE": "dev",
        "REDSHIFT_WORKGROUP_NAME": "your-workgroup-name"
      }
    },
    "athena": {
      "command": "npx",
      "args": ["-y", "@squadbase/mcp", "athena"],
      "env": {
        "ATHENA_AWS_ACCESS_KEY_ID": "your-access-key-id",
        "ATHENA_AWS_SECRET_ACCESS_KEY": "your-secret-access-key",
        "ATHENA_AWS_REGION": "ap-northeast-1",
        "ATHENA_WORKGROUP": "primary"
      }
    },
    "airtable": {
      "command": "npx",
      "args": ["-y", "@squadbase/mcp-airtable"],
      "env": {
        "AIRTABLE_BASE_ID": "your-base-id",
        "AIRTABLE_API_KEY": "your-api-key"
      }
    }
  }
}

Environment Variables

Databricks

| Variable | Description | Required |
|----------|-------------|----------|
| DATABRICKS_HOST | Workspace hostname | Yes |
| DATABRICKS_TOKEN | Personal access token | Yes |
| DATABRICKS_HTTP_PATH | SQL warehouse HTTP path | For SQL queries |
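A connector can fail fast when one of the required variables above is missing. A minimal validation helper, assuming Node's `process.env` (hypothetical sketch, not the package's actual code):

```typescript
// Hypothetical helper: fail fast if a required variable such as
// DATABRICKS_HOST or DATABRICKS_TOKEN is missing or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Calling `requireEnv("DATABRICKS_HOST")` at startup surfaces a clear error immediately instead of a confusing connection failure later.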

Redshift (Data API)

The Redshift connector uses the AWS Redshift Data API for its connection; both provisioned clusters and serverless workgroups are supported.

Common (Required)

| Variable | Description |
|----------|-------------|
| REDSHIFT_AWS_ACCESS_KEY_ID | AWS access key ID |
| REDSHIFT_AWS_SECRET_ACCESS_KEY | AWS secret access key |
| REDSHIFT_AWS_REGION | AWS region (e.g., ap-northeast-1) |
| REDSHIFT_DATABASE | Database name |

Provisioned Cluster

| Variable | Description | Required |
|----------|-------------|----------|
| REDSHIFT_CLUSTER_IDENTIFIER | Cluster identifier | Yes |
| REDSHIFT_SECRET_ARN | Secrets Manager ARN for authentication | No (use REDSHIFT_DB_USER if not set) |
| REDSHIFT_DB_USER | Database user name | No (required if REDSHIFT_SECRET_ARN is not set) |

Serverless

| Variable | Description | Required |
|----------|-------------|----------|
| REDSHIFT_WORKGROUP_NAME | Workgroup name | Yes |
| REDSHIFT_SECRET_ARN | Secrets Manager ARN for authentication | No |

Athena

The Athena connector uses the AWS Athena API to run serverless SQL queries against data in S3.

Common (Required)

| Variable | Description |
|----------|-------------|
| ATHENA_AWS_ACCESS_KEY_ID | AWS access key ID |
| ATHENA_AWS_SECRET_ACCESS_KEY | AWS secret access key |
| ATHENA_AWS_REGION | AWS region (e.g., ap-northeast-1) |

Query Execution (One Required)

| Variable | Description |
|----------|-------------|
| ATHENA_WORKGROUP | Athena workgroup name (recommended) |
| ATHENA_OUTPUT_LOCATION | S3 location for query results (e.g., s3://bucket/path/) |

Optional

| Variable | Default | Description |
|----------|---------|-------------|
| ATHENA_CATALOG | AwsDataCatalog | Data catalog name |
| ATHENA_DATABASE | - | Default database name |
| ATHENA_POLL_INTERVAL_MS | 1000 | Polling interval in milliseconds |
| ATHENA_TIMEOUT_MS | 60000 | Query timeout in milliseconds |
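ATHENA_POLL_INTERVAL_MS and ATHENA_TIMEOUT_MS describe a poll-until-done loop: check query status at a fixed interval and give up after a deadline. A sketch of that behavior (illustrative only, not the connector's actual implementation):

```typescript
// Illustrative polling loop: call isDone() every intervalMs until it
// returns true, or throw once timeoutMs has elapsed. Mirrors the
// semantics of ATHENA_POLL_INTERVAL_MS / ATHENA_TIMEOUT_MS.
async function pollUntilDone(
  isDone: () => Promise<boolean>,
  intervalMs = 1000,
  timeoutMs = 60000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!(await isDone())) {
    if (Date.now() >= deadline) {
      throw new Error(`Query did not finish within ${timeoutMs} ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

In the real connector, `isDone` would wrap an Athena query-status check; raising the interval reduces API calls at the cost of slower result delivery.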

Airtable

| Variable | Description | Required |
|----------|-------------|----------|
| AIRTABLE_BASE_ID | Airtable base ID | Yes |
| AIRTABLE_API_KEY | Airtable personal access token | Yes |

Available Tools

Databricks

  • ping - Health check
  • databricks_info - Connection info
  • execute_sql - Execute SQL query
  • list_catalogs - List Unity Catalog catalogs
  • list_schemas - List schemas in a catalog
  • list_tables - List tables in a schema
  • describe_table - Get table details
  • sample_table - Get sample rows

Redshift

  • ping - Health check
  • redshift_info - Connection info
  • execute_sql - Execute SQL query
  • list_databases - List databases
  • list_schemas - List schemas
  • list_tables - List tables
  • describe_table - Get table details
  • sample_table - Get sample rows
  • get_table_stats - Get table statistics

Athena

  • ping - Health check
  • athena_info - Connection info
  • execute_sql - Execute SQL query
  • list_catalogs - List data catalogs
  • list_databases - List databases in a catalog
  • list_tables - List tables in a database
  • describe_table - Get table details including partition keys
  • sample_table - Get sample rows
  • get_table_stats - Get table statistics

Airtable

  • ping - Health check
  • airtable_info - Connection info
  • list_tables - List all tables with schema information
  • describe_table - Get detailed table schema
  • get_records - Get records with filtering and pagination

Security Considerations

Arbitrary SQL Execution

The execute_sql tool executes arbitrary SQL queries against your database. Before deploying:

  • Grant database credentials only the permissions they need (principle of least privilege)
  • Use read-only database users for exploratory use cases
  • Run this MCP server only in trusted environments where the caller (the LLM) is controlled

Identifier Escaping

All schema, table, and catalog names are properly escaped to prevent SQL injection attacks. The tools use:

  • PostgreSQL-style double-quote escaping for Redshift and Athena
  • Backtick escaping for Databricks
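The two quoting styles can be illustrated like this (hypothetical helpers, not the package's actual code):

```typescript
// PostgreSQL-style identifier quoting (Redshift, Athena):
// wrap in double quotes and double any embedded double quotes.
function quotePgIdentifier(name: string): string {
  return '"' + name.replace(/"/g, '""') + '"';
}

// Databricks-style identifier quoting:
// wrap in backticks and double any embedded backticks.
function quoteDatabricksIdentifier(name: string): string {
  return "`" + name.replace(/`/g, "``") + "`";
}
```

Doubling the embedded quote character is what prevents a malicious name from breaking out of the identifier and injecting SQL.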

Development

# Install dependencies
npm install

# Build all packages
npm run build

# Test a server locally
node packages/mcp-databricks/dist/cli.js
node packages/mcp-redshift/dist/cli.js
node packages/mcp-athena/dist/cli.js
node packages/mcp-airtable/dist/cli.js

Adding a New Connector

  1. Create a new package in packages/mcp-{connector-name}/
  2. Implement the MCP server with @modelcontextprotocol/sdk
  3. Add the connector to the launcher in packages/mcp/src/index.ts
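Step 2 might look roughly like this, assuming the high-level McpServer API from @modelcontextprotocol/sdk (a sketch only; check the SDK documentation for the exact signatures in the version you install):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Minimal server exposing a health-check tool, mirroring the `ping`
// tool the existing connectors provide.
const server = new McpServer({ name: "mcp-example", version: "0.0.0" });

server.tool("ping", async () => ({
  content: [{ type: "text", text: "pong" }],
}));

// Serve over stdio so the launcher (or an MCP client) can spawn it.
await server.connect(new StdioServerTransport());
```

A real connector would register its tools (execute_sql, list_tables, and so on) the same way, reading its credentials from environment variables at startup.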

Publishing

This project uses Changesets for version management and publishing.

Understanding Changesets

Changesets tracks changes across packages in a monorepo. The workflow is:

  1. Create changeset - Record what changed and how it affects versions
  2. Version - Apply changesets to update package.json versions and CHANGELOG.md
  3. Publish - Build and publish packages to the registry

Authentication (Required before publishing)

This project publishes to npmjs.com. Before running npm run release, you must authenticate:

npm login

If your account has 2FA enabled with a security key only, publish with:

npm publish -w @squadbase/mcp-{name} --auth-type=web

Adding a New Package

When adding a new package to the monorepo:

  1. Create the package directory structure under packages/mcp-{name}/

  2. Set initial version to 0.0.0 in package.json:

    {
      "name": "@squadbase/mcp-{name}",
      "version": "0.0.0"
    }
  3. Create a changeset for the initial release:

    npm run changeset
    • Select the new package
    • Choose minor for initial release (0.0.0 -> 0.1.0) or patch for 0.0.1
    • Write a description: "Initial release of @squadbase/mcp-{name}"
  4. Apply the version:

    npm run version

    This updates package.json version and creates/updates CHANGELOG.md

  5. Commit the changes and publish:

    git add .
    git commit -m "chore: release @squadbase/mcp-{name}"
    npm run release

Updating an Existing Package

When making changes to existing packages:

  1. Make your code changes

  2. Create a changeset describing the changes:

    npm run changeset
    • Select affected package(s)
    • Choose version bump type:
      • patch (0.1.0 -> 0.1.1): Bug fixes, documentation
      • minor (0.1.0 -> 0.2.0): New features, non-breaking changes
      • major (0.1.0 -> 1.0.0): Breaking changes
    • Write a clear description of changes
  3. Commit your changes with the changeset file:

    git add .
    git commit -m "feat: add new feature to mcp-{name}"
  4. When ready to release, apply versions and publish:

    npm run version    # Updates versions and CHANGELOGs
    git add .
    git commit -m "chore: version packages"
    npm run release    # Builds and publishes

Version Bump Guidelines

| Change Type | Bump | Example |
|-------------|------|---------|
| Bug fix | patch | Fix SQL escaping issue |
| New tool/feature | minor | Add describe_column tool |
| New environment variable | minor | Add timeout configuration |
| Breaking API change | major | Rename tool or change parameters |
| Initial release | minor | New package at 0.1.0 |
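The bump arithmetic behind the table follows standard semver rules. A toy illustration (hypothetical helper, not part of the actual tooling — Changesets handles this for you):

```typescript
// Compute the next version string for a given semver bump type:
// patch increments the last digit, minor resets patch, major resets both.
function bumpVersion(version: string, bump: "patch" | "minor" | "major"): string {
  const [major, minor, patch] = version.split(".").map(Number);
  if (bump === "patch") return `${major}.${minor}.${patch + 1}`;
  if (bump === "minor") return `${major}.${minor + 1}.0`;
  return `${major + 1}.0.0`;
}
```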

Troubleshooting

CHANGELOG.md out of sync with package.json:

  • The package.json version is the source of truth
  • Run npm run version to regenerate CHANGELOG from pending changesets

Forgot to create changeset:

  • You can create changesets retroactively before running npm run version
  • The changeset just needs to exist before versioning

Multiple changes before release:

  • Create multiple changeset files (one per logical change)
  • They will be combined when running npm run version

License

MIT