
@barivia/barsom-mcp

v0.7.3

Published

barSOM MCP proxy — connect any MCP client to the barSOM cloud API for Self-Organizing Map analytics

Downloads

4,350

Readme

@barivia/barsom-mcp

MCP proxy for the Barivia Analytics Engine -- connects any stdio MCP client (Cursor, Claude Desktop, etc.) to the barSOM cloud API. It implements a Progressive Disclosure architecture, exposing a guide_barsom_workflow top-level SOP tool and comprehensive jobs (train_map, train_siom_map, train_floop_chain, status/compare), results, project, and inference capabilities, following 2026 enterprise MCP best practices.

Installation

No install step needed. MCP clients run it via npx:

{
  "mcpServers": {
    "analytics-engine": {
      "command": "npx",
      "args": ["-y", "@barivia/barsom-mcp"],
      "env": {
        "BARIVIA_API_KEY": "bv_your_key",
        "BARIVIA_API_URL": "https://api.barivia.se"
      }
    }
  }
}

This is the standard pattern for MCP servers distributed as npm packages (same as firecrawl-mcp, @paypal/mcp, etc.).

Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| BARIVIA_API_KEY | Yes | -- | API key (starts with bv_) |
| BARIVIA_API_URL | No | https://api.barivia.se | API base URL |
| BARIVIA_WORKSPACE_ROOT | No | process.cwd() or PWD | Directory for relative file_path and save_to_disk. In Cursor MCP, process.cwd() is often the MCP install dir; add BARIVIA_WORKSPACE_ROOT to your MCP config env with your project path (e.g. /home/user/myproject). Absolute paths and file:// URIs work without it. |

Legacy BARSOM_API_KEY / BARSOM_API_URL / BARSOM_WORKSPACE_ROOT are also accepted as fallbacks.
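
The fallback order above can be sketched as follows. `resolveEnv` is a hypothetical helper for illustration, not an export of the published package:

```typescript
// Hypothetical sketch of the documented fallback: prefer BARIVIA_*
// variables, fall back to legacy BARSOM_* ones, then to a default
// where one exists.
function resolveEnv(env: Record<string, string | undefined>) {
  const apiKey = env.BARIVIA_API_KEY ?? env.BARSOM_API_KEY;
  if (!apiKey) {
    throw new Error("BARIVIA_API_KEY is required (legacy: BARSOM_API_KEY)");
  }
  return {
    apiKey,
    apiUrl: env.BARIVIA_API_URL ?? env.BARSOM_API_URL ?? "https://api.barivia.se",
    workspaceRoot:
      env.BARIVIA_WORKSPACE_ROOT ?? env.BARSOM_WORKSPACE_ROOT ?? env.PWD ?? process.cwd(),
  };
}
```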

Tools (13 + MCP App)

All multi-action tools follow the datasets pattern: a required action enum routes to the correct operation.
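
A minimal sketch of that routing pattern, with stand-in handler bodies (the return values are illustrative, not the package's internals):

```typescript
// Illustrative action-enum router: one tool, one required `action`
// field that selects the operation.
type DatasetsAction = "upload" | "preview" | "list" | "subset" | "add_expression" | "delete";

function datasets(args: { action: DatasetsAction; [key: string]: unknown }): string {
  switch (args.action) {
    case "upload":
      return "uploaded"; // would return a dataset_id
    case "preview":
      return "preview"; // would return column stats and hints
    case "list":
      return "list";
    case "subset":
      return "subset";
    case "add_expression":
      return "add_expression";
    case "delete":
      return "deleted";
  }
}
```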

guide_barsom_workflow

SOP dispatch. Call this first if unsure of the workflow. No parameters.

datasets(action)

| Action | Use when |
|--------|----------|
| upload | Adding a new CSV — returns dataset_id |
| preview | Before jobs(action=train_map), jobs(action=train_siom_map), or jobs(action=train_floop_chain) — inspect columns, stats, cyclic/datetime hints |
| list | Finding dataset IDs |
| subset | Creating a filtered/sliced copy (row_range, filter conditions) |
| add_expression | Add a derived column from an expression (same as project(expression) without project_onto_job) |
| delete | Removing a dataset |

jobs(action)

| Action | Use when |
|--------|----------|
| train_map | Submitting a new map training job — full control: model type, grid, epochs, cyclic/temporal features, transforms. Returns job_id; poll with jobs(action=status, job_id=...). |
| train_siom_map | Submitting a self-interacting map training job — same grid-map workflow plus SIOM controls such as gamma, siom_decay, and penalty selection. |
| train_floop_chain | Submitting a FLooP-SIOM training job — use when you want a growing chain or free-topology manifold instead of a fixed 2D grid. |
| status | Polling after any async job — every 10–15s |
| list | Finding job IDs, checking pipeline state |
| compare | Picking the best run from a set (QE, TE, silhouette table) |
| cancel | Stopping a running job |
| delete | Permanently removing a job + its S3 files |
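
The status-polling loop can be sketched like this. `getStatus` stands in for a jobs(action=status, job_id=...) call:

```typescript
// Hypothetical polling sketch: call getStatus until a terminal state.
// A real client would sleep 10–15 s between attempts; the delay is
// omitted here so the control flow stays visible.
type JobState = "queued" | "running" | "completed" | "failed";

function pollJob(getStatus: () => JobState, maxAttempts = 60): JobState {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const state = getStatus();
    if (state === "completed" || state === "failed") return state;
  }
  throw new Error("job did not reach a terminal state within the polling budget");
}
```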

results(action)

| Action | Sync | Use when |
|--------|------|----------|
| get | instant | First look after training — combined view + quality metrics |
| export | instant | Learning curve (training_log), weight matrix (weights), node stats (nodes) |
| download | instant | Saving figures to a local folder |
| recolor | async | Changing colormap or output format without retraining |
| transition_flow | async | Temporal state transition analysis on time-ordered data |

All visualizations and metrics come from results(action=get). Grid-map jobs return combined map figures, component planes, and quality metrics; FLooP-SIOM jobs return chain-structure figures, occupation/profile views, and chain metrics. Use figures=all or export_type=... for more. There is no separate analyze tool.

project(action)

| Action | Use when |
|--------|----------|
| expression | Computing a derived variable from a formula (revenue / cost, diff(temp), rolling stats) — add to dataset or project onto the map |
| values | Projecting a pre-computed external array (anomaly scores, labels from another system) onto the map |

inference(action)

All actions use a frozen trained map — no retraining. All are async.

| Action | Output | Timing |
|--------|--------|--------|
| predict | predictions.csv: per-row bmu_x/y, cluster_id, quantization_error, potential_anomaly (QE > 95th pct); summary includes qe_p95 | 5–120s |
| enrich | enriched.csv: training data + bmu_x/y/node_index/cluster_id | 5–60s |
| compare | density-diff heatmap + top gained/lost nodes — drift, A/B, cohort | 30–120s |
| report | comprehensive PDF: metrics, views, component grid, cluster table | 30–180s |
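
The potential_anomaly rule noted for predict (quantization error above the 95th percentile) can be sketched as follows. The exact percentile method the service uses is an assumption:

```typescript
// Illustrative version of the documented anomaly rule: a row is a
// potential anomaly when its quantization error exceeds the 95th
// percentile (qe_p95) over all rows. Nearest-rank percentile is an
// assumption, not the service's documented method.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function flagAnomalies(qe: number[]): { qeP95: number; flags: boolean[] } {
  const qeP95 = percentile(qe, 95);
  return { qeP95, flags: qe.map((q) => q > qeP95) };
}
```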

account(action)

| Action | Use when |
|--------|----------|
| status | Before large jobs — plan tier, GPU availability, queue depth, credit balance, training time estimates |
| request_compute | Upgrading to cloud burst. Leave tier blank to list options. |
| compute_status | Checking active lease time remaining |
| release_compute | Manually stopping a lease to stop billing |
| history | Viewing recent compute usage and spend |
| add_funds | Getting instructions to add credits |

explore_map (MCP App)

Interactive inline map explorer — clickable nodes, feature toggles, export controls.

send_feedback

Submit feedback or feature requests (max 1400 characters, ~190 words).
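
A client-side guard for that limit might look like this (hypothetical helper, not part of the package):

```typescript
// Hypothetical pre-flight check for send_feedback's documented
// 1400-character limit.
const FEEDBACK_MAX_CHARS = 1400;

function checkFeedback(text: string): { ok: boolean; length: number } {
  return { ok: text.length <= FEEDBACK_MAX_CHARS, length: text.length };
}
```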

Tool Design Guidelines

When adding or refining tools, follow MCP best practices:

  • Single responsibility — One clear purpose per tool; avoid kitchen-sink tools
  • Specific, actionable descriptions — State purpose, constraints, and side effects; include usage guidance and follow-up steps
  • Explicit parameter descriptions — Each parameter should describe format, constraints, and when to use it
  • Bounded capability — Focused tools with specific contracts; prefer narrow, testable actions over broad ones

Data preparation

To train on a subset of your data (e.g. the first 2000 rows, or rows where region=Europe) without re-uploading, use datasets(action=subset) with row_range and/or filter to create a new dataset, then train on the new dataset_id with jobs(action=train_map), jobs(action=train_siom_map), or jobs(action=train_floop_chain). For a one-off slice, pass row_range directly in the training job params instead.
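
As a sketch, the subset-then-train flow might assemble tool arguments like this. The field names follow the README (row_range, filter, dataset_id); the exact schemas are assumptions:

```typescript
// Illustrative argument builders for the subset-then-train flow.
// Exact parameter schemas are assumptions based on the names above.
function subsetArgs(datasetId: string, opts: { rowRange?: [number, number]; filter?: string }) {
  return { action: "subset", dataset_id: datasetId, row_range: opts.rowRange, filter: opts.filter };
}

function trainArgs(datasetId: string) {
  return { action: "train_map", dataset_id: datasetId };
}
```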

How It Works

The proxy implements the MCP stdio transport locally and translates tool calls into REST API requests to the Barivia backend. Results are returned as rich MCP content with text summaries, inline base64 images, and resource links.

MCP Client (Cursor/Claude) ←stdio→ @barivia/barsom-mcp ←HTTPS→ api.barivia.se
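
Conceptually, each tool call becomes one HTTPS request to the backend. A hypothetical mapping, where the endpoint paths are illustrative rather than the real API surface:

```typescript
// Hypothetical sketch of the stdio-to-REST translation: a tool call
// (name + arguments) is mapped to an HTTP request description.
// Endpoint paths are illustrative only.
interface HttpRequest {
  method: "GET" | "POST";
  url: string;
  body?: unknown;
}

function toolCallToRequest(
  baseUrl: string,
  tool: string,
  args: Record<string, unknown>,
): HttpRequest {
  const action = typeof args.action === "string" ? args.action : "default";
  return { method: "POST", url: `${baseUrl}/mcp/${tool}/${action}`, body: args };
}
```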

Development

cd apps/mcp-proxy
npm install
npm run dev      # Run with tsx (hot reload)
npm run build    # Compile to dist/

For local development against a local API stack:

BARIVIA_API_URL=http://localhost:8080 BARIVIA_API_KEY=bv_test_key npm run dev

Publishing

Published to npm via GitHub Actions on tag mcp-proxy-v*:

git tag mcp-proxy-v0.1.0
git push origin mcp-proxy-v0.1.0

Requires NPM_TOKEN secret in GitHub repository settings.

Checking local vs published

From the platform root, compare the current build to the published npm package (same version in package.json):

cd barivia-platform
bash scripts/check-mcp-proxy-publish.sh

Exit 0 = local matches published. Exit 1 = local differs (bump version and publish to update npx). Use VERBOSE=1 for a full file diff.