mason-context

v0.3.6

Context engineering CLI & MCP server — generates intelligent CLAUDE.md files through structured codebase analysis

Mason – the context builder for LLMs 👷

Mason gives LLMs a persistent map of your codebase so they stop exploring from scratch every session.

The problem: Every time an LLM starts a new conversation about your code, it greps, reads files, and pieces together the architecture — burning tokens on context it already understood yesterday. On a 164-file project, answering "what features does this app have?" requires reading 8+ files across multiple tool calls.

Mason's fix: A concept map that persists across sessions. One tool call returns a feature-to-file lookup table — the LLM knows exactly where to look, without exploring.

Measured result (deepeval, Claude Sonnet, 164-file KMP project):

| Question | With Mason | Without Mason | Token saving |
|---|---|---|---|
| List all features | 10,258 tok | 31,346 tok | 67% |
| Trace data flow | 12,010 tok | 15,258 tok | 21% |
| Compare platforms | 10,897 tok | 19,353 tok | 44% |
| Onboarding flow | 10,271 tok | 11,432 tok | 10% |
| Average | | | 36% |

Same answer quality (0.9/1.0 on all tests, both paths). Reproduce: bench/.

Quick start

claude mcp add mason --scope user -- npx -p mason-context mason-mcp

Restart Claude Code, then ask: "use mason to analyze this project and create a snapshot."

That's it — Mason will analyze your codebase and create a concept map. Next session, it loads the map instead of re-exploring everything.

How it works

Concept map

Mason's core feature. It persists a feature-to-file map in .mason/snapshot.json that survives across conversations. When the LLM needs to understand your project, it reads this map instead of grepping through your entire codebase:

{
  "features": {
    "home screen": {
      "files": ["HomeScreen.kt", "HomeViewModel.kt", "GetWeatherDataUseCase.kt"]
    }
  },
  "flows": {
    "weather fetch": {
      "chain": ["HomeViewModel.kt", "WeatherRepositoryImpl.kt", "WeatherServiceImpl.kt"]
    }
  }
}

The map is generated by the LLM itself — Mason provides the analysis tools, and the LLM interprets your code to decide what the features and flows are. This means the map captures architectural understanding, not just file listings.

Create one by asking your AI assistant to "create a mason snapshot", or via CLI:

mason set-llm gemini          # configure a provider (no API key needed)
mason snapshot ~/my-project   # generate concept map
mason snapshot --install-hook # auto-update on every commit
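As a sketch of how a client might consume the persisted map, the snippet below parses the snapshot shape shown above and resolves a feature name to its files. The `Snapshot` type and `filesForFeature` helper are illustrative, not part of Mason's published API:

```typescript
// Shape of .mason/snapshot.json as shown above (illustrative type).
interface Snapshot {
  features: Record<string, { files: string[] }>;
  flows: Record<string, { chain: string[] }>;
}

// Look up the files behind a named feature, case-insensitively.
function filesForFeature(snapshot: Snapshot, feature: string): string[] {
  const key = Object.keys(snapshot.features).find(
    (k) => k.toLowerCase() === feature.toLowerCase(),
  );
  return key ? snapshot.features[key].files : [];
}

// In practice the map would be loaded once per session, e.g.:
//   const snap = JSON.parse(readFileSync(".mason/snapshot.json", "utf8"));
//   filesForFeature(snap, "home screen");
```

Because the file holds only names and paths, a lookup like this replaces an entire grep-and-read exploration pass with a single small JSON read.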

Change impact analysis

Before editing a file, Mason can tell you what else might be affected. It combines three signals that would each require multiple tool calls to gather manually:

  • Co-change history — files that historically change together in git commits
  • References — files that import or mention the target by name
  • Related tests — test files paired to the target by naming convention

mason impact WeatherRepository.kt -d ~/my-project

Also available as the get_impact MCP tool — ask your assistant "what would be affected if I changed WeatherRepository?"
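The co-change signal reduces to counting how often other files appear in the same commits as the target. A minimal sketch, assuming commits have already been parsed into per-commit file lists (e.g. from `git log --name-only`); the function names are hypothetical:

```typescript
// Each inner array is the set of files touched by one commit
// (e.g. parsed from `git log --name-only --pretty=format:`).
function coChangeCounts(commits: string[][], target: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const files of commits) {
    if (!files.includes(target)) continue;
    for (const f of files) {
      if (f === target) continue;
      counts.set(f, (counts.get(f) ?? 0) + 1);
    }
  }
  return counts;
}

// Files that most often ship together with the target, strongest first.
function topCoChanges(commits: string[][], target: string, n = 5): string[] {
  return [...coChangeCounts(commits, target).entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([file]) => file);
}
```

A file that appears in most of the target's commits is a strong candidate for review before an edit, even if nothing imports it directly.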

Git history analysis

Mason aggregates hundreds of commits into actionable stats: which files change most often (hot files you should be careful with), which directories haven't been touched in months (potentially stale code), and what commit conventions the team follows. This is the kind of analysis that would take dozens of git log calls to compute manually.

mason analyze ~/my-project
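The hot-file statistic is a frequency count over the same commit data. A sketch under the assumption that commits are already parsed into file lists; `hotFiles` is an illustrative helper, not Mason's actual implementation:

```typescript
// Count how many commits touched each file; return the most-changed ones
// as [file, commitCount] pairs, hottest first.
function hotFiles(commits: string[][], topN = 10): Array<[string, number]> {
  const freq = new Map<string, number>();
  for (const files of commits) {
    // De-duplicate within a commit so a file counts once per commit.
    for (const f of new Set(files)) {
      freq.set(f, (freq.get(f) ?? 0) + 1);
    }
  }
  return [...freq.entries()].sort((a, b) => b[1] - a[1]).slice(0, topN);
}
```

Inverting the same counts (files with zero recent commits) gives the stale-directory signal mentioned above.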

MCP tools

Mason exposes 6 tools via the Model Context Protocol. Any MCP-compatible client (Claude Code, Cursor, etc.) can use them:

| Tool | What it does |
|---|---|
| get_snapshot | Load the concept map — maps features/flows to files |
| save_snapshot | Persist the concept map for future sessions |
| get_impact | Change impact: co-change history, references, related tests |
| analyze_project | Git history: commit patterns, hot files, stale dirs |
| full_analysis | All-in-one first visit: git stats + structure + code samples + test map |
| get_code_samples | Smart file previews selected by architectural role |

CLI usage

Mason also works as a standalone CLI for generating CLAUDE.md files and running analysis without an MCP client. Configure an LLM provider once, then use any command:

mason set-llm claude|gemini|ollama|openai  # configure provider
mason generate                # analyze codebase + LLM -> CLAUDE.md
mason analyze                 # git stats only (no LLM needed)
mason impact File.kt          # change impact analysis
mason snapshot                # create/update concept map

Most providers work without an API key — claude, gemini, and ollama all use their respective CLIs directly.

Security

What the snapshot contains: Feature names, relative file paths, and flow descriptions. No source code, secrets, or business logic.

What it doesn't touch: Mason respects .gitignore (via git ls-files) and has a deny-list that blocks .env, .pem, .key, credentials, and other sensitive files from being sampled. Path traversal protection ensures all file access stays within the project root.
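The two guards described above can be sketched as follows. The deny-list patterns here are illustrative (mirroring the file kinds the README names), and the helper names are hypothetical:

```typescript
import * as path from "node:path";

// Illustrative deny-list covering the sensitive file kinds mentioned above.
const DENY_PATTERNS = [/\.env($|\.)/, /\.pem$/, /\.key$/, /credentials/i];

// Block sensitive files from being sampled, by basename.
function isDenied(file: string): boolean {
  const base = path.basename(file);
  return DENY_PATTERNS.some((re) => re.test(base));
}

// Path traversal guard: resolve the candidate against the project root
// and reject anything whose resolved path escapes it.
function withinRoot(root: string, candidate: string): boolean {
  const resolved = path.resolve(root, candidate);
  const rel = path.relative(path.resolve(root), resolved);
  return rel !== "" && !rel.startsWith("..") && !path.isAbsolute(rel);
}
```

Checking the resolved path rather than the raw string is what defeats `../`-style inputs and absolute-path injection.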

LLM data flow: Generating a snapshot via CLI sends sampled file contents to your configured LLM provider — the same way any AI coding assistant reads your code. Use ollama for fully local generation. The MCP server tools (get_snapshot, get_impact, etc.) only read local files.

Language support

Mason is completely language-agnostic. It uses file naming patterns and git history rather than language-specific parsing, so it works with any project that has source files and a git repository — TypeScript, Kotlin, Python, Go, Rust, Swift, Java, C#, Dart, and more.
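For instance, the related-tests signal needs no parser at all: naming conventions alone generate candidate test files for any ecosystem. A sketch (the exact conventions Mason checks may differ):

```typescript
// Generate common test-file names for a source file across ecosystems:
// FooTest.kt / FooSpec.kt (JVM), foo.test.ts / foo.spec.ts (JS/TS),
// test_foo.py / foo_test.go (Python/Go). A caller would intersect this
// list with the files actually present in the repository.
function candidateTestNames(file: string): string[] {
  const dot = file.lastIndexOf(".");
  const stem = dot === -1 ? file : file.slice(0, dot);
  const ext = dot === -1 ? "" : file.slice(dot);
  return [
    `${stem}Test${ext}`,
    `${stem}Spec${ext}`,
    `${stem}.test${ext}`,
    `${stem}.spec${ext}`,
    `test_${stem}${ext}`,
    `${stem}_test${ext}`,
  ];
}
```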

License

MIT