
@getmikk/ai-context v1.7.1 · 913 downloads

@getmikk/ai-context

Token-budgeted, graph-traced AI context — the difference between an LLM that guesses your architecture and one that actually knows it.

License: Apache-2.0

@getmikk/ai-context solves the AI context window problem at the architectural level. Instead of dumping your entire codebase into a prompt and hoping the LLM figures it out, it walks the dependency graph from task-relevant seed functions, scores every reachable function by relevance, and packs the highest-signal functions into a token budget — giving your AI exactly what it needs and nothing it doesn't.

It also generates claude.md and AGENTS.md — tiered architecture summaries that let AI agents understand your entire project structure from a single file.

Part of Mikk — the codebase nervous system for AI-assisted development.


Installation

npm install @getmikk/ai-context
# or
bun add @getmikk/ai-context

Peer dependencies: @getmikk/core, @getmikk/intent-engine


Quick Start

Context Queries

import { ContextBuilder, getProvider } from '@getmikk/ai-context'
import { ContractReader, LockReader } from '@getmikk/core'

const contract = await new ContractReader().read('./mikk.json')
const lock = await new LockReader().read('./mikk.lock.json')

const builder = new ContextBuilder(contract, lock)
const context = builder.build({
  task: 'Add rate limiting to the authentication endpoints',
  tokenBudget: 8000,
  maxHops: 3,
  includeCallGraph: true,
})

// Format for a specific AI provider
const provider = getProvider('claude')
const formatted = provider.formatContext(context)
// Send `formatted` as part of your AI prompt

Claude.md / AGENTS.md Generation

import { writeFile } from 'node:fs/promises'
import { ClaudeMdGenerator } from '@getmikk/ai-context'

// contract and lock loaded as in the Quick Start above
const generator = new ClaudeMdGenerator(contract, lock, /* tokenBudget */ 4000)
const markdown = generator.generate()

// Write to project root
await writeFile('./claude.md', markdown)
await writeFile('./AGENTS.md', markdown)

How Context Building Works

The ContextBuilder uses a 6-step algorithm:

1. Resolve Seeds
   └─ Parse task → extract keywords → match against lock file functions/modules

2. BFS Proximity Walk
   └─ Walk the call graph outward from seed functions (up to maxHops)

3. Score Functions
   └─ Each function gets a relevance score:
      • Proximity score (closer to seed = higher)
      • Keyword match score (task keywords in function name)
      • Entry-point bonus (exported functions score higher)

4. Sort by Score
   └─ Descending relevance

5. Fill Token Budget
   └─ Greedily add functions until budget is exhausted
      (each function's token cost ≈ line count × 4)

6. Group by Module
   └─ Organize selected functions into module-level context
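
The selection steps above (BFS proximity walk, proximity scoring, greedy budget fill) can be sketched in standalone TypeScript. All names here are hypothetical illustrations of the algorithm as described, not the package's actual internals:

```typescript
type Fn = { name: string; startLine: number; endLine: number; calls: string[] }

function buildContext(
  fns: Map<string, Fn>,
  seeds: string[],
  maxHops: number,
  tokenBudget: number,
): string[] {
  // Step 2: BFS proximity walk — record the hop distance to each reachable function
  const hops = new Map<string, number>()
  let frontier = seeds.filter(s => fns.has(s))
  frontier.forEach(s => hops.set(s, 0))
  for (let hop = 1; hop <= maxHops; hop++) {
    const next: string[] = []
    for (const name of frontier) {
      for (const callee of fns.get(name)!.calls) {
        if (fns.has(callee) && !hops.has(callee)) {
          hops.set(callee, hop)
          next.push(callee)
        }
      }
    }
    frontier = next
  }

  // Step 3 (proximity component only): closer to a seed = higher score
  const scored = [...hops.entries()].map(([name, hop]) => ({
    name,
    score: maxHops + 1 - hop,
  }))

  // Steps 4–5: sort by descending score, then greedily fill the token budget
  // at the README's estimate of ~4 tokens per line
  scored.sort((a, b) => b.score - a.score)
  const selected: string[] = []
  let used = 0
  for (const { name } of scored) {
    const fn = fns.get(name)!
    const cost = (fn.endLine - fn.startLine + 1) * 4
    if (used + cost <= tokenBudget) {
      selected.push(name)
      used += cost
    }
  }
  return selected
}
```

The real builder also mixes in keyword-match and entry-point scores; this sketch isolates the traversal and budget mechanics.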

API Reference

ContextBuilder

The main entry point for building task-specific context.

import { ContextBuilder } from '@getmikk/ai-context'

const builder = new ContextBuilder(contract, lock)
const context = builder.build(query)

ContextQuery:

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| task | string | — | Natural-language description of the task |
| focusFiles | string[] | [] | Specific files to prioritize |
| focusModules | string[] | [] | Specific modules to prioritize |
| maxFunctions | number | 50 | Maximum functions to include |
| maxHops | number | 3 | BFS depth limit from seed functions |
| tokenBudget | number | 8000 | Maximum estimated tokens |
| includeCallGraph | boolean | true | Include calls[] and calledBy[] per function |
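
As a sketch of how these defaults compose with a partial query (the ContextQuery shape is taken from this README; the withDefaults helper is hypothetical, not a package export):

```typescript
type ContextQuery = {
  task: string
  focusFiles?: string[]
  focusModules?: string[]
  maxFunctions?: number
  maxHops?: number
  tokenBudget?: number
  includeCallGraph?: boolean
}

// Fill in the documented defaults for any field the caller omitted
function withDefaults(q: ContextQuery) {
  return {
    focusFiles: [] as string[],
    focusModules: [] as string[],
    maxFunctions: 50,
    maxHops: 3,
    tokenBudget: 8000,
    includeCallGraph: true,
    ...q,
  }
}
```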

AIContext (returned):

type AIContext = {
  project: string              // Project name from contract
  modules: ContextModule[]     // Relevant modules with their functions
  constraints: string[]        // Active architectural constraints
  decisions: string[]          // Relevant ADRs
  prompt: string               // Original task
  meta: {
    seedCount: number          // How many seed functions were found
    totalFunctionsConsidered: number
    selectedFunctions: number  // Functions that fit in the budget
    estimatedTokens: number    // Approximate token count
    keywords: string[]         // Extracted keywords from the task
  }
}
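
The meta block makes it easy to sanity-check a built context before sending it to a model. A hypothetical helper (field names from the AIContext type above; the function itself is not part of the package):

```typescript
type Meta = { seedCount: number; selectedFunctions: number; estimatedTokens: number }

// Return human-readable warnings for a context that is unlikely to be useful
function contextWarnings(meta: Meta, tokenBudget: number): string[] {
  const warnings: string[] = []
  if (meta.seedCount === 0) warnings.push('no seed functions matched the task')
  if (meta.selectedFunctions === 0) warnings.push('budget too small for any function')
  if (meta.estimatedTokens > tokenBudget) warnings.push('estimate exceeds budget')
  return warnings
}
```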

ContextModule:

type ContextModule = {
  id: string
  name: string
  description?: string
  intent?: string              // Module's purpose from mikk.json
  functions: ContextFunction[]
  files: string[]
}

ContextFunction:

type ContextFunction = {
  name: string
  file: string
  startLine: number
  endLine: number
  calls: string[]              // Functions this one calls
  calledBy: string[]           // Functions that call this one
  purpose?: string             // Inferred purpose
  errorHandling?: string       // Error patterns detected
  edgeCases?: string           // Edge cases noted
}

ClaudeMdGenerator

Generates tiered markdown documentation files for AI agents.

import { ClaudeMdGenerator } from '@getmikk/ai-context'

const generator = new ClaudeMdGenerator(contract, lock, tokenBudget)
const markdown = generator.generate()

Tiered output structure:

| Tier | Content | Budget |
|------|---------|--------|
| Tier 1 | Project summary — name, module count, total functions, architecture overview | ~500 tokens |
| Tier 2 | Module details — each module's intent, public API, file list, key functions | ~300 tokens/module |
| Tier 3 | Constraints & decisions — all architectural rules and ADRs | Remaining budget |
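
One plausible reading of this budget split, as a standalone sketch (the tierBudgets helper is hypothetical, not how the generator necessarily allocates internally):

```typescript
// Allocate a total token budget across the three tiers described above:
// Tier 1 capped at ~500, Tier 2 at ~300 per module, Tier 3 gets the rest.
function tierBudgets(totalBudget: number, moduleCount: number) {
  const tier1 = Math.min(500, totalBudget)
  const tier2 = Math.min(300 * moduleCount, totalBudget - tier1)
  const tier3 = Math.max(0, totalBudget - tier1 - tier2)
  return { tier1, tier2, tier3 }
}
```

With the Quick Start's budget of 4000 and five modules, Tier 3 is left roughly half the budget for constraints and ADRs.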

All data is sourced from the AST-derived lock file — no hallucinated descriptions. Module intents come from mikk.json; function lists and call graphs come from mikk.lock.json.

Example output:

# Project: my-app

## Architecture Overview
- **Modules:** 5
- **Total Functions:** 87
- **Total Files:** 23

## Modules

### auth
**Intent:** Handle user authentication and session management
**Public API:** `login`, `logout`, `validateToken`, `refreshSession`
**Files:** auth/login.ts, auth/session.ts, auth/middleware.ts
**Key Functions:**
- `validateToken` (auth/middleware.ts:15-42) → calls: `decodeJWT`, `checkExpiry`
- `login` (auth/login.ts:8-35) → calls: `validateCredentials`, `createSession`

### payments
...

## Constraints
- auth: no-import from payments
- payments: must-use stripe-sdk

## Decisions
- ADR-001: Use JWT for stateless auth (2024-01-15)

Providers

Providers format the AIContext object into a string optimized for a specific AI model.

import { getProvider, ClaudeProvider, GenericProvider } from '@getmikk/ai-context'

// Factory
const provider = getProvider('claude')   // or 'generic'

// Format context for the provider
const formatted = provider.formatContext(context)

ClaudeProvider

Formats with structured XML tags optimized for Anthropic Claude:

<architecture>
  <module name="auth" intent="Handle authentication">
    <function name="validateToken" file="auth/middleware.ts" lines="15-42">
      <calls>decodeJWT, checkExpiry</calls>
      <calledBy>authMiddleware</calledBy>
    </function>
  </module>
</architecture>
<constraints>
  auth: no-import from payments
</constraints>

GenericProvider

Clean plain-text format for any AI model:

## Module: auth
Intent: Handle authentication

### validateToken (auth/middleware.ts:15-42)
Calls: decodeJWT, checkExpiry
Called by: authMiddleware
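
If neither built-in format fits your model, a custom formatter is straightforward to write against the context object. A hypothetical example (not part of the package; this README only shows formatContext, so the Ctx shape below mirrors a slice of the AIContext type above):

```typescript
type Ctx = {
  project: string
  modules: { name: string; functions: { name: string; file: string }[] }[]
}

const jsonLinesProvider = {
  // One JSON object per function — useful for models that prefer structured input
  formatContext(context: Ctx): string {
    return context.modules
      .flatMap(m =>
        m.functions.map(f => JSON.stringify({ module: m.name, fn: f.name, file: f.file })),
      )
      .join('\n')
  },
}
```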

Usage Examples

"What breaks if I change this file?"

const context = builder.build({
  task: `Impact analysis for changes to src/auth/login.ts`,
  focusFiles: ['src/auth/login.ts'],
  maxHops: 5,
  tokenBudget: 4000,
})
// Returns all functions transitively affected by login.ts

"Get context for adding a feature"

const context = builder.build({
  task: 'Add two-factor authentication to the login flow',
  focusModules: ['auth'],
  tokenBudget: 12000,
  includeCallGraph: true,
})
// Returns auth module functions + related cross-module calls

"Focused context for a specific module"

const context = builder.build({
  task: 'Refactor the payment processing pipeline',
  focusModules: ['payments'],
  maxFunctions: 30,
  tokenBudget: 6000,
})

Types

import type {
  AIContext,
  ContextModule,
  ContextFunction,
  ContextQuery,
  ContextProvider,
} from '@getmikk/ai-context'

License

Apache-2.0