hayagriva-llm
v6.0.0
Structured LLM metadata standard for Node.js packages — llm.package.json and llm.package.txt
hayagriva-llm
Structured LLM metadata for Node.js packages — the first standard for machine-readable package context in the npm ecosystem. Generates llm.package.json and llm.package.txt for indexing, search, and IDE tooling (e.g. Cursor, Antigravity).
📖 Documentation: https://deepwiki.com/prakhardubey2002/hayagriva-llm
Install
npm install -g hayagriva-llm
# or
npx hayagriva-llm generate
Requirements: Node.js 18+
Usage
From your package root:
hayagriva-llm generate [options]
| Option | Description | Default |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------- |
| --mode <type> | static (ts-morph) or ai (OpenRouter) | static |
| --api-key <key> | OpenRouter API key (required for --mode ai) | OPEN_ROUTER_API_KEY env |
| --model <name> | OpenRouter model (AI mode) | openai/gpt-4o-mini or OPEN_ROUTER_MODEL |
| --include-src | Include full entry source in AI prompt | off |
| --verbose | Debug logging | off |
| --freellmrouter | Use Free LLM Router for ranked free OpenRouter models (FREE_LLM_ROUTER_API_KEY; implies AI mode) | off |
| --rule | Also generate a Cursor rule .mdc in .cursor/rules/ | off |
Agent operating manual (AGENT.md)
Generate a thorough AGENT.md file (an operating manual for coding agents):
hayagriva-llm agent [options]
| Option | Description | Default |
| -------------- | ------------------------- | ---------- |
| --out <file> | Output filename | AGENT.md |
| --force | Overwrite existing output | off |
AI Readiness Audit
Scan any JS/TS package and get an AI Readiness Score (0–100):
hayagriva audit
# or
hayagriva-llm audit
| Option | Description | Default |
| -------------- | ------------------------------------ | ------- |
| --dir <path> | Directory to audit (defaults to cwd) | . |
The audit checks for:
| Check | Weight |
| -------------------------------- | ------ |
| README.md exists | 10 |
| TypeScript declarations | 10 |
| JSDoc comments | 10 |
| Examples folder | 10 |
| exports field in package.json | 10 |
| llm.package.json | 25 |
| Prompt templates | 10 |
| Security / safety metadata | 15 |
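The weights in the table sum to 100, so the score is simply the total weight of the checks that pass. As a rough sketch of that scoring logic (illustrative only; the actual audit implementation may differ):

```javascript
// Weighted checks from the audit table above; weights sum to 100.
const CHECKS = [
  { name: "README.md exists", weight: 10 },
  { name: "TypeScript declarations", weight: 10 },
  { name: "JSDoc comments", weight: 10 },
  { name: "Examples folder", weight: 10 },
  { name: "exports field in package.json", weight: 10 },
  { name: "llm.package.json", weight: 25 },
  { name: "Prompt templates", weight: 10 },
  { name: "Security / safety metadata", weight: 15 },
];

// Sum the weights of the checks that passed to get the 0–100 score.
function score(passed) {
  return CHECKS.reduce((sum, c) => sum + (passed.has(c.name) ? c.weight : 0), 0);
}

const allPassed = new Set(CHECKS.map((c) => c.name));
console.log(score(allPassed)); // 100
```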
Example output:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚡ Hayagriva AI Audit
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI Readiness Score EXCELLENT
████████████████████████████░░ 93/100
──────────────────────────────────────────────────────
Passed
──────────────────────────────────────────────────────
✓ README exists (+10)
✓ TypeScript declarations (+10)
✓ JSDoc comments present (+10)
✓ Package exports defined (+10)
✓ llm.package.json found (+25)
──────────────────────────────────────────────────────
Missing
──────────────────────────────────────────────────────
✗ Structured examples (-10)
──────────────────────────────────────────────────────
Recommendations
──────────────────────────────────────────────────────
→ Create an examples/ folder with usage snippets
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Examples:
# Static mode (no API key): extract exports from TypeScript/JavaScript entry
hayagriva-llm generate
# AI mode: richer metadata (summary, side effects, keywords) via OpenRouter
hayagriva-llm generate --mode ai
# AI mode with Free LLM Router: live free-model list, in-process cache (~15m), per-step fallback on OpenRouter
hayagriva-llm generate --freellmrouter
# AI with custom model and full source context
hayagriva-llm generate --mode ai --model openai/gpt-4o --include-src --verbose
# Also generate a Cursor rule file (.cursor/rules/<package-name>.mdc)
hayagriva-llm generate --rule
hayagriva-llm generate --mode ai --rule
# Generate AGENT.md (manual for coding agents)
hayagriva-llm agent
# Write to a custom filename and overwrite if it already exists
hayagriva-llm agent --out Agent.md --force
# Audit the current project for AI readiness
hayagriva audit
# Audit a specific directory
hayagriva audit --dir /path/to/project
Environment
| Variable | Description |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| OPEN_ROUTER_API_KEY | OpenRouter API key (required for AI mode; used for all chat completions) |
| OPEN_ROUTER_MODEL | Default model for AI mode (e.g. openai/gpt-4o-mini; not used for model pick when --freellmrouter is set) |
| FREE_LLM_ROUTER_API_KEY | Free LLM Router key (required with --freellmrouter only; get it from the API tab in the dashboard) |
Copy .env.example to .env and set OPEN_ROUTER_API_KEY (and optionally OPEN_ROUTER_MODEL) for AI mode. Legacy names OPENROUTER_API_KEY and HAYAGRIVA_LLM_MODEL are still supported.
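For reference, a minimal .env for AI mode might look like the following (placeholder values — substitute your own keys):

```shell
# .env — loaded automatically by the CLI via dotenv
OPEN_ROUTER_API_KEY=your-openrouter-key   # required for --mode ai
OPEN_ROUTER_MODEL=openai/gpt-4o-mini      # optional; default model for AI mode
FREE_LLM_ROUTER_API_KEY=your-router-key   # only needed with --freellmrouter
```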
With --freellmrouter, hayagriva-llm calls OpenRouter using your OpenRouter key while the router supplies an ordered list of free model IDs. The list is cached in memory for about 15 minutes; if the router is unreachable, a stale cached list is reused when available (see Free LLM Router docs).
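The caching behavior described above (a ~15-minute in-memory TTL with a stale fallback when the router is unreachable) could be sketched like this — an illustration only, not the actual hayagriva-llm internals:

```javascript
// In-memory cache of the ordered free-model list, valid for ~15 minutes.
const TTL_MS = 15 * 60 * 1000;
let cache = null; // { models: string[], fetchedAt: number }

async function getFreeModels(fetchList) {
  const now = Date.now();
  // Fresh cache hit: reuse the list without calling the router.
  if (cache && now - cache.fetchedAt < TTL_MS) return cache.models;
  try {
    const models = await fetchList(); // ordered free-model IDs from the router
    cache = { models, fetchedAt: now };
    return models;
  } catch (err) {
    // Router unreachable: fall back to the stale cached list if we have one.
    if (cache) return cache.models;
    throw err;
  }
}
```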
Dedicated OpenRouter key (recommended for --freellmrouter)
Free LLM Router recommends a separate OpenRouter API key used only for free models, plus a low credit limit (e.g. $1) on OpenRouter so an accidental paid model does not charge your account.
- Open OpenRouter → Keys (or your workspace keys page, e.g. default workspace keys).
- Create a new key (do not reuse a production key).
- In OpenRouter billing/settings, set a credit limit appropriate for experimentation.
- Put that key in OPEN_ROUTER_API_KEY, and set FREE_LLM_ROUTER_API_KEY to your Free LLM Router key from the dashboard → API tab (docs).
Output
- llm.package.json — Structured metadata: name, version, description, exports, hooks, frameworks, optional summary, sideEffects, keywords; IDE- and search-friendly.
- llm.package.txt — LLM-optimized plain-text summary for context windows and retrieval.
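To give a feel for the fields listed above, a generated llm.package.json might look roughly like this (hypothetical values and shapes — see the Schema page on DeepWiki for the authoritative format):

```json
{
  "name": "my-lib",
  "version": "1.2.3",
  "description": "Example package",
  "exports": {
    "createClient": { "kind": "function", "doc": "Create a configured client" }
  },
  "hooks": [],
  "frameworks": ["node"],
  "summary": "A small client library",
  "sideEffects": false,
  "keywords": ["client", "http"]
}
```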
Observability (local)
Every run writes analytics to a local hidden folder in the package you run from:
- .hayagriva-llm/runs.jsonl: append-only history (one JSON object per run)
- .hayagriva-llm/last-run.json: the most recent run (pretty-printed JSON)
This folder is meant to be local-only (it’s ignored by git).
To view a local dashboard:
# from your package root (where .hayagriva-llm/ exists)
npx hayagriva-llm dashboard
# or
hayagriva-llm dashboard --port 4177
Flow (high level)
flowchart TB
subgraph CLI
A[hayagriva-llm generate] --> B[Load package.json]
B --> C[Detect entry file]
C --> D{Mode?}
end
subgraph Static["Static mode"]
D -->|static| E[ts-morph: extract exports, JSDoc, hooks]
E --> F[Build metadata]
end
subgraph AI["AI mode (guardrails)"]
D -->|ai| G[Step 1: Package overview]
G --> H[Validate: summary, sideEffects, keywords, frameworks]
H --> I[Step 2: Exports]
I --> J[Validate: exports map, hooks]
J --> K[Merge steps]
K --> F
end
F --> L[Write llm.package.json]
F --> M[Write llm.package.txt]
Detailed flow (entry detection, validation, and file layout) is documented on DeepWiki: https://deepwiki.com/prakhardubey2002/hayagriva-llm
Using hayagriva-llm in your package
Add it as a devDependency so your package always ships up-to-date LLM metadata.
1. Install
npm install -D hayagriva-llm
2. Generate metadata (manual or script)
From your package root:
# Static mode — no API key; uses ts-morph on your entry file
npx hayagriva-llm generate
# AI mode — set OPEN_ROUTER_API_KEY in .env first
npx hayagriva-llm generate --mode ai
This writes llm.package.json and llm.package.txt in the current directory. Commit them so consumers and tooling (e.g. Cursor, Antigravity) can use them.
3. Add an npm script (optional)
In your package.json:
{
"scripts": {
"llm:generate": "hayagriva-llm generate",
"prepublishOnly": "npm run llm:generate"
}
}
- llm:generate — run whenever you want to refresh metadata.
- prepublishOnly — regenerates metadata before npm publish so the published package always has current exports.
For AI mode in scripts, ensure OPEN_ROUTER_API_KEY (and optionally OPEN_ROUTER_MODEL) are set in your environment or in a .env file. The CLI loads .env via dotenv automatically.
Automating with Husky
Use Husky to run hayagriva-llm generate automatically (e.g. before commit) so llm.package.json and llm.package.txt stay in sync without manual runs.
1. Install Husky
npm install -D husky
npx husky init
This creates .husky/ and a default pre-commit hook.
2. Hook: regenerate metadata before commit
Edit .husky/pre-commit so it runs the generator and re-stages the output:
# Regenerate LLM metadata (uses .env for OPEN_ROUTER_API_KEY if you use --mode ai)
npx hayagriva-llm generate
# Re-stage generated files so they are included in the commit
git add llm.package.json llm.package.txt
- Static mode: no env needed; the hook just runs hayagriva-llm generate (the default mode is static).
- AI mode: set OPEN_ROUTER_API_KEY (and optionally OPEN_ROUTER_MODEL) in .env in the repo root; the CLI loads .env automatically. Example hook for AI mode:
npx hayagriva-llm generate --mode ai
git add llm.package.json llm.package.txt
3. Combine with lint / test (optional)
Run lint and tests in the same hook, then generate metadata:
# Example: lint and test first, then regenerate metadata
npm run lint
npm test
npx hayagriva-llm generate
git add llm.package.json llm.package.txt
Adjust lint / test to match your package.json scripts.
4. Different hooks to fit your workflow
| Hook | When it runs | Use case |
| ------------ | ------------------------ | --------------------------------------------- |
| pre-commit | Before each commit | Always keep metadata in sync with latest code |
| pre-push | Before each push | Lighter; regenerate only before pushing |
| post-merge | After git pull / merge | Refresh metadata after pulling changes |
Example pre-push (.husky/pre-push):
npm test
npx hayagriva-llm generate
git add llm.package.json llm.package.txt
Automating with GitHub Actions
Run hayagriva-llm generate in CI to validate that metadata is present and up to date, or to publish it as an artifact.
Example: check metadata on push/PR
Create .github/workflows/llm-metadata.yml:
name: LLM metadata
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
generate-and-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Install hayagriva-llm
run: npm install -D hayagriva-llm
- name: Generate LLM metadata (static)
run: npx hayagriva-llm generate
- name: Check metadata is committed
run: |
git diff --exit-code llm.package.json llm.package.txt || \
(echo "::error::llm.package.json or llm.package.txt are out of date. Run: npx hayagriva-llm generate" && exit 1)
This fails the workflow if someone forgets to run the generator after changing exports.
Example: generate with AI in CI (optional)
If you want AI mode in CI, add your OpenRouter key as a repo secret (e.g. OPEN_ROUTER_API_KEY) and run:
- name: Generate LLM metadata (AI)
env:
OPEN_ROUTER_API_KEY: ${{ secrets.OPEN_ROUTER_API_KEY }}
run: npx hayagriva-llm generate --mode ai
Then use the same “check metadata is committed” step, or upload llm.package.json / llm.package.txt as artifacts.
To use Free LLM Router in CI, add a second secret for FREE_LLM_ROUTER_API_KEY and run npx hayagriva-llm generate --freellmrouter (still set OPEN_ROUTER_API_KEY for OpenRouter).
Docs
Full documentation: https://deepwiki.com/prakhardubey2002/hayagriva-llm
| Page | Description |
| -------------------------------------------------------------------------- | ----------------------------------------------- |
| Introduction | Get started, install, options |
| Flow & architecture | End-to-end pipeline and Mermaid diagrams |
| Schema | llm.package.json and llm.package.txt format |
| AI mode | Multi-step AI flow and guardrails |
Documentation Hosting
- Docs are hosted on DeepWiki: https://deepwiki.com/prakhardubey2002/hayagriva-llm
- Keep updated: update the source in the repo; the DeepWiki link above always serves the latest docs.
Pre-commit (Husky)
This repo uses Husky for pre-commit hooks. On commit, the hook runs:
- Lint — npm run lint (ESLint on src/ and test/)
- Test — npm test (Vitest)
- Build — npm run build
- Size limit — npx size-limit (checks that dist/cli.cjs and dist/cli.mjs stay under 50 kB)
Install once: npm install. The prepare script runs husky so the .husky/pre-commit hook is installed.
Publishing to npm
- Login — npm login (create an account at npmjs.com if needed).
- Publish — from the package root, run npm publish. prepublishOnly runs lint, tests, and build before publishing. Only the dist/ folder is included (files in package.json); README and LICENSE are added by npm by default.
Repository: github.com/prakhardubey2002/hayagriva-llm · npm: hayagriva-llm. For a scoped package (e.g. @your-org/hayagriva-llm), set "name": "@your-org/hayagriva-llm" and run npm publish --access public.
