gramatr
v0.3.65
Intelligence layer for AI coding agents. Pre-classifies every request so Claude Code, Codex, and Gemini CLI spend tokens on your work, not on routing overhead.
What it does
gramatr sits between you and your AI agent. Before the agent sees your prompt, gramatr's decision router (BERT classifiers running in <5ms) pre-classifies:
- Effort level — instant, fast, standard, extended, advanced, deep, comprehensive
- Intent type — search, retrieve, create, update, analyze, generate
- Skill matching — which of 25 capabilities are relevant
- Memory tier — what context to pre-load from your knowledge graph
- Reverse engineering — what you want, what you don't want, gotchas
The agent receives this as a pre-computed intelligence packet, saving ~2,700 tokens per request that would otherwise be spent on the agent figuring out what you need.
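The packet's exact wire format isn't documented here, but as a rough sketch of what the classifications above might look like as a structure (all field names are hypothetical, derived from the categories listed):

```typescript
// Hypothetical shape of the intelligence packet — field names are
// illustrative, not gramatr's actual wire format.
type EffortLevel =
  | "instant" | "fast" | "standard" | "extended"
  | "advanced" | "deep" | "comprehensive";

type IntentType =
  | "search" | "retrieve" | "create"
  | "update" | "analyze" | "generate";

interface IntelligencePacket {
  effort: EffortLevel;   // how much work the request warrants
  intent: IntentType;    // what kind of operation it is
  skills: string[];      // matched capabilities (out of 25)
  memoryTier: string;    // which knowledge-graph context to pre-load
  wants: string[];       // reverse-engineered goals
  avoid: string[];       // explicit non-goals and gotchas
}

// Example packet for a prompt like "refactor the auth module to use JWT":
const packet: IntelligencePacket = {
  effort: "standard",
  intent: "update",
  skills: ["refactoring", "auth"],
  memoryTier: "project",
  wants: ["JWT-based auth"],
  avoid: ["breaking existing sessions"],
};

console.log(packet.effort, packet.intent);
```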
Install
npx gramatr install claude-code

That's it. One command installs hooks, registers the MCP server, and configures your AI agent.
Non-interactive (CI, remote provisioning)
npx gramatr install claude-code --yes --name "Your Name" --timezone "America/Chicago"

Supported platforms
| Platform | Status | Install |
|---|---|---|
| Claude Code | Stable | npx gramatr install claude-code |
| OpenAI Codex | Stable | Auto-detected during install |
| Google Gemini CLI | Stable | Auto-detected during install |
| Claude Desktop | Coming soon | — |
| ChatGPT Desktop | Coming soon | — |
How it works
You type a prompt
|
gramatr hooks intercept (UserPromptSubmit)
|
Decision router classifies in <5ms (BERT on GPU)
|
Intelligence packet injected into agent context
|
Agent receives pre-classified request
= skips routing overhead
= starts working immediately
= saves ~2,700 tokens per request

The flywheel
Every interaction trains the classifier. Memory powers routing, routing generates patterns, patterns improve routing. The system gets smarter with use.
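The injection step in the pipeline above can be sketched as a UserPromptSubmit hook that serializes the pre-computed packet into the context Claude Code adds alongside the prompt. The packet fields are hypothetical; the `hookSpecificOutput`/`additionalContext` shape follows Claude Code's hook output convention:

```typescript
// Sketch of what a UserPromptSubmit hook might emit. The packet shape is
// hypothetical; Claude Code appends additionalContext to the agent's
// view of the prompt.
interface Packet {
  effort: string;
  intent: string;
  skills: string[];
}

function buildHookOutput(packet: Packet): string {
  // Serialize the pre-computed classification as context the agent
  // receives alongside the user's prompt.
  return JSON.stringify({
    hookSpecificOutput: {
      hookEventName: "UserPromptSubmit",
      additionalContext:
        `effort=${packet.effort} intent=${packet.intent} ` +
        `skills=${packet.skills.join(",")}`,
    },
  });
}

const out = buildHookOutput({
  effort: "fast",
  intent: "search",
  skills: ["grep", "code-nav"],
});
console.log(out);
```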
Architecture
gramatr is a thin client + smart server:
- Client (this package): 29 files, ~290KB. Hooks into your AI agent, forwards to server.
- Server: Decision routing engine (BERT + Qwen), knowledge graph (PostgreSQL + pgvector), pattern learning.
- Protocol: MCP (Model Context Protocol) for Claude Code/Codex, REST API for web/mobile.
The client never stores intelligence locally. The server delivers everything: behavioral rules, skill routing, agent composition, ISC scaffolds, capability audits.
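A minimal sketch of that thin-client contract, with the transport injected so it can be exercised without a network (the endpoint path and response fields are assumptions, not gramatr's real API):

```typescript
// Thin-client sketch: the client holds no intelligence — it forwards the
// prompt and returns whatever packet the server computes. The transport
// is injected for testability; "/v1/route" is a hypothetical endpoint.
type Transport = (path: string, body: unknown) => Promise<unknown>;

async function fetchPacket(prompt: string, transport: Transport) {
  // Everything — routing, skills, behavioral rules — comes back from
  // the server; the client only relays.
  return transport("/v1/route", { prompt });
}

// Stub transport standing in for the gramatr server:
const stub: Transport = async (_path, body) => ({
  effort: "standard",
  echo: (body as { prompt: string }).prompt,
});

fetchPacket("explain this stack trace", stub).then((p) => console.log(p));
```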
What gets installed
~/gmtr-client/ # Client runtime
hooks/ # 8 lifecycle hooks
core/ # Shared routing + session logic
bin/ # Status line, login, utilities
CLAUDE.md # Minimal behavioral framework
~/.claude/settings.json # Hook configuration (merged, not overwritten)
~/.claude.json # MCP server registration
~/.gmtr.json # Auth token (canonical source)

Commands
gramatr install # Interactive target selection
gramatr install claude-code # Install for Claude Code
gramatr detect # Show detected AI platforms
gramatr doctor # Health check (coming soon)
gramatr uninstall # Clean removal
gramatr --version # Show version

Requirements
- Node.js 20+
- One of: Claude Code, OpenAI Codex, Google Gemini CLI
Links
License
MIT
