# @humancontext/intent-engine

_v0.1.1_
> Know what you decided — not what the AI said.
An MCP server + CLI that reads your AI coding sessions and extracts human intent: what you were doing, what approaches you tried, what you accepted and rejected, and why.
Works with Claude Code, Cursor, Windsurf, Copilot, and 20+ other AI coding tools.
## Quick Start — MCP Server

Add to your Claude Code config (`~/.claude.json` or project `.mcp.json`):

```json
{
  "mcpServers": {
    "hc-intent": {
      "command": "npx",
      "args": ["-y", "@humancontext/intent-engine", "mcp"]
    }
  }
}
```

Then in any Claude Code session, ask:
"Where did I leave off yesterday?"
The MCP server auto-discovers your recent sessions and runs 4-layer intent analysis.
## MCP Tools
| Tool | What It Does |
|------|-------------|
| `get_context` | "Where did I leave off?" — finds and analyzes your most recent session |
| `analyze_session` | Full 4-layer analysis on any session file |
| `list_sessions` | Discover all AI coding sessions on your machine |
## Quick Start — CLI

```sh
npx @humancontext/intent-engine ./session.jsonl
```

Output:
```
═══════════════════════════════════════════════════════════
 HumanContext Intent Analysis
═══════════════════════════════════════════════════════════
Session:   abc123
Events:    1568 (from 2400 raw lines)
Duration:  45m 12s
Analyzed:  154ms

── Layer 1: Activity Classification ──
Navigation:     312 (20%)  [28% of time]
Editing:        890 (57%)  [52% of time]
Debugging:       95 (6%)   [8% of time]
Testing:         42 (3%)   [2% of time]
Refactoring:     28 (2%)   [1% of time]
Comprehension:  180 (11%)  [9% of time]
Other:           21 (1%)   [0% of time]

── Layer 2: Cognitive Phases ──
Transitions: 12 | Iteration cycles: 3
Dominant: code_editing

── Layer 3: Intent Narrative ──
Summary: Developer debugged a race condition in the payment service.
Tried 3 approaches. Rejected mutex (latency), rejected retry (complexity),
chose event sourcing after consulting Stripe docs.
Debugging: severity=major, root=logic, reproducibility=intermittent

── Layer 4: Decision Capture ──
Total decisions: 23
Rejected: 8 | Accepted: 15
Top rejection: "AI suggestion ignored system constraints"
═══════════════════════════════════════════════════════════
```

### CLI Options
```sh
hc-intent <file>            # Full analysis
hc-intent <file> --json     # JSON output
hc-intent <file> --summary  # One-line summary
hc-intent <file> --layer 3  # Single layer (1-4)
hc-intent <file> --stats    # Quick stats
```

## What It Extracts
### Layer 1 — Activity Classification

Every event is classified into one of 7 categories: navigation, editing, debugging, testing, refactoring, comprehension, or other. Classification uses pre-compiled keyword sets and tool-call heuristics with 95%+ accuracy, and reports both count-based and time-based distribution percentages.
An optional confidence-gated LLM classifier hook handles low-confidence events: when heuristic confidence drops below 0.65, an external LLM can reclassify the event. To our knowledge, this is the first developer activity classifier to use a confidence-gated LLM fallback.
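The gate itself is simple to sketch. The following is illustrative only: `classifyWithFallback` and the hook signatures are hypothetical names, not the package's documented API; only the 0.65 threshold comes from the text above.

```js
// Hypothetical sketch of a confidence-gated LLM fallback (names illustrative).
const CONFIDENCE_THRESHOLD = 0.65;

function classifyWithFallback(event, heuristicClassify, llmClassify) {
  const heuristic = heuristicClassify(event);
  // Above the gate (or with no LLM hook configured), the cheap
  // heuristic result is used as-is.
  if (heuristic.confidence >= CONFIDENCE_THRESHOLD || !llmClassify) {
    return { ...heuristic, source: 'heuristic' };
  }
  // Below the gate, the event is handed to the external LLM classifier.
  return { ...llmClassify(event), source: 'llm' };
}
```

The design keeps the LLM entirely optional: with no hook installed, every event still gets the heuristic label.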
### Layer 2 — Cognitive Phases

Groups activities into cognitive workflow stages (understanding, hypothesizing, experimenting, verifying). Uses phase hysteresis (`MIN_PHASE_EVENTS = 3`) to filter noise and prevent phase-thrashing. Detects iteration cycles and tracks phase transitions.
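The hysteresis rule can be sketched as follows. `smoothPhases` is a hypothetical helper (not the engine's actual implementation) that only commits a phase change after `MIN_PHASE_EVENTS` consecutive events agree, so a single stray event cannot flip the phase:

```js
// Illustrative phase hysteresis: commit a new phase only after
// MIN_PHASE_EVENTS consecutive events agree on it.
const MIN_PHASE_EVENTS = 3;

function smoothPhases(rawPhases, minEvents = MIN_PHASE_EVENTS) {
  const out = [];
  let current = rawPhases[0]; // committed phase
  let candidate = null;       // phase trying to take over
  let streak = 0;             // consecutive events agreeing with candidate
  for (const phase of rawPhases) {
    if (phase !== current) {
      if (phase === candidate) streak += 1;
      else { candidate = phase; streak = 1; }
      if (streak >= minEvents) { current = phase; candidate = null; streak = 0; }
    } else {
      // Any return to the committed phase resets the challenger.
      candidate = null; streak = 0;
    }
    out.push(current);
  }
  return out;
}
```

A lone `experimenting` event in the middle of an `understanding` run is absorbed; only a sustained run of three switches the phase.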
### Layer 3 — Intent Narrative

Human-readable summary: what the developer was doing, what files were involved, and which approaches were tried and why they were accepted or rejected.
Includes 10 debugging dimensions (based on IEEE 1044, IBM ODC, and Sentry/Datadog taxonomy research):
| Dimension | What It Captures |
|-----------|------------------|
| `what` | What is broken |
| `which` | Which component/function |
| `how` | How the error manifests |
| `who` | Who triggered it (developer, CI, user) |
| `when` | When it first appeared |
| `where` | Which file/module |
| `severity` | critical / major / minor / cosmetic |
| `rootCause` | logic / data / integration / config / dependency |
| `reproducibility` | always / intermittent |
| `context` | regression / new_feature / config_issue / performance / refactor |
All dimensions return `unknown` when the signal is insufficient — a dimension only fires when the data supports it.
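One way to model that default, purely for illustration (`withDefaults` is a hypothetical helper; only the dimension names come from the table above):

```js
// Every dimension starts as 'unknown' and is overwritten only when an
// extractor actually produced a value for it.
const DIMENSIONS = [
  'what', 'which', 'how', 'who', 'when',
  'where', 'severity', 'rootCause', 'reproducibility', 'context',
];

function withDefaults(extracted) {
  const dims = Object.fromEntries(DIMENSIONS.map((d) => [d, 'unknown']));
  for (const [key, value] of Object.entries(extracted)) {
    if (value != null && DIMENSIONS.includes(key)) dims[key] = value;
  }
  return dims;
}
```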
### Layer 4 — Decision Capture

Every point where the developer accepted, rejected, or redirected an AI suggestion, with rationale categorization and confidence scores.
## Supported Formats
| Format | Source Tools | Auto-Detection |
|--------|------------|----------------|
| Claude Code JSONL | Claude Code | `.jsonl` with `message.role` + `uuid` |
| CASS JSON/JSONL | 13+ agents (Claude Code, Cursor, Windsurf, Aider, etc.) | `agent_slug` + `messages[]` |
| Agentlytics JSON | 16+ editors (Cursor, Windsurf, Copilot, etc.) | `source` + `messages[]` |
| Agentlytics SQLite | Same as above (local cache) | `.db` / `.sqlite` extension |
Format is auto-detected. No configuration needed.
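The detection rules implied by the table can be sketched as a predicate chain. `sketchDetectFormat` and its return labels are hypothetical; the package's real `detectFormat()` export may order or name its checks differently:

```js
// Illustrative auto-detection, derived from the table above.
function sketchDetectFormat(filePath, firstRecord) {
  // SQLite caches are identified by extension alone.
  if (filePath.endsWith('.db') || filePath.endsWith('.sqlite')) return 'agentlytics-sqlite';
  // CASS records carry an agent_slug plus a messages array.
  if (firstRecord?.agent_slug && Array.isArray(firstRecord.messages)) return 'cass';
  // Agentlytics JSON carries a source plus a messages array.
  if (firstRecord?.source && Array.isArray(firstRecord.messages)) return 'agentlytics-json';
  // Claude Code JSONL lines have message.role and a uuid.
  if (filePath.endsWith('.jsonl') && firstRecord?.message?.role && firstRecord?.uuid) return 'claude-code-jsonl';
  return 'unknown';
}
```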
## Session Discovery
The engine includes multi-tool session discovery that searches standard locations for all supported tools:
```js
import { discoverSessions, getLatestSessions } from '@humancontext/intent-engine/src/discover.js';

// Find all AI sessions on the machine
const sessions = discoverSessions();
// → [{path, tool, size, modified}, ...]

// Get just the latest per tool
const latest = getLatestSessions(sessions);
```

Searches `~/.claude/projects/`, Cursor workspace storage, CASS export directories, and Agentlytics data folders.
## Library API
```js
import { analyzeSession } from '@humancontext/intent-engine';

const result = analyzeSession('./session.jsonl');

console.log(result.layer1.distribution.percentages);
// → {navigation: 20, editing: 57, debugging: 6, testing: 3, ...}

console.log(result.layer3.summary);
// → "Developer debugged race condition in payment service..."

console.log(result.layer3.debuggingDimensions);
// → {what: "race condition", severity: "major", rootCause: "logic", ...}

console.log(result.layer4.decisions);
// → [{type: "rejected", rationale: "...", confidence: 0.85}]
```

### Async (for SQLite/Agentlytics)
```js
import { analyzeSessionAsync } from '@humancontext/intent-engine';

const result = await analyzeSessionAsync('./agentlytics-cache.db');
```

### Adapter API (direct ingestion)
```js
import { ingestSync, detectFormat, SourceFormat } from '@humancontext/intent-engine';

// Auto-detect and parse any format → NormalizedEvent[]
const { events, metadata } = ingestSync('./session-file');
```

## Architecture
```
┌─────────────────┐   ┌──────────────────┐   ┌───────────────────┐
│   Claude Code   │   │ CASS (13+ tools) │   │ Agentlytics (16+) │
│     .jsonl      │   │  .json / .jsonl  │   │    .json / .db    │
└────────┬────────┘   └────────┬─────────┘   └────────┬──────────┘
         │                     │                      │
         └─────────┬───────────┴──────────┬───────────┘
                   │                      │
            detectFormat()      ingest() / ingestSync()
                   │                      │
                   └──────────┬───────────┘
                              │
                   ┌──────────▼──────────┐
                   │   NormalizedEvent   │
                   │   (unified schema)  │
                   └──────────┬──────────┘
                              │
          ┌─────────┬─────────┴─────────┬─────────┐
          │         │                   │         │
       Layer 1   Layer 2            Layer 3   Layer 4
      classify   phases            narrative  decisions
      (7 cat)  (hysteresis)        (10 dims) (rationale)
```

Every adapter produces `NormalizedEvent` objects with a unified schema, so the 4-layer analysis runs identically regardless of source tool. Every output includes `"generator": "HumanContext Intent Engine v{version}"` for provenance tracking.
## Security
The MCP server validates all file paths:
- Only absolute paths accepted
- Must be under the user home directory
- Path traversal (`../`) blocked via normalization
- Non-existent files rejected before any I/O
## Testing
143 tests, 0 failures:
- Core tests: 45 tests covering all 4 layers, parser, discovery, and edge cases
- Adapter tests: 98 tests covering CASS JSON/JSONL, Agentlytics JSON, format detection, and unified ingestion
```sh
npm test                # Run all tests
npm run test:core       # Core tests only
npm run test:adapters   # Adapter tests only
```

See TESTING.md for the test methodology and contribution guide.
## Contributing
PRs welcome for: classification improvements, new adapter formats, CLI enhancements, and bug fixes.
We build on top of these open-source projects — contributions there benefit everyone:
- CASS — Unified AI session schema (13+ agents)
- Agentlytics — Editor session analytics (16+ editors)
## License
Apache 2.0 — see LICENSE.
Built by HumanContext.ai — Mission control for developers using AI coding tools.
