cc-human (v1.0.0)
What does your human actually do during a Claude Code session?
Measures human presence in your sessions: how often Claude runs purely autonomously vs. how often the human sends follow-up messages, and what kind of messages they send.
Usage
npx cc-human
npx cc-human --json

Or open index.html in a browser and drag in your .jsonl files.
Metrics
- Pure-autonomous rate — sessions where the human writes only the initial prompt
- Interactive rate — sessions with follow-up messages
- Messages per session — median, mean, max
- Initial prompt length — how much context the human provides upfront
- Follow-up types — ack (&lt;30 chars) / direction (30–149 chars) / correction (150–599 chars) / briefing (600+ chars)
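The length thresholds above can be sketched as a small classifier. This is an illustrative helper (`classifyFollowUp` is not part of the published CLI), using only the cutoffs stated in this list:

```javascript
// Classify a follow-up message by character count, using the thresholds
// from the Metrics section: ack / direction / correction / briefing.
function classifyFollowUp(text) {
  const len = text.length;
  if (len < 30) return "ack";          // "ok", "yes", "done"
  if (len < 150) return "direction";   // brief instructions
  if (len < 600) return "correction";  // multi-sentence course correction
  return "briefing";                   // long task dump
}
```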
What counts as a "human message"?
Only text content in user-role messages. Tool results (tool_result blocks) are excluded — those are Claude's own output coming back, not human input.
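A minimal sketch of that filter, assuming a common transcript shape in which each parsed .jsonl entry carries a `message` with a `role` and either a plain string or an array of typed content blocks (the exact schema is an assumption here, and `extractHumanMessages` is an illustrative name):

```javascript
// Keep only genuine human input: text content from user-role messages.
// tool_result blocks are Claude's own tool output echoed back, so drop them.
function extractHumanMessages(entries) {
  return entries
    .filter((e) => e.message && e.message.role === "user")
    .flatMap((e) => {
      const content = e.message.content;
      // Content may be a plain string or an array of typed blocks.
      if (typeof content === "string") return [content];
      return content
        .filter((block) => block.type === "text")
        .map((block) => block.text);
    });
}
```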
Sample output
cc-human — What does your human actually do?
Sessions analyzed: 183
Human engagement split
███████████░░░░░░░░░░░░░░░░░░░ 35.5% pure-autonomous
███████████████████░░░░░░░░░░░ 64.5% interactive
Human messages per session
median 2 msg | mean 6.1 | max 87
Follow-up message types (941 total, median 42 chars)
ack ██████░░░░░░░░░░░░░░ 26% ok / yes / done
direction ██████████████░░░░░░ 69% brief instructions
correction ░░░░░░░░░░░░░░░░░░░░ 0% multi-sentence
briefing █░░░░░░░░░░░░░░░░░░░ 4% long task dump

License
MIT
