@eeshans/howiprompt
v2.3.14
Local-first analytics dashboard for your AI coding assistant conversations
How I Prompt
Local-first analytics for your AI coding conversations — prompts, personas, trends, and a personal wrapped view.
Live Dashboard • Live Wrapped • Source
```shell
npx @eeshans/howiprompt
```

Requirements: Node.js 18+. Works with Claude Code, Codex, Copilot Chat, Cursor, LM Studio, Pi, and OpenCode logs. The local package ships with analytics disabled.
Highlights
| | |
|---|---|
| Local-first pipeline | Sync, parsing, feature extraction, scoring, and metrics run on your machine |
| Multi-source support | Claude Code, Codex, Copilot Chat, Cursor, LM Studio, Pi, and OpenCode |
| Two ways to explore | Standard dashboard plus a scroll-through wrapped experience |
| Metrics that feel personal | Vibe Coder Index, Politeness, activity trends, heatmaps, and personas |
| Private by default | Raw logs stay local and the npm package ships with analytics disabled |
| Fast repeat refreshes | Incremental rebuilds reuse the local DB, caches, and exclusions |
What it does
How I Prompt syncs local AI conversation logs, computes prompting metrics on-device, and opens a browser dashboard with both an overview page and a scroll-through wrapped experience.
Running `npx @eeshans/howiprompt` starts a local server and opens the dashboard. On first run, a setup wizard detects supported backends, lets you confirm sources, and then runs the pipeline.
Options

```shell
npx @eeshans/howiprompt --no-open   # don't auto-open the browser
npx @eeshans/howiprompt --port 4000 # use a custom port
npx @eeshans/howiprompt --help      # show usage info
```

What Happens
- Detects supported local backends and writes setup to `~/.howiprompt/config.json`
- Copies raw conversation data into `~/.howiprompt/raw/`
- Parses and stores messages in a local SQLite database at `~/.howiprompt/data.db`
- Extracts features and scores prompts for dashboard metrics (no model downloads needed)
- Writes `~/.howiprompt/metrics.json` and serves the dashboard at `localhost`
Subsequent refreshes are incremental and reuse the local database, caches, and configured exclusions.
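The incremental-refresh idea can be sketched roughly as follows. This is an illustrative pattern, not the package's actual implementation; the cache file name and its JSON format are invented for the example:

```python
import json
import os

def files_to_reparse(paths, cache_file="sync_cache.json"):
    """Return only the files whose mtime changed since the last run.

    Illustrative sketch of an incremental sync: a JSON cache maps each
    file path to the modification time seen on the previous refresh,
    so unchanged logs are skipped on the next pass. The cache filename
    and layout here are hypothetical.
    """
    try:
        with open(cache_file) as f:
            cache = json.load(f)
    except FileNotFoundError:
        cache = {}

    changed = []
    for path in paths:
        mtime = os.path.getmtime(path)
        if cache.get(path) != mtime:
            changed.append(path)
            cache[path] = mtime

    # Persist the updated cache for the next refresh.
    with open(cache_file, "w") as f:
        json.dump(cache, f)
    return changed
```

On a second run over the same unmodified files, the function returns an empty list, which is the property that makes repeat refreshes cheap.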
Self-Host On GitHub Pages
If you want to publish your own stats as a static site:
```shell
git clone https://github.com/eeshansrivastava89/howiprompt.git
cd howiprompt && npm install && npm run dev:cli
cd frontend && DEMO_DEPLOY=true npm run build
```

Then commit and push `docs/`, and enable GitHub Pages from `main` / `docs` in your repo settings.
What You Get
| Dashboard | Full Experience |
|-----------|-----------------|
| One-page overview of your stats | Scroll-through "Wrapped" presentation |
| howiprompt.eeshans.com | howiprompt.eeshans.com/wrapped |
Metrics include: total prompts, conversation depth, activity heatmap, model usage, Vibe Coder Index (with explainability), Politeness (with explainability), persona classification (2×2: Detail Level × Communication Style), and trends.
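As a rough illustration of how a pattern-matching score with learned weights can work (the actual patterns and weights used by the package are not published here; the ones below are made up):

```python
import re

# Hypothetical pattern -> weight pairs. The package's real learned
# weights and pattern set are not shown anywhere; these are invented
# purely to illustrate the mechanism.
POLITENESS_PATTERNS = [
    (re.compile(r"\bplease\b", re.I), 2.0),
    (re.compile(r"\bthank(s| you)\b", re.I), 1.5),
    (re.compile(r"\bcould you\b", re.I), 1.0),
    (re.compile(r"\bjust do it\b", re.I), -0.5),
]

def politeness_score(prompt: str) -> float:
    """Sum the weight of every pattern that matches the prompt."""
    return sum(w for pat, w in POLITENESS_PATTERNS if pat.search(prompt))
```

Because this is plain regex matching over weights, it runs entirely offline with no model download, which matches the project's "lightweight pattern matching with learned weights" description.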
Data Sources
| Source | Location |
|--------|----------|
| Claude Code | ~/.claude/projects/*.jsonl |
| Codex | ~/.codex/history.jsonl |
| Copilot Chat | macOS: ~/Library/Application Support/Code/… · Windows: %APPDATA%\Code\… · Linux: ~/.config/Code/… |
| Cursor | macOS: ~/Library/Application Support/Cursor/… · Windows: %APPDATA%\Cursor\… · Linux: ~/.config/Cursor/… |
| LM Studio | ~/.lmstudio/conversations |
| Pi | ~/.pi/agent/sessions |
| OpenCode | macOS/Linux: ~/.local/share/opencode/storage · Windows: %LOCALAPPDATA%\opencode\storage |
All supported sources are auto-synced into ~/.howiprompt/raw/ and reused across refreshes.
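Discovering log files for these sources amounts to expanding and globbing the paths in the table. A minimal sketch, using only the Claude Code and Codex locations from the table (the mapping name `SOURCE_GLOBS` and the function are illustrative, not the package's API):

```python
import glob
import os

# Patterns taken from the Data Sources table above; the other sources
# follow the same idea with their own directories.
SOURCE_GLOBS = {
    "claude-code": "~/.claude/projects/*.jsonl",
    "codex": "~/.codex/history.jsonl",
}

def discover_logs():
    """Return a dict mapping each source name to the log files found."""
    found = {}
    for name, pattern in SOURCE_GLOBS.items():
        expanded = os.path.expanduser(pattern)
        found[name] = glob.glob(expanded)
    return found
```

On a machine without either tool installed, each entry is simply an empty list, which is why the setup wizard can safely probe every backend before asking you to confirm sources.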
Backend Status
- Supported today: Claude Code, Codex, Copilot Chat, Cursor, LM Studio, Pi, OpenCode
Privacy
- Local by default — Sync, parsing, scoring, and metrics run on your machine. No model downloads, no network access required
- Persistent storage — Raw copies, local DB, config, and metrics live under `~/.howiprompt/`
- No prompt text leaves your machine — The app does not upload raw logs or prompt content
- No analytics in the local app — The `npx @eeshans/howiprompt` package ships with analytics disabled. PostHog is only enabled on my demo website
- Fully offline — Unlike previous versions, no ML model download is required. All scoring uses lightweight pattern matching with learned weights
- Ancillary network requests — The dashboard loads ApexCharts from a CDN, and the CLI checks npm for version updates; these do not transmit prompt data
The 4 Personas
Your persona is derived from two independent axes: Detail Level (brief → detailed) and Communication Style (directive → collaborative). These form a 2×2 grid validated on 21k prompts.
- The Commander: Brief + Directive — short, decisive instructions
- The Partner: Brief + Collaborative — quick exchanges, conversational flow
- The Architect: Detailed + Directive — specs, constraints, numbered requirements
- The Explorer: Detailed + Collaborative — context-rich, question-driven investigation
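A toy version of the 2×2 classification might look like the sketch below. The word-count cutoff and the question-mark heuristic are invented stand-ins for the package's real detail and style features:

```python
def classify_persona(prompt: str) -> str:
    """Place a prompt on the Detail Level x Communication Style grid.

    Toy heuristics only: word count stands in for detail level, and
    the presence of a question mark stands in for collaborative style.
    Both cutoffs are invented for illustration.
    """
    detailed = len(prompt.split()) > 40
    collaborative = "?" in prompt
    if detailed:
        return "The Explorer" if collaborative else "The Architect"
    return "The Partner" if collaborative else "The Commander"
```

The point of the two independent axes is that each one can be scored on its own, and the persona falls out of the combination rather than from a single monolithic label.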
Development
```shell
# Install deps
npm install

# Build TypeScript
npm run build

# Run tests
npm test

# Rebuild package assets, then launch the CLI locally
npm run dev:cli

# Build frontend
cd frontend && npm run build

# Build the hosted demo from ~/.howiprompt/metrics.json
cd frontend && DEMO_DEPLOY=true npm run build

# Build for distribution
npm run build:cli

# Privacy gate (run before publish)
npm run check:privacy
```

Requirements

- Node.js 18+
Disclaimer: This is an independent personal project, not affiliated with, endorsed by, or connected to Anthropic or Claude in any way. This is a non-commercial, open-source tool created for personal use and educational purposes only.
License
MIT License — Not affiliated with Anthropic.
