
claude-tokenwise-cli

v1.0.3

Interactive cost-aware workflow for Claude Code

claude-tokenwise (ctw)

It's easy to burn through a Claude Code context window without realizing it. ctw is an interactive wrapper around claude with a mode picker, session manager, and token tracker to keep usage visible as you work.

Demo

ctw demo

ctw demo 2

Installation

npm install -g claude-tokenwise-cli

Requires Claude Code (the claude CLI) to be installed and authenticated.

Or run without installing:

npx claude-tokenwise-cli

Usage

ctw          # Start or resume a session
ctw -h       # Open session manager

Built-in keywords (in the prompt loop)

| Keyword | Action |
|---|---|
| ctwhelp | Show available keywords |
| ctwcost | Show session token stats |
| ctwmodel | Show/change model |
| ctwmode | Set a default mode (skip picker) |
| ctwclear | Fresh Claude context, keep session |
| ctwhistory | Open session manager mid-session |
| ctwquit or quitctw | Exit without saving the prompt |

Tab autocomplete is available — start typing ctw and press Tab.

Session Modes

Pick a mode each prompt to influence how Claude approaches the task:

| Mode | Behavior | Token cost |
|---|---|---|
| Quick | Direct, minimal — avoids tangents and unnecessary reads | Lowest |
| Normal | Standard workflow — reads relevant files, gives brief updates | Moderate |
| Deep | Thorough — full file reads, explains reasoning, runs tests if available | Highest |

Token Tracking

1. Response estimate (est. ~X tokens)
Calculated from response character count using Anthropic's rough rule of thumb: characters ÷ 3.5. This is an intentional approximation. It only covers output text and does not include input tokens, system prompt, CLAUDE.md, or tool call overhead. The actual formula in lib/tracker.js:

Math.round(text.length / 3.5 / 10) * 10

Why it fluctuates: tokenization isn't linear. Common English words are often 1 token each; rare or technical terms get split into sub-tokens; code, whitespace, and punctuation follow entirely different patterns. Treat the estimate as a directional signal, not an exact count.
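The formula above can be reproduced as a one-line function. Only the chars ÷ 3.5 formula comes from lib/tracker.js; the function name here is illustrative:

```javascript
// Rough output-token estimate: characters ÷ 3.5, rounded to the nearest 10.
// Covers output text only — not input tokens, system prompt, or tool overhead.
function estimateTokens(text) {
  return Math.round(text.length / 3.5 / 10) * 10;
}

// 350 characters ≈ 100 tokens under this heuristic
console.log(estimateTokens("a".repeat(350))); // 100
```

Rounding to the nearest 10 is a deliberate signal that the number is approximate, not exact.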

2. Context window usage (context: X / 200k)
After each response, ctw silently runs /context inside the same Claude session and parses the actual reported usage (e.g. 6.2k / 200k (3%)). This is exact, because Claude itself reports it. It reflects the full window: system prompt, tools, memory files, messages, and free space.
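Extracting the usage figures from a line like "6.2k / 200k (3%)" amounts to a small parse. This is a sketch, not the actual ctw implementation; the function name and the exact /context output format are assumptions:

```javascript
// Hypothetical parser for a usage line such as "6.2k / 200k (3%)".
// Returns token counts in absolute numbers, or null if the line doesn't match.
function parseContextUsage(line) {
  const m = line.match(/([\d.]+)(k?)\s*\/\s*([\d.]+)k/);
  if (!m) return null;
  const used = Math.round(parseFloat(m[1]) * (m[2] === "k" ? 1000 : 1));
  const limit = Math.round(parseFloat(m[3]) * 1000);
  return { used, limit };
}

console.log(parseContextUsage("6.2k / 200k (3%)")); // { used: 6200, limit: 200000 }
```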

The running total (total: ~Y tokens) accumulates the response estimates across all prompts in the session. It gives a rough sense of cumulative session cost over time, while the context window figure remains the more accurate per-request number.
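A running total of this kind is just the per-response estimates summed up. A minimal sketch, with illustrative names (only the chars ÷ 3.5 formula comes from the package):

```javascript
// Minimal per-session accumulator of response estimates.
class SessionTracker {
  constructor() {
    this.total = 0;
  }
  // Estimate tokens for one response and add it to the session total.
  record(responseText) {
    const est = Math.round(responseText.length / 3.5 / 10) * 10;
    this.total += est;
    return est;
  }
}

const tracker = new SessionTracker();
tracker.record("x".repeat(700)); // ~200 tokens
tracker.record("x".repeat(350)); // ~100 tokens
console.log(tracker.total); // 300
```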

Model + Mode Comparison

- Quick: max brevity, skip explanations
- Normal: standard workflow
- Deep: thorough, explain reasoning

Prompt used:

Refactor this JavaScript function to use async/await instead of callbacks,
add error handling, and explain your changes:

function fetchData(url, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function() { callback(null, JSON.parse(xhr.responseText)); };
  xhr.onerror = function() { callback(new Error('Failed')); };
  xhr.send();
}

| Model | Quick | Normal | Deep |
|---|---|---|---|
| Haiku 4.5 | 6.2s · ~190 tokens | 7.1s · ~400 tokens | 9.7s · ~620 tokens |
| Sonnet 4.6 | 6.0s · ~100 tokens | 9.1s · ~280 tokens | 15.7s · ~610 tokens |
| Opus 4.6 | 4.8s · ~100 tokens | 7.1s · ~270 tokens | 20.7s · ~920 tokens |

Switch models with ctwmodel. An effort level (low/medium/high) is available for Sonnet and Opus.

License

MIT