
@felkot/think-mcp

v1.1.4

Published

MCP Server for structured sequential thinking with Burst Thinking, Logic Methodology Generator, branching, revisions, dead-ends tracking, and fuzzy recall

Downloads

563


Think MCP

Reasoning control layer for any MCP client.


Think MCP does not replace your model. It enforces a structured thinking workflow so both thinking and non-thinking models produce better, safer, and more consistent results.


Why use Think MCP? 🧠

Many models are fast but unstable on medium/heavy tasks. Think MCP adds a practical quality loop:

  • structured reasoning with validation,
  • branch exploration when confidence is low,
  • synthesis gate before final answer,
  • objective quality scoring.

New in current development:

  • confidence calibration (predicted vs actual confidence),
  • auto-diverge trigger when confidence drops too low.

Works across domains:

  • frontend, backend, fullstack,
  • finance, crypto, math,
  • game dev, game modding,
  • web, data, devops,
  • and general problem solving.

Changelog 📦

Changed

  • Added optional surface hint to think_logic for ui/api/cli/worker/bot/plugin/desktop/mobile/game.
  • Added optional domainPack presets for trace-first chains: web-fullstack, telegram-bot, vk-bot, game-runtime, game-bot, payments, trading, defi.
  • Improved event-flow guidance for mixed-language and non-web flows.
  • Added real MCP stdio smoke coverage for think_logic.
  • Updated README examples for surface- and domainPack-aware prompts.

Why it is better

  • Better release guidance for models that need explicit runtime context.
  • Better coverage outside website-only scenarios.
  • Better end-to-end tracing for domain-specific chains like UI -> store -> request -> route -> service -> persistence -> user outcome.
  • Better confidence that the published MCP contract matches runtime behavior.

Changed

  • Full README rewrite with improved structure and scannability.
  • Changelog moved near the top for npm visibility.
  • Features table upgraded with visual emoji markers.
  • Added dedicated Mini Prompt section.
  • OpenCode example key switched to think-mcp for copy-paste usage.

Why it is better

  • Faster onboarding.
  • Fewer setup mistakes.
  • Clearer release communication on npm.

Added

  • qualityScore in think_done:
    • overall (0-100)
    • grade (A/B/C/D/F)
    • breakdown: coherence, riskCoverage, evidenceDiscipline, executionReadiness, flowIntegrity
    • nextActions for practical remediation
  • analysisMode in think_logic:
    • standard
    • event-flow (trigger-to-outcome mapping)
  • Event-flow map sections:
    • Trigger
    • Bindings
    • Call Chain
    • State Mutations
    • Side Effects
    • User Outcome
    • Failure Path
    • Unknown Nodes
    • Verification Plan

Changed

  • think_done output now surfaces quality metrics directly.
  • README expanded with tool-by-tool schemas and best practices.
  • Added OpenCode local MCP setup example with @latest.

Fixed

  • Improved visibility of final reasoning quality with machine-readable scoring.
  • Added explicit post-analysis guidance via nextActions when score is weak.
  • Reduced false confidence by penalizing incomplete chains (flowIntegrity).
  • Closed a practical gap where trigger-driven flows were not explicitly mapped end-to-end.

Why it is better

  • Better for non-thinking models: stronger guardrails before finalization.
  • Better for thinking models: clearer calibration and measurable completion criteria.
  • Better for any MCP client: quality signal is structured, not UI-dependent.

Added

  • Domain-aware profiles in think_logic: frontend, backend, fullstack, game-dev, game-modding, finance, crypto, math, web, data, devops, general
  • complexityMode: auto | simple | medium | heavy
  • tokenDiscipline: strict | balanced | exploratory
  • Domain checklist and token policy sections in generated methodology.

Why it matters

  • Extends one MCP to many task types without tool sprawl.
  • Improves token efficiency by matching depth to actual complexity.

Added

  • Stable Think MCP core workflow.
  • Heavy-task validation gates.
  • Context safety hardening and sensitive-data redaction.
  • Sanitized export/insights pipeline.

Features ✨

| Feature | What it gives you | Why it matters |
| :--- | :--- | :--- |
| 🧭 Sequential reasoning | Step-by-step thought chain | Reduces random jumps and missing logic |
| ✅ Quality gate (think_done) | Ready/Blocked + score + next actions | Prevents premature final answers |
| 🎯 Confidence calibration | Predicted vs actual confidence + delta | Reduces overconfidence and improves self-checking |
| 🔗 Flow integrity scoring | Checks chain completeness | Catches broken trigger-to-outcome paths |
| ⚡ Event-flow mode | Trigger -> bindings -> call chain -> outcome | Great for click/submit/api/webhook/cron scenarios |
| 🌍 Domain routing | Domain-aware checklists | Better guidance without adding more tools |
| 📏 Complexity routing | simple / medium / heavy | Saves tokens on easy tasks, deepens hard ones |
| 💸 Token discipline | strict / balanced / exploratory | Controls verbosity and cost |
| 🧠 Recall memory | Search session or cross-session insights | Reuse past lessons, avoid repeating errors |
| 🔒 Safe context handling | Redaction + sensitive file blocking | Reduces accidental leaks |


Quick Start 🚀

Run directly:

npx @felkot/think-mcp@latest

Generic MCP config:

{
  "mcpServers": {
    "think": {
      "command": "npx",
      "args": ["-y", "@felkot/think-mcp@latest"]
    }
  }
}

OpenCode local MCP example:

{
  "mcp": {
    "think-mcp": {
      "type": "local",
      "command": [
        "npx",
        "-y",
        "@felkot/think-mcp@latest"
      ],
      "environment": {},
      "enabled": true
    }
  }
}

Example with environment overrides:

{
  "mcp": {
    "think-mcp": {
      "type": "local",
      "command": [
        "npx",
        "-y",
        "@felkot/think-mcp@latest"
      ],
      "environment": {
        "THINK_MCP_STORAGE_DIR": "D:\\mcp-data",
        "THINK_MCP_AUTO_DIVERGE_CONFIDENCE_THRESHOLD": "6",
        "THINK_MCP_CALIBRATION_TREND_WINDOW": "12"
      },
      "enabled": true
    }
  }
}

Mini Prompt for Any AI 🧩

Use this prompt with models that should follow Think MCP workflow:

You are a disciplined assistant.
Use Think MCP to improve reasoning quality without wasting tokens.

Rules:
1) Start non-trivial tasks with think_logic.
2) Match effort to complexity (simple/medium/heavy).
3) Use think for sequential execution and verification.
4) If confidence < 5 or trade-offs are unclear, use think_diverge.
5) For trigger chains (click/submit/api/webhook/cron), use analysisMode=event-flow.
6) Before final output, call think_done and inspect qualityScore.
7) If blocked or score is weak, continue reasoning before concluding.

Tool Catalog 🛠️

| # | Tool | Role |
| :---: | :--- | :--- |
| 1 | think | Add one reasoning step with confidence, substeps, alternatives, and optional context snapshots |
| 2 | think_batch | Submit complete reasoning chain in one call |
| 3 | think_done | Final synthesis gate with readiness + quality score |
| 4 | think_recall | Search session memory or cross-session insights |
| 5 | think_reset | Clear current session state |
| 6 | think_logic | Generate domain-aware methodology with complexity/token controls |
| 7 | think_diverge | Create 2-3 alternative branches from one thought |
| 8 | get_model_instructions | Return built-in disciplined model instructions |


Tool Details (Best Practices + Schema) 📘

1) think

Single step in a reasoning chain.

Best practices:

  • Use for non-trivial tasks where traceability matters.
  • Set goal in the first thought and keep it stable.
  • Keep subSteps short and executable.
  • If confidence drops, use think_diverge.
  • Use includeContextContent: true only when really needed.
{
  thought: string,
  nextThoughtNeeded: boolean,
  thoughtNumber: number,
  totalThoughts: number,
  phase?: 'initialization' | 'analysis' | 'strategy' | 'execution' | 'verification' | 'conclusion',
  context_files?: string[],
  includeContextContent?: boolean,
  confidence?: number,
  subSteps?: string[],
  alternatives?: string[],
  goal?: string,
  quickExtension?: {
    type: 'critique' | 'elaboration' | 'correction' | 'alternative_scenario' | 'assumption_testing' | 'innovation' | 'optimization' | 'polish',
    content: string,
    impact?: 'low' | 'medium' | 'high' | 'blocker'
  },
  isRevision?: boolean,
  revisesThought?: number,
  branchFromThought?: number,
  branchId?: string,
  showTree?: boolean
}
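For illustration, a minimal first step could look like the following. The field names follow the schema above; the concrete values are hypothetical, not prescribed by the tool:

```typescript
// Hypothetical first call to `think` (illustrative values only).
const firstThought = {
  thought: "Clarify the goal and list the main unknowns before coding.",
  nextThoughtNeeded: true,
  thoughtNumber: 1,
  totalThoughts: 4,
  phase: "initialization" as const,
  goal: "Fix the flaky checkout test",
  confidence: 7,
  subSteps: ["Reproduce the failure", "Capture logs"],
};
```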

2) think_batch

Submit a complete chain in one call.

Best practices:

  • Use when you already have a full structured chain.
  • Great for import/migration of offline reasoning.
  • Keep thoughts concise; avoid repeated content.
{
  goal: string,
  thoughts: Array<{
    thoughtNumber: number,
    thought: string,
    confidence?: number,
    subSteps?: string[],
    alternatives?: string[],
    isRevision?: boolean,
    revisesThought?: number,
    branchFromThought?: number,
    branchId?: string,
    extensions?: Array<{
      type: 'critique' | 'elaboration' | 'correction' | 'alternative_scenario' | 'assumption_testing' | 'innovation' | 'optimization' | 'polish',
      content: string,
      impact?: 'low' | 'medium' | 'high' | 'blocker'
    }>
  }>,
  consolidation?: {
    winningPath: number[],
    summary: string,
    verdict: 'ready' | 'needs_more_work'
  },
  showTree?: boolean
}
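A hypothetical payload following the schema above might look like this (the reasoning content is invented for illustration):

```typescript
// Hypothetical `think_batch` payload (shape follows the schema above).
const batch = {
  goal: "Choose a caching strategy",
  thoughts: [
    { thoughtNumber: 1, thought: "List the access patterns.", confidence: 8 },
    { thoughtNumber: 2, thought: "Compare TTL cache vs explicit invalidation.", confidence: 6 },
    { thoughtNumber: 3, thought: "Pick TTL with short expiry for read-heavy paths.", confidence: 7 },
  ],
  consolidation: {
    winningPath: [1, 2, 3],
    summary: "TTL cache fits the read-heavy access pattern.",
    verdict: "ready" as const,
  },
};
```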

3) think_done

The synthesis gate. Decides whether the final answer is ready.

Best practices:

  • Use before final output on medium/heavy tasks.
  • For heavy tasks, include constraintCheck and potentialFlaws.
  • Read qualityScore.nextActions and continue if blocked.

Quality score includes:

  • coherence
  • riskCoverage
  • evidenceDiscipline
  • executionReadiness
  • flowIntegrity

Also returns confidence calibration (when session has confidence values):

  • predicted (average self-confidence),
  • actual (qualityScore.overall / 10),
  • delta, status, and recommendation.

And confidence calibration trend:

  • direction (stable | improving | worsening),
  • averageDelta,
  • counts: overconfident, underconfident, aligned.
{
  winningPath: number[],
  summary: string,
  verdict: 'ready' | 'needs_more_work',
  constraintCheck?: string,
  potentialFlaws?: string,
  exportReport?: 'markdown' | 'json',
  includeMermaid?: boolean
}
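Based on the description above, the calibration numbers can be reproduced with simple arithmetic. This is a sketch, not the package's actual implementation; the 0.5 tolerance and the sign convention for delta are assumptions made for illustration:

```typescript
// Sketch of the calibration math described above (not the package source).
// predicted: mean of per-thought confidence values (1-10 scale)
// actual: qualityScore.overall mapped onto the same scale (overall / 10)
function calibrate(confidences: number[], overall: number) {
  const predicted = confidences.reduce((a, b) => a + b, 0) / confidences.length;
  const actual = overall / 10;
  const delta = predicted - actual;
  // The 0.5 cutoff below is an assumed tolerance, not a documented value.
  const status =
    delta > 0.5 ? "overconfident" : delta < -0.5 ? "underconfident" : "aligned";
  return { predicted, actual, delta, status };
}
```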

4) think_recall

The Memory Bank. Search current session or past insights.

Best practices:

  • BEFORE complex_task -> Check scope: 'insights'
  • IF repeating_logic -> Check session for dead ends
  • IF unsure -> Verify established context
{
  query: string,
  scope?: 'session' | 'insights',
  searchIn?: 'thoughts' | 'extensions' | 'alternatives' | 'all',
  limit?: number,
  threshold?: number
}

5) think_reset

Clears current session state.

Best practices:

  • Use for fully unrelated tasks.
  • Do not use mid-execution when a branch/revision is enough.
{}

6) think_logic

Generates methodology before execution.

Best practices:

  • Start medium/heavy tasks with this tool.
  • Use analysisMode: 'event-flow' for interaction chains.
  • Use complexityMode: 'auto' unless you need strict control.
  • Use surface when the runtime shape matters more than the wording of the prompt.
  • Use domainPack when you know the environment and want stronger stage-by-stage checkpoints without losing the full trigger-to-outcome chain.
  • Keep prompt phrasing in English when possible for the clearest tool-facing instructions.
{
  target: string,
  context?: string,
  domainPack?: 'web-fullstack' | 'telegram-bot' | 'vk-bot' | 'game-runtime' | 'game-bot' | 'payments' | 'trading' | 'defi',
  surface?: 'generic' | 'ui' | 'api' | 'cli' | 'worker' | 'bot' | 'plugin' | 'desktop' | 'mobile' | 'game',
  analysisMode?: 'standard' | 'event-flow',
  domain?: 'general' | 'frontend' | 'backend' | 'fullstack' | 'game-dev' | 'game-modding' | 'finance' | 'crypto' | 'math' | 'web' | 'data' | 'devops',
  complexityMode?: 'auto' | 'simple' | 'medium' | 'heavy',
  tokenDiscipline?: 'strict' | 'balanced' | 'exploratory',
  depth?: 'quick' | 'standard' | 'deep',
  focus?: Array<'security' | 'performance' | 'reliability' | 'ux' | 'architecture' | 'data-flow' | 'testing'>,
  stack?: Array<'nestjs' | 'prisma' | 'ts-rest' | 'react' | 'redis' | 'zod' | 'trpc' | 'nextjs' | 'drizzle' | 'hono'>
}

Examples:

  • surface: 'ui' for click/tap/desktop/mobile interaction chains
  • surface: 'api' for route/controller/request/response tracing
  • surface: 'cli' for command/subcommand/stdout/exit-code flows
  • surface: 'worker' for queue/cron/background pipelines
  • surface: 'bot' or surface: 'plugin' for chat adapters and host-hook integrations
  • domainPack: 'web-fullstack' for UI/store/request/backend/response chains like add-to-cart or checkout
  • domainPack: 'telegram-bot' | 'vk-bot' for update/router/handler/reply flows
  • domainPack: 'payments' | 'trading' | 'defi' for irreversible or contract-heavy finance/crypto flows

7) think_diverge

Create alternative branches from one thought.

Best practices:

  • Use when confidence < 5.
  • Use when trade-offs are unclear.
  • Keep branches distinct, not cosmetic rewrites.

Auto-trigger note:

  • In think, if confidence is below threshold and no alternatives are present, the tool recommends think_diverge automatically and sets next: think_diverge.
  • Threshold is configurable via THINK_MCP_AUTO_DIVERGE_CONFIDENCE_THRESHOLD (1..10).
{
  thoughtNumber: number,
  alternatives: string[]
}
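The auto-trigger logic described above can be sketched as follows. This is an illustrative reconstruction, not the package source; the default threshold of 5 comes from the environment-variable documentation below:

```typescript
// Sketch of the auto-diverge trigger (illustrative, not the package source).
// `threshold` defaults to 5, the documented default; in the real server it is
// configurable via THINK_MCP_AUTO_DIVERGE_CONFIDENCE_THRESHOLD (1..10).
function shouldRecommendDiverge(
  confidence: number | undefined,
  alternatives: string[] | undefined,
  threshold = 5,
): boolean {
  const hasAlternatives = (alternatives?.length ?? 0) > 0;
  return confidence !== undefined && confidence < threshold && !hasAlternatives;
}
```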

8) get_model_instructions

Returns built-in deep-thinking model instructions.

Best practices:

  • Use to align third-party/non-thinking models with Think MCP workflow.
{}

Event-Flow Analysis (Trigger → Outcome) 🔗

When behavior starts from an interaction/event, use:

analysisMode: 'event-flow'

Typical traces:

  • UI click -> handler -> state update -> API -> service -> DB -> UI feedback
  • Form submit -> validation -> transform -> API -> response -> error/success state
  • API request -> middleware -> service -> repo -> side effects -> response
  • Webhook -> signature check -> parser -> dedupe -> processing -> ack
  • Scheduled job -> lock -> pipeline -> persistence -> reporting
  • CLI command -> parser -> handler -> domain logic -> side effects -> stdout/stderr
  • Bot update -> adapter -> handler -> domain logic -> reply/update
  • Plugin hook -> bridge -> callback -> domain logic -> host-visible result

domainPack does not replace the chain. It strengthens the same trigger-to-outcome map with domain-specific checkpoints. Example:

  • web-fullstack: UI event -> component/store action (addToCart(id)) -> request client -> route/controller -> service/use-case -> DB/cache -> response mapper -> rendered cart state
  • telegram-bot: update -> adapter/router -> handler -> domain service -> storage/external API -> reply/edit/send
  • payments: user/payment intent -> idempotency/auth -> API/controller -> orchestration -> PSP/bank -> ledger/order persistence -> receipt/result

Surface hint example:

{
  target: 'Investigate how this flow starts, runs, and reports the final result',
  context: 'Need a clear trigger-to-outcome map',
  surface: 'cli',
  analysisMode: 'event-flow',
  complexityMode: 'medium'
}

Domain pack example:

{
  target: 'Trace add to cart from button click to final cart state',
  context: 'Button click calls addToCart(id) in store and then sends a request to backend cart endpoint',
  domainPack: 'web-fullstack',
  analysisMode: 'event-flow',
  complexityMode: 'medium'
}

The model is forced to map:

  • Trigger
  • Bindings
  • Call Chain
  • State Mutations
  • Side Effects
  • User Outcome
  • Failure Path
  • Unknown Nodes
  • Verification Plan

Quality Score Guide 📊

think_done returns a score of 0..100 and a grade of A..F.

Quick interpretation:

| Range | Meaning | Typical action |
| :---: | :--- | :--- |
| 90-100 | Excellent | Finalize |
| 75-89 | Good | Finalize if no blocker warnings |
| 60-74 | Needs improvement | Address nextActions first |
| 40-59 | Weak | Rework core reasoning path |
| 0-39 | Unsafe | Rebuild path before final answer |

High-impact signals:

  • low flowIntegrity: chain likely incomplete/disconnected,
  • low riskCoverage: risks or constraints not handled,
  • low evidenceDiscipline: claims not sufficiently verified.
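The band boundaries in the table above can be expressed as a small lookup. This is a sketch for clients that want to act on the score programmatically; the boundaries come from the README and the function itself is not part of the package:

```typescript
// Sketch mapping an overall score onto the bands in the table above
// (band boundaries from the README; not the package source).
function scoreBand(overall: number): { meaning: string; action: string } {
  if (overall >= 90) return { meaning: "Excellent", action: "Finalize" };
  if (overall >= 75) return { meaning: "Good", action: "Finalize if no blocker warnings" };
  if (overall >= 60) return { meaning: "Needs improvement", action: "Address nextActions first" };
  if (overall >= 40) return { meaning: "Weak", action: "Rework core reasoning path" };
  return { meaning: "Unsafe", action: "Rebuild path before final answer" };
}
```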

Recommended Workflows 🧭

Simple task:

  1. think
  2. think_done

Medium task:

  1. think_logic
  2. think (+ alternative if needed)
  3. think_done

Heavy task:

  1. think_logic
  2. think loop + think_diverge
  3. think_recall
  4. think_done with constraintCheck + potentialFlaws
  5. Finalize only when status is ready and score is acceptable

Security & Safety 🔒

  • Workspace-bounded context access
  • Symlink escape protection
  • Sensitive file blocking (.env, keys/certs patterns)
  • Secret-like content redaction
  • Context content is explicit opt-in (includeContextContent: true)
  • Context details safety truncation
  • Sanitized exports and insights persistence

Storage:

  • Active sessions: .think/sessions/active/*.json
  • Archived sessions: .think/sessions/archive/*.json
  • Insights: .think/insights.json
  • Confidence calibration: .think/confidence_calibration.json
  • Optional override: THINK_MCP_STORAGE_DIR

Environment variables:

  • THINK_MCP_STORAGE_DIR - base directory where .think will be created.
  • THINK_MCP_AUTO_DIVERGE_CONFIDENCE_THRESHOLD - low-confidence threshold for auto-diverge (1..10, default 5).
  • THINK_MCP_CALIBRATION_TREND_WINDOW - number of recent calibration records for trend (1..50, default 10).

All overrides are optional:

  • If environment is empty, Think MCP uses safe defaults.
  • Use overrides only when you want custom storage location or tuning behavior.

Development 👨‍💻

npm ci
npm run typecheck
npm test
npm run build
npm run audit:prod

Optional package check:

npm pack --dry-run

License

MIT


Created by FelKot