emergent-thinking
v0.2.4
CLI-first local attention engine for project-scoped AI reasoning
emergent-thinking is a project-local skill + CLI for coding agents.
In plain language: it tries to stop the model from improvising everything in its head, and instead forces it to write the work down, validate against reality, and iterate in loops.
What It Is
emergent-thinking is a local board workflow for AI agents.
It combines:
- a skills-compatible skill that tells the agent how to work
- a project-local CLI that stores the board and exposes the protocol
The skill is the working method. The CLI is the external memory and move engine.
This is intentionally not a global MCP memory service.
The natural scope is the current repository and its cwd.
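The project-scoped idea can be pictured as resolving all state relative to the current working directory. A minimal sketch, assuming a hypothetical storage layout (the real CLI defines its own path and format; `.emergent-thinking/board.json` is an illustrative name, not the package's actual location):

```typescript
// Illustrative only: the actual storage path used by the emergent-thinking
// CLI is not specified here; ".emergent-thinking/board.json" is hypothetical.
import * as path from "node:path";

function boardPathFor(cwd: string): string {
  // Project-local: board state lives inside the repository's working
  // directory, not in a global memory service.
  return path.join(cwd, ".emergent-thinking", "board.json");
}

console.log(boardPathFor("/home/me/my-repo"));
```

The design consequence is that two repositories never share a board, and deleting the project deletes its reasoning state with it.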
Why It Exists
Most coding agents fail in a predictable order:
- they form a rough intuition
- they start implementing too early
- they discover conflicts too late
- they patch over the mess
- entropy keeps increasing
emergent-thinking tries to change that order.
The core loop is:
- write the problem down
- write the goal down
- choose one bounded path
- validate against reality early
- record findings and proof debt
- repair the cheapest meaningful stage
- loop until the board converges
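The loop above can be sketched as repeated repair of the cheapest unproven stage until nothing on the board carries proof debt. This is a rough illustration, assuming a made-up board shape; the stage names and the convergence test are not the package's real data model:

```typescript
// Illustrative sketch only: BoardEntry, the stage names, and "converged"
// are assumptions for explanation, not the CLI's actual schema.
type Stage = "problem" | "goal" | "path" | "validation" | "findings";

interface BoardEntry {
  stage: Stage;
  text: string;
  proven: boolean; // false = proof debt: a claim not yet validated against reality
}

// The board converges when no entry carries an unproven claim.
const converged = (board: BoardEntry[]): boolean =>
  board.every((e) => e.proven);

// One iteration: repair the cheapest meaningful stage, i.e. the earliest
// entry that still owes proof.
function repairCheapest(board: BoardEntry[]): BoardEntry[] {
  const i = board.findIndex((e) => !e.proven);
  if (i === -1) return board; // already converged
  return board.map((e, j) => (j === i ? { ...e, proven: true } : e));
}

let board: BoardEntry[] = [
  { stage: "problem", text: "write the problem down", proven: true },
  { stage: "goal", text: "write the goal down", proven: true },
  { stage: "path", text: "choose one bounded path", proven: false },
  { stage: "validation", text: "validate against reality early", proven: false },
];

while (!converged(board)) {
  board = repairCheapest(board);
}
```

The point of the sketch is the ordering: repair always targets the earliest broken stage, so a bad problem statement gets fixed before anyone patches downstream code.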
So the promise is not "the model becomes smarter". The promise is "the model is pushed into a more reliable working order".
How To Use
Install it with:

```shell
npx emergent-thinking install
```

or:

```shell
bunx emergent-thinking install
```

or:

```shell
pnpx emergent-thinking install
```

The normal usage is not "manually type lots of CLI commands".
The normal usage is:
- install the skill
- tell your coding agent to use emergent-thinking
- let the skill drive the workflow
- let the CLI persist the board inside the current project
If you want the actual operating protocol, read:
- skills/emergent-thinking/SKILL.md
- skills/emergent-thinking/references/board-protocol-rules.md
- skills/emergent-thinking/references/how-to-use-subagents.md
If you want contribution, benchmark reproduction, or protocol iteration guidance, read:
- CONTRIBUTING.md
Benchmark Snapshot
Latest checked-in protocol_micro summary from evals/inspect_ai:
- On `anthropic/glm-5.1`, `baseline` scored 0.875, `sequential` scored 0.875, `emergent` scored 1.0, and `emergent_compact` also scored 1.0 while using 1,876 fewer total tokens than `emergent`.
This is not a universal proof. It is one concrete signal that the protocol can improve move ordering, and that prompt compaction can sometimes preserve quality while reducing cost.
Repository Pointers
- Skill: skills/emergent-thinking/SKILL.md
- Core protocol rules: skills/emergent-thinking/references/board-protocol-rules.md
- Subagent extension: skills/emergent-thinking/references/how-to-use-subagents.md
- Contribution and benchmark reproduction: CONTRIBUTING.md
