`@oldskultxo/cce` v1.0.0: Codex Context Engine, a CLI runtime layer for Codex CLI
# cce — Codex with adaptive context
Codex is powerful.
But it forgets everything between runs.
cce is an experiment to fix that.
It is designed to reduce wasted context, reuse memory across runs, and make Codex workflows more consistent inside a real repository.
## Why cce
Codex is powerful, but raw Codex usage is still mostly stateless:
- you repeat repository context
- you lose previous decisions between runs
- you oversend context for simple tasks
- you undersend context for complex tasks
cce adds:
- adaptive context selection (`direct`, `light`, `full`)
- cheap-first escalation when low-context execution is insufficient
- persistent repository memory
- path-aware memory retrieval
- project facts detection
- failure-aware retry behavior
- context telemetry and learning
- automatic model routing by task/context level
- knowledge mods in `.codex_context_engine/library/`
- communication modes, including caveman
## What you get
- less manual prompt setup
- less repeated repository context
- better reuse of prior work
- safer long-running Codex executions
- project-local runtime scaffolding
## Positioning
cce is not a replacement for Codex.
It is a runtime layer for Codex:
- Codex still does the execution
- cce decides how much context to send
- cce stores reusable memory and runtime state
## Install

```
npm install -g @oldskultxo/cce
npm install -g @openai/codex
```

## First-time setup
```
cd /path/to/your/project
cce init
cce auth login
cce auth status
```

Then run:

```
cce run "inspect the auth flow and suggest improvements"
```

## Quick examples
Run a task:

```
cce run "implement JWT middleware and update login route"
```

Run from a prompt file:

```
cce run @prompt.md
```

Ask for a plan:

```
cce plan "design a migration plan for the settings API"
```

Force minimal context:

```
cce run --context direct "explain this regex"
```

Force full engine context:

```
cce run --context full @roadmap.md
```

Inspect runtime health:

```
cce doctor
```

## Core concepts
### 1. Adaptive context
cce chooses one of:
- `direct` — near-raw prompt, minimal overhead
- `light` — compact packet, small memory slice
- `full` — full engine packet, project facts, selected memory, mods
This is the main mechanism for reducing token waste.
When auto is enabled, cce can start cheaper and escalate only if the task clearly needs more context.
### 2. Persistent memory
cce stores reusable execution history, facts, and failure patterns in repository-local runtime folders.
This lets later runs reuse signal instead of rebuilding everything from scratch.
### 3. Project facts
cce detects stack, package manager, important files, source folders, test folders, and entry points.
These facts are only injected when they are likely to help.
### 4. Mods
Reusable knowledge lives in:
```
.codex_context_engine/library/mods/<mod_name>/
```

Use mods to organize domain-specific instructions, references, and patterns.
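A mod is just a folder of files under that library path. As a minimal sketch (the mod name `auth_patterns` and its file layout here are illustrative choices, not a cce convention):

```shell
# Create a hypothetical mod named "auth_patterns" (example name only)
mkdir -p .codex_context_engine/library/mods/auth_patterns

# Drop domain notes into the mod so full-context runs can draw on them
cat > .codex_context_engine/library/mods/auth_patterns/notes.md <<'EOF'
## Auth flow notes
- Project-specific auth conventions, references, and patterns go here.
EOF
```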
### 5. Communication modes
Communication mode affects how results are communicated, not the reasoning goal.
Modes:
- `normal`
- `caveman_lite`
- `caveman_full`
- `caveman_ultra`
## Runtime scaffold

`cce init` creates and repairs a local runtime scaffold such as:
```
.codex_context_engine/
  config.json
  state.json
  tmp/
  rules/
  memory/
  planner/
  cost/
  task_memory/
  failure_memory/
  memory_graph/
  metrics/
  global_metrics/
  library/
AGENTS.md
CONTEXT_SAVINGS.md
```

These are runtime artifacts and should stay out of version control.
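One way to keep them out, assuming the scaffold sits at the repository root (adjust if `AGENTS.md` or `CONTEXT_SAVINGS.md` should also be excluded in your setup):

```shell
# Ignore the cce runtime scaffold in git
echo '.codex_context_engine/' >> .gitignore
```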
## Documentation
Detailed command usage lives under `docs/`:
- Getting started
- Authentication
- Defaults
- Context strategies
- Execution policies
- Model routing
- Communication modes
- Telemetry and savings
- Mod library
- Mod retrieval
## Command reference
- `cce init`
- `cce auth`
- `cce run`
- `cce plan`
- `cce doctor`
- `cce caveman`
- `cce learn`
- `cce savings_report`
- `cce user_preference`
- `cce codex`
## Current status
cce is usable now, but it is still an evolving product.

This early version already provides:
- a publishable package shape
- adaptive context routing
- repository-local memory
- safer execution controls
- structured command help and docs
The next step is proving the optimization with hard measurements across real workloads, along with:
- deeper retrieval
- better mod usage
- a stronger planning layer
Feedback is very welcome.
## Contributing
Contributions welcome.
Focus areas:
- savings reports
- optimization strategies
- failure handling improvements
- evolving features
- comments
