`@pmaddire/gcie` v0.1.15: GraphCode Intelligence Engine one-command setup and context CLI
# GraphCode Intelligence Engine (GCIE)
GCIE is a graph-first code intelligence engine that minimizes LLM prompt context.
It is designed for coding-agent workflows where the goal is to retrieve the smallest useful set of code and operational context instead of reading whole files or whole directories into the model.
## How It Works
GCIE is an adaptive context retrieval engine for coding agents.
At a high level:
1. **Index + architecture snapshot**
   - `gcie index .` scans the repo and builds retrieval artifacts under `.gcie/`.
   - GCIE tracks architecture/retrieval state so it can route future queries better.
2. **Query classification**
   - `gcie context` classifies each request by intent and structure (single-file, same-layer pair, cross-layer, multi-hop).
3. **Retrieval routing**
   - GCIE chooses a retrieval strategy (`plain`, `plain_gapfill`, `plain_chain`, or slices where useful), path scope, token budget, and usage policy (`hybrid`, `force`, or `minimal`/`off`). `--budget auto` uses built-in heuristics; explicit budgets are available when needed.
4. **Gap-fill + must-have recovery**
   - If expected support files are missing, GCIE runs targeted follow-up retrieval to recover must-have files instead of over-fetching whole-repo context.
5. **Adaptation loop (optional but recommended)**
   - `gcie adapt .` benchmarks repo-local cases, selects per-family methods, and runs efficiency trials under an accuracy gate.
   - Results are written to `.planning/post_init_adaptation_report.json` and `.gcie/context_config.json`.
6. **Fast path for day-to-day use**
   - After adaptation, most tasks should run through `gcie context` with small prompt footprints and high recall.
The practical goal is to keep must-have coverage while minimizing token cost.
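The classification and routing steps above can be sketched as a toy heuristic. The layer heuristic, family names, and routing table below are illustrative assumptions, not GCIE's actual implementation:

```python
# Toy sketch of query classification and strategy routing.
# All heuristics, family labels, and budgets here are illustrative
# assumptions; GCIE's real routing is learned per repo via adaptation.

def classify_structure(files: list[str]) -> str:
    """Bucket a request by how many files/layers it appears to touch."""
    layers = {f.split("/", 1)[0] for f in files}  # crude layer = top-level dir
    if len(files) <= 1:
        return "single-file"
    if len(layers) == 1:
        return "same-layer pair" if len(files) == 2 else "multi-hop"
    return "cross-layer"

def route(structure: str) -> dict:
    """Pick a retrieval strategy and budget per family (illustrative table)."""
    table = {
        "single-file":     {"strategy": "plain",         "budget": "auto"},
        "same-layer pair": {"strategy": "plain_gapfill", "budget": "auto"},
        "cross-layer":     {"strategy": "plain_chain",   "budget": 1200},
        "multi-hop":       {"strategy": "slices",        "budget": 1200},
    }
    return table[structure]

# A frontend file plus a backend file spans layers, so it routes
# to the chain strategy with a pinned budget.
print(route(classify_structure(["frontend/src/App.jsx", "app.py"])))
```

The point of the sketch is only the shape of the pipeline: structure detection first, then a per-family strategy/budget lookup.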
## Quick Start

- Create venv: `python -m venv .venv`
- Install deps as needed (networkx, GitPython, typer): `.venv\Scripts\python.exe -m pip install networkx GitPython typer`
- Run tests: `.venv\Scripts\python.exe -m unittest`
- CLI help: `.venv\Scripts\python.exe -m cli.app --help`
## Easiest Setup In Any Repo
Use this when you want a fast drop-in setup for coding agents.
- Install the GCIE CLI in the target repo (via your preferred method: npm link, local wrapper, or direct Python module).
- Copy `GCIE_USAGE.md` into the target repo root.
- Run one index pass: `gcie.cmd index .`
- Start using adaptive retrieval immediately:

  ```
  gcie.cmd context . "<task>" --intent edit --budget auto --mode adaptive --usage-policy hybrid
  ```
No heavy upfront tuning is required. The workflow starts portable-first and only adds local overrides after repeated miss patterns.
One-command repo bootstrap:

```
npx -y @pmaddire/gcie@latest setup .
```

This creates `.gcie` architecture tracking files, copies portable agent workflow docs, and runs an initial index pass.
## Canonical Retrieval Protocol (2026-03)
Default protocol is now adaptive by task family:
- `plain-context-first` for most tasks
- `slicer-first` only where architecture/routed multi-hop families benchmark better
- `direct-file-check` (`rg`) whenever must-have coverage is uncertain
Key rule: one mode does not fit all families. Mode routing is part of retrieval quality.
## Latest Protocol Benchmark Snapshot
Current protocol performance target: 78.9% average token savings while preserving high accuracy.
From an external 50-query mixed-layer benchmark:
- Stable plain-context baseline: `1501.3` avg tokens, `78.6%` savings, `100%` accuracy, `100%` full-hit
- Naive slicer-first: `1979.9` avg tokens, `72.4%` savings, `100%` accuracy, `100%` full-hit
- Adapted family-routed protocol: `1372.3` avg tokens, `79.5%` savings, `100%` accuracy, `100%` full-hit
Net: adapted protocol preserved full accuracy while reducing average tokens by ~129 vs stable baseline.
## NPX One-Liner
After publishing to npm, users can set up any repo with one command:
```
npx -y @pmaddire/gcie@latest setup .
```

This runs `gcie setup .` in the current repo by default.
If Python deps are missing, GCIE now bootstraps a local package venv and installs required runtime dependencies automatically on first run.
Optional setup flags are passed through:

```
npx -y @pmaddire/gcie@latest setup . --no-index
npx -y @pmaddire/gcie@latest setup . --force
```

For command-only usage without setup:

```
npx -y @pmaddire/gcie@latest --help
```

## Agent Integration
To make your coding agent use GCIE automatically, add this trigger line to your agent instructions (system prompt / repo instruction file):
```
Use GCIE for context lookup before reading files or making edits. Follow GCIE_USAGE.md.
```
Required file:
- keep `GCIE_USAGE.md` in the target repo root
Recommended setup:
- Run one-command setup: `npx -y @pmaddire/gcie@latest setup .`
- Add the trigger line above to your agent instruction file.
- Start normal coding tasks; the agent should use the GCIE-first retrieval workflow.
## One-Command GitHub Bootstrap
Run this from the target repo to download GCIE from GitHub and set it up automatically:
```
powershell -ExecutionPolicy Bypass -Command "iwr https://raw.githubusercontent.com/pmaddire/GBCRSS/main/scripts/bootstrap_from_github.ps1 | iex"
```

What it does:

- clones `https://github.com/pmaddire/GBCRSS.git`
- creates a temporary GCIE venv
- installs minimal deps
- runs `gcie setup` against your current repo
## In-Depth Setup

### A) Use GCIE directly from this repo
- Create venv: `python -m venv .venv`
- Install deps: `.venv\Scripts\python.exe -m pip install -r requirements.txt`
  - If `requirements.txt` is missing, install minimal deps: `.venv\Scripts\python.exe -m pip install networkx GitPython typer`
- Run the CLI: `.venv\Scripts\python.exe -m cli.app --help`
### B) Use GCIE from another repo via npm link
- In the GCIE repo: `npm link`
- In your target repo: `npm link @pmaddire/gcie`
- Verify: `gcie --help`
### C) Windows note

If PowerShell blocks the shim, use `gcie.cmd` instead of `gcie`.
## NPM Wrapper

This repo includes a lightweight npm wrapper so you can run `gcie` like other npm CLIs.
- In the GCIE repo: `npm link`
- In the target repo: `gcie --help`

Local option: `npm install`, then `npx @pmaddire/gcie@latest --help`
The wrapper prefers .venv in the GCIE repo and falls back to system Python.
## Performance Snapshot (AEO benchmark)
Two profiles observed after the update:
High-recall profile (recommended):
- Total GCIE tokens: 5,871
- No-tool baseline: 23,543
- Savings: 75.1%
- Coverage: 5/5 required files for all 3 tasks
Low-token profile (aggressive):
- Total GCIE tokens: 2,709
- No-tool baseline: 23,543
- Savings: 88.5%
- Coverage: incomplete (missed key files)
Per-task high-recall results:
- export_ui: 1,934 vs 5,481 (64.7% saved)
- blank_canvas: 2,322 vs 13,730 (83.1% saved)
- refine_patch: 1,615 vs 4,332 (62.7% saved)
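The percentages above follow directly from `savings = 1 - gcie_tokens / baseline_tokens`; a quick sanity check of the reported figures:

```python
# Recompute the reported savings percentages from the raw token counts.
def savings(gcie_tokens: float, baseline_tokens: float) -> float:
    """Percent of baseline tokens avoided, rounded to one decimal."""
    return round((1 - gcie_tokens / baseline_tokens) * 100, 1)

assert savings(5871, 23543) == 75.1   # high-recall profile vs no-tool baseline
assert savings(2709, 23543) == 88.5   # low-token profile vs no-tool baseline
assert savings(1934, 5481)  == 64.7   # export_ui
assert savings(2322, 13730) == 83.1   # blank_canvas
assert savings(1615, 4332)  == 62.7   # refine_patch
```

All five reported numbers reproduce, so the two profiles differ only in how much context they retrieve against the same 23,543-token baseline.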
## Current Accuracy And Token Snapshot

### Mixed-layer external repo finding

In a separate active repo with frontend/backend/build wiring, the newer repo-local `gcie context` workflow performed much better when used with:

- file-first, symbol-heavy queries
- `--budget 1200` for cross-layer tasks
- `rg` verification before edits
Observed savings there:
- Frontend/API task: about `89.5%`
- Theme/build task: about `91.9%`
- Backend/config task: about `78.2%`
- Average: about `86.5%`
Important notes:

- `--budget auto` was too conservative for those cross-layer tasks
- `--budget 1200` consistently improved recall without needing broad manual reads
- `1500` added more noise without materially helping more than `1200`
## Command Reference

Use `gcie`, or `gcie.cmd` on Windows.

### Setup / Lifecycle

```
gcie setup .
gcie setup . --force
gcie setup . --no-index
gcie setup . --adapt --adapt-benchmark-size 25 --adapt-efficiency-iterations 8 --adapt-workers 6
gcie remove .
gcie remove . --remove-planning
gcie remove . --keep-usage --keep-setup-doc
```
### Index and Retrieval

```
gcie index .
gcie context . "<task>" --intent edit --budget auto --mode adaptive --usage-policy hybrid
gcie context . "<task>" --intent debug --budget 1200 --mode adaptive --usage-policy force
gcie context . "<task>" --intent explore --budget auto --mode basic --usage-policy off
gcie context-slices . "<task>" --intent edit --profile recall
gcie context-slices . "<task>" --intent edit --profile low --pin frontend/src/App.jsx --pin-budget 300
```
### Usage Policy

- `hybrid` is the default. It keeps the existing balance between recall and token cost.
- `force` always takes the richer GCIE retrieval path, even for simpler prompts.
- `minimal` or `off` keeps retrieval tiny when you already know the target files or only need a quick probe.
### Adaptation and Profile State

```
gcie adapt . --benchmark-size 25 --efficiency-iterations 8 --adapt-workers 6
gcie adapt . --benchmark-size 25 --efficiency-iterations 8 --adapt-workers 6 --clear-profile
gcie adaptive-profile .
gcie adaptive-profile . --clear
```

Adaptation evaluates policy-aware candidates (`plain_minimal`, `plain`, `plain_force`) plus chain/gapfill/rescue/slices, and picks per family under an accuracy gate.
### Utility Commands

```
gcie query <path> "<question>"
gcie debug <path> "<question>"
gcie cache-status .
gcie cache-warm .
gcie cache-clear .
```
## Recommended Workflow

1) Bootstrap once per repo

```
gcie setup . --adapt --adapt-benchmark-size 25 --adapt-efficiency-iterations 8 --adapt-workers 6
```

2) Day-to-day retrieval

```
gcie context . "<task>" --intent edit --budget auto --mode adaptive --usage-policy hybrid
```

For cross-layer flows, use file-first, symbol-rich queries and optionally pin the budget:

```
gcie context . "frontend/src/App.jsx selectedTheme /api/convert/start app.py start_convert" --intent edit --budget 1200 --mode adaptive --usage-policy force
```

3) Verify before edits on critical changes

```
rg -n "<symbol1>|<symbol2>|<symbol3>" .
```

4) Re-adapt only when needed

Use adaptation again after large refactors, architecture shifts, or repeated recall misses:

```
gcie adapt . --benchmark-size 25 --efficiency-iterations 8 --adapt-workers 6
```

If adaptation quality drifts due to stale profile state, reset first:

```
gcie adaptive-profile . --clear
gcie adapt . --benchmark-size 25 --efficiency-iterations 8 --adapt-workers 6 --clear-profile
```

## Notes
- `requested_benchmark_size` can be higher than the `benchmark_size` actually used when fewer unique repo-local benchmark cases are available.
- `status: accuracy_locked_but_cost_risky` can appear when the selected 100%-accuracy policy is compared against a cheaper but lower-accuracy baseline.
- Primary success criteria remain must-have coverage and pass rate; optimize cost after lock.
