@finishit/finishit
v0.2.0
Finally finish the project you started six months ago. One command, one PR.
FinishIt
Finally finish the project you started six months ago.
```shell
npx @finishit/finishit github.com/you/that-repo-from-2024
```

Website: nexus-prime.cfd/finishit
Point it at any abandoned repo. An AI agent indexes the codebase, identifies what you were stuck on, and ships a PR.
How it works
- Indexes your repo. Git history, file tree, README, open issues, every `wip:` commit, and every TODO comment land in a local SQLite database at `~/.finishit/repos/<hash>/memory.db`.
- Diagnoses the stuck point. The first thing it tells you is what you were actually working on, quoting the real wip commit and the real TODO. No hallucinated paths: a guard rejects any output that names a file or hash not in the indexed facts.
- Ships a PR from a git worktree on a `finishit/<slug>-<ts>` branch. Tests run if a test command is detected. The default branch is never touched. One PR per command. No autonomous loops.
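The worktree isolation described above can be sketched with plain git commands. This is a conceptual sketch, not FinishIt's actual internals: the demo repo, the `fix-auth` slug, and the epoch timestamp are illustrative assumptions; only the `finishit/<slug>-<ts>` branch shape comes from the README.

```shell
#!/bin/sh
set -eu
# Conceptual sketch: work happens in a separate git worktree on a fresh
# finishit/<slug>-<ts> branch, so the default branch in the main checkout
# is never modified. Repo, slug, and timestamp format are illustrative.
demo=$(mktemp -d) && cd "$demo"
git init -q repo && cd repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

slug=fix-auth
ts=$(date +%s)
branch="finishit/$slug-$ts"

git worktree add "../wt-$slug" -b "$branch"   # isolated checkout on the new branch
# ... the agent would edit files in ../wt-$slug, run any detected tests,
#     commit, and open exactly one PR from $branch ...
git worktree remove "../wt-$slug" --force     # the main checkout was never touched
```

Because the branch lives in its own worktree, a half-finished run can be discarded by removing the worktree and deleting the branch, with no effect on the default branch.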
Setup
```shell
# Option 1: use a CLI you already have (zero config; recommended)
brew install codex                        # uses your ChatGPT subscription
# or: npm i -g @anthropic-ai/claude-code  # uses your Claude subscription

# Option 2: BYO API key
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...

# Option 3: local Ollama
ollama serve
```

Provider auto-detection order (first match wins):
- `claude` CLI on PATH (`--model claude-code` to force)
- `codex` CLI on PATH (`--model codex` to force)
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- A reachable Ollama server at `OLLAMA_BASE_URL` or `http://localhost:11434`
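The detection order amounts to a first-match-wins cascade. A minimal sketch of that logic — `detect_provider` and its yes/no arguments are hypothetical names for illustration; the real tool probes PATH, environment variables, and the Ollama endpoint itself:

```shell
#!/bin/sh
# Sketch of the first-match-wins provider cascade. detect_provider is a
# hypothetical function; each argument is "yes" or "no" standing in for a probe.
detect_provider() {
  # $1: claude CLI on PATH?   $2: codex CLI on PATH?
  # $3: ANTHROPIC_API_KEY set? $4: OPENAI_API_KEY set? $5: Ollama reachable?
  if   [ "$1" = yes ]; then echo claude-code
  elif [ "$2" = yes ]; then echo codex
  elif [ "$3" = yes ]; then echo anthropic-api
  elif [ "$4" = yes ]; then echo openai-api
  elif [ "$5" = yes ]; then echo ollama
  else echo "no provider found" >&2; return 1
  fi
}

detect_provider no no yes yes no   # prints: anthropic-api
```

Note the last call: with both API keys set, Anthropic wins because it appears earlier in the cascade.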
Codex CLI (uses your ChatGPT subscription)
```shell
brew install codex
npx @finishit/finishit .
# or force it: npx @finishit/finishit . --model codex
```

Set `FINISHIT_CODEX_MODEL` to override the model sent to codex (default: gpt-5).
Claude Code CLI (uses your Claude subscription)
```shell
npm i -g @anthropic-ai/claude-code
npx @finishit/finishit .
# or force it: npx @finishit/finishit . --model claude-code
```

Ollama, no key
```shell
ollama serve
ollama pull qwen2.5-coder:14b
npx @finishit/finishit . --model qwen2.5-coder:14b
```

Use `OLLAMA_BASE_URL` when Ollama is not on the default port:

```shell
export OLLAMA_BASE_URL=http://localhost:11434
npx @finishit/finishit . --model qwen2.5-coder:14b
```

Anthropic
```shell
export ANTHROPIC_API_KEY=sk-ant-...
npx @finishit/finishit . --model claude-sonnet-4-6
```

OpenAI

```shell
export OPENAI_API_KEY=sk-...
npx @finishit/finishit . --model gpt-5
```

Passing `--model` forces routing: `codex` → Codex CLI, `claude-code` → Claude Code CLI, `claude-*` → Anthropic API, `gpt-*` and supported o-series prefixes → OpenAI API, anything else → Ollama. No account, no signup, no telemetry.
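That routing is a straightforward prefix match. A hedged sketch — `route_model` is a hypothetical name, and which o-series prefixes count as "supported" is an assumption here (`o1`, `o3`, `o4`):

```shell
#!/bin/sh
# Sketch of --model routing by prefix. route_model is hypothetical; the
# o-series prefixes are an assumption about what "supported" means.
route_model() {
  case "$1" in
    codex)              echo "codex CLI" ;;
    claude-code)        echo "Claude Code CLI" ;;   # exact match, checked before claude-*
    claude-*)           echo "Anthropic API" ;;
    gpt-*|o1*|o3*|o4*)  echo "OpenAI API" ;;
    *)                  echo "Ollama" ;;
  esac
}

route_model claude-sonnet-4-6   # prints: Anthropic API
route_model qwen2.5-coder:14b   # prints: Ollama
```

The order of the `case` arms matters: `claude-code` must be tested before the `claude-*` glob, or the CLI route would never be taken.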
What it does NOT do
- No web UI, no dashboard, no IDE plugin.
- No team features, no shared memory, no Slack/Linear integration.
- No code leaves your machine except to your configured LLM provider.
- No autonomous loops or scheduled runs.
Built on Nexus Prime
FinishIt is the smallest possible surface of the Nexus Prime engine: Memory Fabric, Ghost Pass, Session DNA, and worktree-isolated execution. If you want this for your team's monorepo, with shared memory across every agent you run, see nexus-prime.cfd/teams.
MIT.
