specvector v0.13.9: Context-aware AI code review using Model Context Protocol (MCP)
# SpecVector

Context-aware code review. Powered by agents, not just diffs.
SpecVector is an open-source AI code reviewer that actively explores your codebase — reading related files, searching for patterns, checking Linear tickets, and understanding your architecture the way a senior engineer would.
## Install

Add to any GitHub Actions workflow:

```yaml
- uses: Not-Diamond/specvector@v0
  with:
    pr-number: ${{ github.event.pull_request.number || github.event.issue.number }}
    openrouter-api-key: ${{ secrets.OPENROUTER_API_KEY }}
```

Or scaffold the full workflow and config automatically:

```shell
bunx specvector init
```

This creates `.github/workflows/specvector.yml` and `.specvector/config.yaml`.
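Put together, a minimal workflow file might look like the sketch below. The trigger events, permissions, and job name here are assumptions for illustration; `specvector init` generates the canonical file:

```yaml
# .github/workflows/specvector.yml — illustrative sketch, not the generated file
name: SpecVector Review
on:
  pull_request:            # review every PR automatically
  issue_comment:           # enables on-demand "@specvector review" comments
    types: [created]
permissions:
  contents: read
  pull-requests: write     # assumed: needed to post inline review comments
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: Not-Diamond/specvector@v0
        with:
          pr-number: ${{ github.event.pull_request.number || github.event.issue.number }}
          openrouter-api-key: ${{ secrets.OPENROUTER_API_KEY }}
```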
## Add your API key

Go to **Settings > Secrets and variables > Actions** in your GitHub repo and add:

- `OPENROUTER_API_KEY` — get one at openrouter.ai
- `LINEAR_API_TOKEN` — optional, for ticket context
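If you prefer the terminal, the same secrets can be added with the GitHub CLI (requires `gh` authenticated against the repo; `gh secret set` prompts for the value):

```shell
# Add the OpenRouter key as an Actions secret (prompts for the value)
gh secret set OPENROUTER_API_KEY

# Optional: Linear token for ticket context
gh secret set LINEAR_API_TOKEN
```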
## Open a PR
SpecVector reviews automatically on every pull request.
Want a review on demand? Comment @specvector review on any PR.
## Why SpecVector
Most AI review tools read the diff. SpecVector reads the architecture.
- Agentic exploration — Greps for usages, reads imports, follows references. Active investigation, not passive text matching.
- Requirements verification — Fetches your Linear ticket via MCP and checks if the code satisfies the acceptance criteria.
- Smart pipeline — Classifies files by risk (SKIP / FAST_PASS / DEEP_DIVE). Docs get a fast pass. Auth code gets a deep dive.
- Inline comments — Findings are posted on the exact diff lines, not buried in a wall of text.
- Incremental reviews — On push, only reviews what changed since the last review.
- Interactive — Dismiss findings with a thumbs-down reaction. Trigger re-reviews with @specvector review.
- Flexible hosting — Use OpenRouter for cloud models, or Ollama to keep LLM calls local so your code is never sent to a third-party AI provider.
## How It Works

1. Classify — Files are triaged by risk level (skip, fast pass, or deep dive)
2. Explore — The agent reads related code, searches for patterns, checks tickets
3. Review — Findings are posted as inline comments on the exact lines
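The classify step can be illustrated with a toy triage function. This is a hypothetical sketch of the idea, not SpecVector's actual heuristics; the path patterns and the default level are assumptions, while the SKIP / FAST_PASS / DEEP_DIVE labels mirror the ones described above:

```typescript
// Toy risk triage: map a changed file path to a review depth.
// The patterns below are illustrative assumptions.
type RiskLevel = "SKIP" | "FAST_PASS" | "DEEP_DIVE";

function classify(path: string): RiskLevel {
  // Generated or binary artifacts: nothing to review
  if (/\.(lock|snap|svg|png)$/.test(path)) return "SKIP";
  // Security-sensitive areas get the full agentic treatment
  if (/(^|\/)(auth|security|payment)\//.test(path)) return "DEEP_DIVE";
  // Docs only need a quick sanity pass
  if (/\.(md|txt)$/.test(path)) return "FAST_PASS";
  // Default for ordinary code (an assumption)
  return "DEEP_DIVE";
}
```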
## CLI Usage

```
specvector init                             Scaffold workflow and config files
specvector review <pr-number>               Review a pull request
specvector review <pr-number> --dry-run     Preview review without posting
specvector review <pr-number> --mock        Use mock review (no LLM)
specvector --help                           Show this help
```

```shell
# Preview a review (no posting)
bunx specvector review 123 --dry-run

# Mock mode (no LLM calls)
bunx specvector review 123 --mock --dry-run
```

## Configuration
`.specvector/config.yaml`:

```yaml
provider: openrouter                 # default: openrouter | ollama
model: anthropic/claude-sonnet-4.5   # default: anthropic/claude-sonnet-4.5
strictness: normal                   # default: normal | strict | lenient
skipBots: true                       # default: true — skip dependabot/renovate PRs
maxPrSize: 500                       # default: 500 — warn above N lines (0 = off)
selfReviewCheck: true                # default: true — warn when no human reviewer
```

All settings can be overridden via environment variables: `SPECVECTOR_PROVIDER`, `SPECVECTOR_MODEL`, `SPECVECTOR_STRICTNESS`, etc.
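The env-over-file precedence can be illustrated with a small sketch. The `resolve` helper here is hypothetical, not SpecVector's actual config loader; it only demonstrates that a set `SPECVECTOR_*` variable wins over the yaml value:

```typescript
// Hypothetical illustration of env-over-config precedence.
type Config = { provider: string; model: string; strictness: string };

// Values as they would appear in .specvector/config.yaml
const fileDefaults: Config = {
  provider: "openrouter",
  model: "anthropic/claude-sonnet-4.5",
  strictness: "normal",
};

// An env var like SPECVECTOR_PROVIDER, when set, overrides the file value.
function resolve(env: Record<string, string | undefined>): Config {
  return {
    provider: env.SPECVECTOR_PROVIDER ?? fileDefaults.provider,
    model: env.SPECVECTOR_MODEL ?? fileDefaults.model,
    strictness: env.SPECVECTOR_STRICTNESS ?? fileDefaults.strictness,
  };
}
```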
## LLM Providers
| Provider | Use Case | Setup |
| -------------- | -------------------------------------------- | -------------------- |
| OpenRouter | Cloud — Claude, GPT, Llama, Gemini, and more | OPENROUTER_API_KEY |
| Ollama | Local — no code sent to third-party AI | ollama serve |
```shell
# Use a local model
SPECVECTOR_PROVIDER=ollama SPECVECTOR_MODEL=llama3.2 bunx specvector review 123 --dry-run
```

## Development
```shell
git clone https://github.com/Not-Diamond/specvector.git
cd specvector && bun install
bun test        # run tests
bun run check   # type check
```

## License
MIT
## Contributing
PRs welcome. Run `bun test` before submitting.
