vectorlint
v2.5.0
An LLM-based prose linter that lets you enforce your style guide in one prompt
VectorLint: Prompt it, Lint it!

VectorLint is a command-line tool that evaluates and scores content using LLMs. It uses LLM-as-a-Judge to catch terminology, technical accuracy, and style issues that require contextual understanding.

Installation
Option 1: Global Installation
Install globally from npm:
```shell
npm install -g vectorlint
```
Verify installation:
```shell
vectorlint --help
```
Option 2: Zero-Install with npx
Run VectorLint without installing:
```shell
npx vectorlint path/to/article.md
```
Enforce Your Style Guide
Define rules as Markdown files with YAML frontmatter to enforce your specific content standards:
- Check SEO Optimization - Verify content follows SEO best practices
- Detect AI-Generated Content - Identify artificial writing patterns
- Verify Technical Accuracy - Catch outdated or incorrect technical information
- Ensure Tone & Voice Consistency - Match content to appropriate tone for your audience
If you can write a prompt for it, you can lint it with VectorLint.
👉 Learn how to create custom rules →
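As a sketch, a rule file might look like the following. The frontmatter field names here are assumptions for illustration, not VectorLint's actual schema — see the custom-rules guide for the real format:

```markdown
---
name: avoid-passive-voice
severity: warning
---
Flag sentences written in the passive voice and suggest an active-voice rewrite.
```

The body of the file is the prompt: it describes, in plain language, what the LLM judge should look for.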
Quality Scores
VectorLint scores your content using error density and a rubric-based system, enabling you to measure quality across documentation. This gives your team a shared understanding of which content needs attention and helps track improvements over time.
- Density-Based Scoring: For errors that can be counted, scores are calculated based on error density (errors per 100 words), making quality assessment fair across documents of any length.
- Rubric-Based Scoring: For more nuanced quality standards, like flow and completeness, scores are graded on a 1-4 rubric system and then normalized to a 1-10 scale.
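The two scoring modes can be sketched as follows. The linear rubric normalization and the density function are illustrative assumptions, not VectorLint's published formulas:

```python
def error_density(error_count: int, word_count: int) -> float:
    """Errors per 100 words, so documents of any length compare fairly."""
    return error_count / word_count * 100

def normalize_rubric(grade: int) -> int:
    """Linearly map a 1-4 rubric grade onto a 1-10 scale (assumed mapping)."""
    return 1 + (grade - 1) * 3
```

For example, 4 errors in a 200-word document is a density of 2.0 errors per 100 words, and a rubric grade of 4 normalizes to 10.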
How VectorLint Reduces False Positives
VectorLint uses a PAT (Pay A Tax) evaluation approach:
- Candidate generation: the model returns all potential violations with required gate-check fields (rule support, exact evidence, context support, plausible non-violation, and fix quality).
- Deterministic surfacing: VectorLint applies a strict filter and only surfaces violations that pass all required gates.
This means CLI output is intentionally stricter than raw model candidates, reducing noisy findings and improving precision.
The confidence gate is user-configurable via:
```shell
CONFIDENCE_THRESHOLD=0.75
```
- Default: 0.75
- Lower values surface more findings (higher recall, more noise)
- Higher values surface fewer findings (higher precision, fewer false positives)
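The two-stage filter above can be sketched roughly as follows. The candidate structure and field names are assumptions derived from the gate names listed earlier, not VectorLint's internal data model:

```python
# Gate-check fields each candidate violation must pass (names assumed from
# the list above: rule support, exact evidence, context support, plausible
# non-violation ruled out, and fix quality).
REQUIRED_GATES = [
    "rule_support",
    "exact_evidence",
    "context_support",
    "no_plausible_non_violation",
    "fix_quality",
]

def surface(candidates: list[dict], threshold: float = 0.75) -> list[dict]:
    """Deterministically keep only candidates that pass every required gate
    and meet the confidence threshold."""
    return [
        c for c in candidates
        if all(c.get(gate) for gate in REQUIRED_GATES)
        and c.get("confidence", 0.0) >= threshold
    ]
```

The key point is that the filter is deterministic: the model proposes, but only candidates that pass every gate reach the CLI output.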
Quick Start
1. Zero-Config Mode (Fastest)
If you just want to check your content against a style guide:
```shell
vectorlint init --quick
```
This creates a `VECTORLINT.md` file where you can paste your style guide.
Note: You must set up your credentials in `~/.vectorlint/config.toml` (see Step 3) before running checks.
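As an illustration, a pasted-in `VECTORLINT.md` might contain style-guide prose like this (the content below is a hypothetical example, not a required format):

```markdown
# Style Guide

- Use sentence case for headings.
- Prefer active voice; avoid filler like "it should be noted that".
- Spell the product name "VectorLint", never "Vectorlint".
```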
Then run:
```shell
vectorlint doc.md
```
2. Full Configuration
For a comprehensive setup (custom rule packs, specific targets), run:
```shell
vectorlint init
```
This creates:
- VectorLint Config (`.vectorlint.ini`): Project-specific settings.
- App Config (`~/.vectorlint/config.toml`): LLM provider API keys.
👉 Full configuration reference →
3. Configure API Keys
Open your global App Config (`~/.vectorlint/config.toml`) and uncomment the section for your preferred LLM provider (OpenAI, Anthropic, Gemini, or Azure).
```toml
[env]
LLM_PROVIDER = "openai"
OPENAI_API_KEY = "sk-..."
```
Note: You can also use a local `.env` file in your project, which takes precedence over the global config.
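For example, an equivalent project-local `.env` (assuming the same variable names apply outside the `[env]` table):

```shell
# .env — overrides ~/.vectorlint/config.toml
LLM_PROVIDER="openai"
OPENAI_API_KEY="sk-..."
```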
Run a check:
```shell
vectorlint doc.md
```
VectorLint ships with a built-in preset containing rules for AI pattern detection, directness, and more. The `init` command configures this automatically.
👉 Learn how to create custom rules →
Contributing
We welcome your contributions! Whether it's adding new rules, fixing bugs, or improving documentation, please check out our Contributing Guidelines to get started.
Resources
- Creating Custom Rules - Write your own quality checks in Markdown
- Configuration Guide - Complete reference for `.vectorlint.ini`
