# geo-ai-search-optimization (v2.8.2)

Install and run a Generative Engine Optimization (GEO)-first, SEO-supported Codex skill for website optimization.
The most comprehensive open-source CLI toolkit for Generative Engine Optimization (GEO). Optimize your website for AI-powered search engines — ChatGPT, Perplexity, Gemini, Google AI Overviews, and Bing Copilot.
Zero dependencies. 55+ commands. 18+ analysis dimensions. Full TypeScript support. REST API included.
## Install

```sh
npm install -g geo-ai-search-optimization
```

Or run without installing:

```sh
npx geo-ai-search-optimization diagnose https://example.com
```

## Quick Start
```sh
# Smart diagnosis — auto-detects URL, directory, or file
geo-ai-search-optimization diagnose https://example.com

# Full multi-dimension page audit
geo-ai-search-optimization full-page-audit https://example.com/blog/post

# Generate ready-to-use fix code
geo-ai-search-optimization auto-fix https://example.com/blog/post

# Compare two pages
geo-ai-search-optimization compare https://yoursite.com https://competitor.com

# Interactive HTML dashboard
geo-ai-search-optimization dashboard https://example.com --out report.html

# Start REST API server
geo-ai-search-optimization api-server --port 3456
```

## What It Does
GEO is Generative Engine Optimization — the practice of making your content understandable, extractable, and citable by AI search engines.
This tool analyzes your pages across 18+ dimensions and tells you exactly what to fix:
| Dimension | What It Checks |
|-----------|----------------|
| Base Audit | Title, meta description, canonical, JSON-LD, author signals, Q&A headings |
| Citability | Claim density, entity density, quotable sentences, content structure |
| E-E-A-T | Experience, Expertise, Authoritativeness, Trustworthiness signals |
| Readability | Flesch-Kincaid grade, sentence length, passive voice, reading time |
| Heading Structure | H1-H6 hierarchy, semantic coverage, question headings |
| Internal Links | Link count, anchor text quality, path depth |
| Link Quality | External link health, broken links, domain diversity, rel attributes |
| Social Meta | Open Graph (10 tags), Twitter Card (7 tags) |
| Platform Readiness | ChatGPT, Perplexity, Gemini, Google AI Overviews, Bing Copilot |
| Schema Validation | JSON-LD correctness, required fields, AI discoverability enhancements |
| Content Freshness | Date extraction, temporal signals, stale references, evergreen scoring |
| Security | HTTPS, security headers, viewport, robots meta, mixed content |
| Topic Coverage | TF-IDF keywords, bigrams, topic clusters |
| Image Audit | Alt text quality, lazy loading, srcset, og:image, ImageObject schema |
| Backlink Profile | Authority signals, expert authorship, original research, link-worthiness |
| AI Snippet Readiness | Predicted AI citations, quotable passages, snippet simulation |
| Multi-Language | Language detection, hreflang validation, content-language consistency |
| Canonical | Canonical URL issues, signal conflicts, redirect consistency |
| Accessibility | Semantic HTML, heading order, alt text, ARIA landmarks, form labels, skip links |
| Performance | Page weight, render-blocking resources, lazy loading, resource hints |
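The Citability row above scores things like claim density and quotable sentences. To make the idea concrete, here is a toy heuristic in that spirit. It is illustrative only, not the package's actual scoring: it treats a sentence as quotable when it is 8-30 words long and carries a fact-like token (a digit or a mid-sentence proper noun).

```js
// Toy citability heuristic (illustrative, not the package's real algorithm).
// AI engines tend to lift short, self-contained, fact-bearing sentences,
// so we score the share of sentences that fit that shape.
function toyCitability(text) {
  const sentences = text.split(/(?<=[.!?])\s+/).filter(s => s.trim().length > 0);
  if (sentences.length === 0) return { score: 0, quotable: [] };

  const quotable = sentences.filter(s => {
    const words = s.trim().split(/\s+/);
    const hasFact = /\d/.test(s) || words.slice(1).some(w => /^[A-Z][a-z]/.test(w));
    return words.length >= 8 && words.length <= 30 && hasFact;
  });

  return {
    score: Math.round((quotable.length / sentences.length) * 100),
    quotable
  };
}

const sample =
  'GEO tools analyze pages for AI search. ' +
  'Perplexity cited structured content 40 percent more often in our toy dataset. ' +
  'Nice.';
console.log(toyCitability(sample).score); // → 33 (1 of 3 sentences is quotable)
```

The real `citability` command also weighs entity density and content structure, per the table above.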
## Commands

### Smart Entry Points

```sh
diagnose <url|dir|file>    # Auto-detect input, run the right analysis
summary <url|dir|file>     # One-line score summary
compare <page-A> <page-B>  # Multi-dimension side-by-side comparison
auto-fix <url|file>        # Generate fix code (meta, JSON-LD, robots.txt, llms.txt)
explain <url|file>         # Explain scores with evidence
```

### Page-Level Analysis
```sh
page-audit <url|file>            # Basic GEO page audit
full-page-audit <url|file>       # Full multi-dimension audit (with --save for snapshots)
batch-page-audit <urls...>       # Batch basic audits
batch-full-page-audit <urls...>  # Batch multi-dimension audits
```

### Project-Level Analysis
```sh
scan <dir>                    # Signal scanning
audit <dir>                   # Project audit with scoring
full-audit <dir>              # Project + infra + optional page sampling
bulk-optimize <url|urls-file> # Site-wide audit with prioritized fix list
```

### Specialized Analysis (20+ commands)
```sh
crawlers <url|file>           # AI crawler access (GPTBot, ClaudeBot, etc.)
citability <url|file>         # Static citability scoring
eeat <url|file>               # E-E-A-T signal analysis
readability <url|file>        # Flesch-Kincaid, reading ease
heading-structure <url|file>  # H1-H6 hierarchy analysis
internal-links <url|file>     # Internal link analysis
link-quality <url|file>       # External link quality audit
social-meta <url|file>        # OG + Twitter tags
platform-ready <url|file>     # Multi-platform readiness
validate-schema <url|file>    # JSON-LD validation
validate-llms <url|file>      # llms.txt validation
sitemap <url|file>            # XML sitemap analysis
security <url|file>           # Security headers + checks
freshness <url|file>          # Content age analysis
content-freshness <url|file>  # Deep freshness with temporal signals
topics <url|file>             # Keyword + topic extraction
image-audit <url|file>        # Image SEO/GEO audit
backlink-profile <url|file>   # Authority signal analysis
ai-snippet <url|file>         # AI snippet simulation
canonical <url|file>          # Canonical URL diagnostics
multi-lang <url|file>         # Multi-language / hreflang audit
accessibility <url|file>      # Accessibility audit (12 checks)
performance <url|file>        # Performance hints for AI crawlers
```

### Competitive Analysis
```sh
benchmark <url> --competitors <urls>         # Signal matrix comparison
deep-benchmark <url> --competitors <urls>    # Deep multi-dimension benchmark
content-gap <url> --competitors <urls>       # Content gap analysis
keyword-gap <url> --competitors <urls>       # Keyword coverage gaps
track-competitors <url> --competitors <urls> # Persistent competitor tracking
watch-competitors <url> --competitors <urls> # Single-pass competitor watch
```

### Content Generation & Optimization
```sh
auto-fix <url|file>        # Generate meta tags, JSON-LD, robots.txt, llms.txt
generate-llms <url|file>   # Smart llms.txt generation from site content
optimize-llms <url|file>   # Optimize existing llms.txt
generate-schema <url|file> # Auto-generate JSON-LD schemas
generate-sitemap <url|dir> # Generate XML sitemap
rewrite-content <url|file> # Content improvement suggestions
```

### Tracking & Monitoring
```sh
full-page-audit <url> --save   # Save page snapshot
page-trend <url>               # View page score history
trend                          # View project score trend
monitor <url>                  # Detect score drops
score-history <url>            # View score history with sparklines
record-score <url> --score <n> # Record a score entry
alert-check <url>              # Evaluate alert rules
cache-stats                    # View cache statistics
cache-clear                    # Clear cached results
ci <dir> --min-score 60        # CI/CD gate
watch <dir>                    # Auto-audit on file changes
init-hook                      # Git pre-commit hook
```

### Citation Tracking
```sh
citation-check <url> --queries <q1,q2>       # Check if cited in AI search
citation-monitor <url> --queries-file <file> # Monitor citation rate
```

### Reports & Output
```sh
report <input>                       # Unified report (markdown/html/json)
pdf-report <audit.json>              # PDF-ready HTML report
dashboard <url|file>                 # Interactive HTML dashboard
slack-report <audit.json>            # Slack/Discord webhook report
html-pack <input> --out-dir <dir>    # Static HTML pages
export-pack <input> --out-dir <dir>  # Multi-format export
publish-pack <input> --out-dir <dir> # Full deliverable package
```

### REST API Server
```sh
api-server [--port 3456] [--host 127.0.0.1]
```

CLI caching: URL-based analysis results are cached for 5 minutes. Use `--no-cache` to bypass.

API caching: all API responses are cached for 10 minutes by default. Add `?no-cache=1` to bypass, and use `GET /api/cache-stats` and `POST /api/cache-clear` to manage the cache.
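The 5- and 10-minute expiries above are classic time-to-live (TTL) caching. A minimal in-memory sketch of the idea (illustrative only; the package's exported `createCache` helper may use a different signature and options):

```js
// Illustrative in-memory TTL cache, mirroring the CLI's 5-minute and the
// API's 10-minute expiry described above. NOT the package's createCache;
// its real options may differ.
function makeTtlCache(ttlMs) {
  const store = new Map();
  return {
    get(key) {
      const hit = store.get(key);
      if (!hit) return undefined;
      if (Date.now() - hit.at > ttlMs) {
        store.delete(key); // expired: evict lazily on read
        return undefined;
      }
      return hit.value;
    },
    set(key, value) {
      store.set(key, { value, at: Date.now() });
    },
    size() {
      return store.size;
    }
  };
}

const cache = makeTtlCache(5 * 60 * 1000); // 5 minutes, like the CLI default
cache.set('https://example.com', { score: 72 });
console.log(cache.get('https://example.com').score); // → 72
```

A `--no-cache` style bypass is then just a read that skips `get` and overwrites via `set`.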
Exposes all analysis functions as HTTP endpoints:

```
GET  /api/health
GET  /api/full-audit?url=<url>
GET  /api/citability?url=<url>
GET  /api/eeat?url=<url>
GET  /api/crawlers?url=<url>
GET  /api/link-quality?url=<url>
GET  /api/content-freshness?url=<url>
GET  /api/ai-snippet?url=<url>
GET  /api/structured-data?url=<url>
GET  /api/backlink-profile?url=<url>
GET  /api/accessibility?url=<url>
GET  /api/performance?url=<url>
GET  /api/readability?url=<url>
GET  /api/heading-structure?url=<url>
GET  /api/internal-links?url=<url>
GET  /api/social-meta?url=<url>
GET  /api/security?url=<url>
GET  /api/topics?url=<url>
GET  /api/validate-schema?url=<url>
GET  /api/platform-ready?url=<url>
GET  /api/image-audit?url=<url>
GET  /api/cache-stats
POST /api/cache-clear
GET  /api/endpoints
```

### Init & Config
```sh
init-llms [dir]          # Generate llms.txt template
init-schema <type> [dir] # Generate JSON-LD template
init-config [dir]        # Generate .georc.json
init-hook [dir]          # Generate pre-commit hook
doctor                   # Check installation health
```

## Programmatic API
```ts
import {
  diagnose,
  fullPageAudit,
  comparePages,
  generateAutoFix,
  analyzeCrawlers,
  analyzeCitability,
  analyzeEeat,
  analyzeLinkQuality,
  analyzeContentFreshness,
  simulateAiSnippet,
  generateStructuredData,
  analyzeBacklinkProfile,
  trackCompetitors,
  bulkOptimize,
  generateLlmsTxt,
  optimizeLlmsTxt,
  startApiServer,
  formatSlackReport,
  generateDashboard,
  evaluateAlertRules,
  getScoreHistory,
  auditAccessibility,
  analyzePerformance,
  createCache
} from 'geo-ai-search-optimization';

// Smart diagnosis
const result = await diagnose('https://example.com');
console.log(result.score, result.quickWins);

// Full multi-dimension audit
const audit = await fullPageAudit('https://example.com/blog/post');
console.log(audit.compositeScore, audit.dimensions);

// AI snippet simulation
const snippet = await simulateAiSnippet('https://example.com/blog/post');
console.log(snippet.simulatedSnippets.chatgpt);

// Competitor tracking with alerts
const tracking = await trackCompetitors(
  'https://yoursite.com',
  ['https://competitor1.com', 'https://competitor2.com']
);
console.log(tracking.ownRank, tracking.alerts);

// Start REST API
const api = await startApiServer({ port: 3456 });
console.log(`API running at ${api.url}`);

// Generate HTML dashboard
const dashboard = await generateDashboard('https://example.com', {
  outputPath: 'report.html',
  theme: 'dark'
});
```

Full TypeScript declarations included (`index.d.ts`) — 300+ exports with IDE autocomplete.
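The exports above include `getScoreHistory`, and the `score-history` command renders histories as sparklines. As a self-contained illustration of that rendering idea (not the package's own renderer), a score series can be mapped onto Unicode block characters:

```js
// Map a score series onto Unicode block characters, sparkline-style.
// Illustrative only, not the package's renderer.
const TICKS = ['▁', '▂', '▃', '▄', '▅', '▆', '▇', '█'];

function sparkline(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const span = max - min || 1; // avoid division by zero on a flat series
  return values
    .map(v => TICKS[Math.min(TICKS.length - 1, Math.floor(((v - min) / span) * TICKS.length))])
    .join('');
}

console.log(sparkline([42, 48, 55, 53, 61, 70])); // → ▁▂▄▄▆█
```

The same series fed to `record-score` over time is what `page-trend` and `monitor` compare against.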
## GitHub Action

```yaml
- uses: redredchen01/[email protected]
  with:
    project-path: ./your-project
    min-score: 60
    fail-on-regression: true
    save-snapshot: true
```

## Configuration
Create `.georc.json` in your project root:

```sh
geo-ai-search-optimization init-config --site-name "My Site" --site-url "https://example.com"
```

```json
{
  "site": { "name": "My Site", "url": "https://example.com" },
  "audit": { "minScore": 40, "maxFileSize": 1000000 },
  "ci": { "minScore": 60, "failOnRegression": false },
  "crawlers": { "strategy": "open" },
  "plugins": []
}
```

## Plugin System
Extend with custom signals, checks, and commands:

```js
// my-plugin.js
export function register(api) {
  api.addSignal('custom-signal', {
    pattern: /my-custom-pattern/i,
    label: 'Custom Signal',
    weight: 5
  });
}
```

Load via config: `"plugins": ["./my-plugin.js"]`
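A registered signal like the one above is essentially a weighted regular expression. A minimal sketch of how such signals could be tallied against page text (illustrative: the second signal and the scoring loop are hypothetical, not the package's internals):

```js
// Tally weighted regex signals against page text.
// Illustrative sketch; the 'author-byline' signal below is hypothetical
// and the package's internal scoring is not shown here.
const signals = [
  { id: 'custom-signal', pattern: /my-custom-pattern/i, label: 'Custom Signal', weight: 5 },
  { id: 'author-byline', pattern: /written by/i, label: 'Author Byline', weight: 3 }
];

function scoreSignals(text) {
  return signals
    .filter(s => s.pattern.test(text)) // keep signals whose pattern matches
    .reduce((sum, s) => sum + s.weight, 0);
}

console.log(scoreSignals('Written by Jane. Contains my-custom-pattern too.')); // → 8
```

Higher `weight` values simply give a signal more influence on the total.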
## Alert Rules
Configure automated alerts in a JSON file:
```json
[
  { "name": "Low Score", "condition": "score-below", "threshold": 40, "action": "webhook", "actionConfig": { "url": "https://hooks.slack.com/..." } },
  { "name": "Score Drop", "condition": "score-drop", "threshold": 10, "action": "console" },
  { "name": "Weak Citability", "condition": "dimension-below", "threshold": 30, "dimension": "citability", "action": "file", "actionConfig": { "filePath": "alerts.log" } }
]
```

```sh
geo-ai-search-optimization alert-check https://example.com --rules-file rules.json
```

## Why GEO?
- 35%+ of search queries are now handled by AI assistants (2026)
- SaaS GEO tools cost $100-$3000/month — this is free and open-source
- No other npm package provides comprehensive GEO analysis with zero dependencies
- CI/CD native — fail your build if GEO score drops
- REST API included — integrate with any tool or pipeline
- Agent-friendly — 25+ bundled Codex skills for AI coding assistants
## Numbers

| Metric | Value |
|--------|-------|
| CLI Commands | 50+ |
| API Exports | 300+ |
| Analysis Dimensions | 15+ |
| Test Cases | 466 |
| External Dependencies | 0 |
| TypeScript Support | Full declarations |
| GitHub Action | Included |
| REST API | Included |
| Plugin System | Extensible |
## License
MIT
