🧠 Bluera Knowledge


🚀 Build a local knowledge base for your AI coding agent—dependency source code, crawled docs, and your own files, all instantly searchable.

Two ways to use it:

| | npm Package | Claude Code Plugin |
|--|-------------|-------------------|
| Install | npm install -g bluera-knowledge | /plugin install bluera-knowledge@bluera |
| Interface | CLI commands | Slash commands + MCP tools |
| Works with | Any AI tool, any editor | Claude Code specifically |
| Best for | CI/CD, automation, other editors | Native Claude Code integration |

Both provide the same core functionality: index repos, crawl docs, semantic search.

Bluera Knowledge gives AI coding agents instant local access to authoritative context:

  • Dependency source code — Clone and search the repos of dependencies you actually use
  • Documentation — Crawl, index, and search any docs site
  • Your files — Index and search local folders for project-specific knowledge

All searchable in milliseconds, no rate limits, fully offline.

📦 Installation

npm Package (CLI)

# Global install (CLI available everywhere)
npm install -g bluera-knowledge

# Or project install
npm install --save-dev bluera-knowledge

Works with any AI coding tool, editor, CI/CD pipeline, or automation.

Claude Code Plugin

# Add the Bluera marketplace (one-time setup)
/plugin marketplace add blueraai/bluera-marketplace

# Install the plugin (or use /plugin to browse the UI)
/plugin install bluera-knowledge@bluera

Adds slash commands, MCP tools, and Skills for optimal Claude Code integration.

[!NOTE] First launch may appear to hang while the plugin installs Python dependencies (crawl4ai). This is normal—subsequent launches are instant.


✨ Why Bluera Knowledge?

When your AI coding assistant needs to answer "how do I handle errors in Express middleware?", it can:

  1. Guess from training data — might be outdated or wrong
  2. Search the web — slow, rate-limited, often returns blog posts instead of source
  3. Read your local knowledge base — authoritative, complete, instant ✅

Bluera Knowledge enables option 3 by building a searchable knowledge base from three types of sources:

| Source Type | What It Does | Example |
|------------|--------------|---------|
| 📦 Dependency Source Code | Clone & search library repos you actually use | Express, React, Lodash |
| 🌐 Documentation Sites | Crawl & index any docs site | Next.js docs, FastAPI guides |
| 📁 Local Files | Index project-specific content | Your docs, standards, API specs |

The result: Your AI agent has local, instant access to authoritative information with zero rate limits:

| Capability | Without | With Bluera Knowledge |
|------------|---------|----------------------|
| Response time | 2-5 seconds (web) | ~100ms (local) |
| Accuracy | Uncertain | Authoritative (source code + docs) |
| Completeness | Partial docs | Full implementation + tests + your content |
| Rate limits | Yes | None |


🎯 When Claude Code Should Query BK

The simple rule: Query BK for any question about libraries, dependencies, or reference material.

BK is cheap (~100ms, no rate limits), authoritative (actual source code), and complete (includes tests and internal APIs). Claude Code should query it frequently for external code questions.

Always Query BK For:

| Question Type | Examples |
|--------------|----------|
| Library internals | "How does Express handle middleware errors?", "What does useEffect cleanup do?" |
| API signatures | "What parameters does axios.create() accept?", "What options can I pass to Hono?" |
| Error handling | "What errors can Zod throw?", "Why might this library return undefined?" |
| Version behavior | "What changed in React 18?", "Is this method deprecated?" |
| Configuration | "What config options exist for Vite?", "What are the defaults?" |
| Testing patterns | "How do the library authors test this?", "How should I mock this?" |
| Performance/internals | "Is this cached internally?", "What's the complexity?" |
| Security | "How does this library validate input?", "Is this safe against injection?" |
| Integration | "How do I integrate X with Y?", "What's the idiomatic way to use this?" |

DO NOT Query BK For:

| Question Type | Use Instead |
|--------------|-------------|
| Your project code | Grep/Read directly ("Where is OUR auth middleware?") |
| General concepts | Training data ("What is a closure?") |
| Breaking news | Web search ("Latest React release notes") |

Quick Pattern Matching:

"How does [library] work..."           → Query BK
"What does [library function] do..."   → Query BK
"What options does [library] accept..."→ Query BK
"What errors can [library] throw..."   → Query BK
"Where is [thing] in OUR code..."      → Grep/Read directly
"What is [general concept]..."         → Training data

💰 Token Efficiency

Beyond speed and accuracy, Bluera Knowledge can significantly reduce token consumption for code-related queries—typically saving 60-75% compared to web search approaches.

📊 How It Works

Without Bluera Knowledge:

  • Web searches return 5-10 results (~500-2,000 tokens each)
  • Total per search: 3,000-10,000 tokens
  • Often need multiple searches to find the right answer
  • Lower signal-to-noise ratio (blog posts mixed with actual docs)

With Bluera Knowledge:

  • Semantic search returns top 10 relevant code chunks (~200-400 tokens each)
  • Structured metadata (file paths, imports, purpose)
  • Total per search: 1,500-3,000 tokens
  • Higher relevance due to vector search (fewer follow-up queries needed)

🎯 Real-World Examples

Example 1: Library Implementation Question

Question: "How does Express handle middleware errors?"

| Approach | Token Cost | Result |
|----------|-----------|--------|
| Web Search | ~8,000 tokens (3 searches: general query → refined query → source code) | Blog posts + Stack Overflow + eventual guess |
| Bluera Knowledge | ~2,000 tokens (1 semantic search) | Actual Express source code, authoritative |
| Savings | 75% fewer tokens ✅ | Higher accuracy |

Example 2: Dependency Exploration

Question: "How does LanceDB's vector search work?"

| Approach | Token Cost | Result |
|----------|-----------|--------|
| Web Search | ~9,500 tokens (general docs → API docs → fetch specific page) | Documentation, might miss implementation details |
| Bluera Knowledge | ~1,500 tokens (search returns source + tests + examples) | Source code from the Python + Rust implementation |
| Savings | 84% fewer tokens ✅ | Complete picture |

Example 3: Version-Specific Behavior

Question: "What changed in React 18's useEffect cleanup?"

| Approach | Token Cost | Result |
|----------|-----------|--------|
| Training Data | 0 tokens (but might be outdated) | Uncertain if accurate for React 18 |
| Web Search | ~5,000 tokens (search changelog → blog posts → docs) | Mix of React 17 and 18 info |
| Bluera Knowledge | ~2,000 tokens (search indexed React 18 source) | Exact React 18 implementation |
| Savings | 60% fewer tokens ✅ | Version-accurate |

⚖️ When BK Uses More Tokens

Bluera Knowledge isn't always the most token-efficient choice:

| Scenario | Best Approach | Why |
|----------|---------------|-----|
| Simple concept questions ("What is a JavaScript closure?") | Training data | Claude already knows this (0 tokens) |
| Current events ("Latest Next.js 15 release notes") | Web search | BK only has what you've indexed |
| General advice ("How to structure a React app?") | Training data | Opinion-based, not code-specific |

📈 Summary: Token Savings by Query Type

| Query Type | Typical Token Savings | When to Use BK |
|------------|----------------------|----------------|
| Library internals | 60-75% | ✅ Always |
| Version-specific behavior | 50-70% | ✅ Always |
| "How does X work internally?" | 70-85% | ✅ Always |
| API usage examples | 40-60% | ✅ Recommended |
| General concepts | -100% (uses more) | ❌ Skip BK |
| Current events | -100% (uses more) | ❌ Skip BK |

💡 Best Practice

Default to BK for library questions. It's cheap, fast, and authoritative:

| Question Type | Action | Why |
|--------------|--------|-----|
| Library internals, APIs, errors, versions, config | Query BK first | Source code is definitive, 60-85% token savings |
| General programming concepts | Skip BK | Training data is sufficient |
| Breaking news, release notes | Web search | BK only has indexed content |

The plugin's Skills teach Claude Code these patterns automatically. When in doubt about a dependency, query BK—it's faster and more accurate than guessing or web searching.


🚀 Quick Start

Using Claude Code Plugin

  • [ ] 📦 Add a library: /bluera-knowledge:add-repo https://github.com/lodash/lodash
  • [ ] 📁 Index your docs: /bluera-knowledge:add-folder ./docs --name=project-docs
  • [ ] 🔍 Test search: /bluera-knowledge:search "deep clone object"
  • [ ] 📋 View stores: /bluera-knowledge:stores

[!TIP] Not sure which libraries to index? Use /bluera-knowledge:suggest to analyze your project's dependencies.

Using CLI (npm package)

# Add a library
bluera-knowledge store create lodash --type repo --source https://github.com/lodash/lodash

# Index your docs
bluera-knowledge store create project-docs --type file --source ./docs

# Test search
bluera-knowledge search "deep clone object"

# View stores
bluera-knowledge store list

✨ Features

🎯 Core Features

  • 🔬 Smart Dependency Analysis - Automatically scans your project to identify which libraries are most heavily used by counting import statements across all source files
  • 📊 Usage-Based Suggestions - Ranks dependencies by actual usage frequency, showing you the top 5 most-imported packages with import counts and file counts
  • 🔍 Automatic Repository Discovery - Queries package registries (NPM, PyPI, crates.io, Go modules) to automatically find GitHub repository URLs
  • 📦 Git Repository Indexing - Clones and indexes dependency source code for both semantic search and direct file access
  • 📁 Local Folder Indexing - Indexes any local content - documentation, standards, reference materials, or custom content
  • 🌐 Web Crawling - Crawl and index web pages using crawl4ai - convert documentation sites to searchable markdown

🔍 Search Modes

  • 🧠 Vector Search - AI-powered semantic search with relevance ranking
  • 📂 File Access - Direct Grep/Glob operations on cloned source files

🗺️ Code Graph Analysis

  • 📊 Code Graph Analysis - During indexing, builds a graph of code relationships (calls, imports, extends) to provide usage context in search results - shows how many callers/callees each function has
  • 🌐 Multi-Language Support - Full AST parsing for JavaScript, TypeScript, Python, Rust, and Go; indexes code in any language
  • 🔌 MCP Integration - Exposes all functionality as Model Context Protocol tools for AI coding agents

🌍 Language-Specific Features

While bluera-knowledge indexes and searches code in any language, certain advanced features are language-specific:

| Language | Code Graph | Call Analysis | Import Tracking | Method Tracking |
|----------|------------|---------------|-----------------|-----------------|
| TypeScript/JavaScript | ✅ Full Support | ✅ Functions & Methods | ✅ Full | ✅ Class Methods |
| Python | ✅ Full Support | ✅ Functions & Methods | ✅ Full | ✅ Class Methods |
| Rust | ✅ Full Support | ✅ Functions & Methods | ✅ Full | ✅ Struct/Trait Methods |
| Go | ✅ Full Support | ✅ Functions & Methods | ✅ Full | ✅ Struct/Interface Methods |
| ZIL | ✅ Full Support | ✅ Routines | ✅ INSERT-FILE | ✅ Objects/Rooms |
| Other Languages | ⚠️ Basic Support | ❌ | ❌ | ❌ |

[!NOTE] Code graph features enhance search results by showing usage context (e.g., "this function is called by 15 other functions"), but all languages benefit from vector search and full-text search capabilities.
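
To make the usage-context idea concrete, here is a minimal TypeScript sketch of the kind of relationship graph described above. The node/edge shapes and helper names are illustrative assumptions, not the plugin's actual internals:

// Hypothetical shapes for a code relationship graph (illustration only).
type NodeKind = "function" | "method" | "class";
type EdgeKind = "calls" | "imports" | "extends";

interface GraphNode {
  id: string;          // e.g. "src/router.ts#handleRequest"
  kind: NodeKind;
  file: string;
  line: number;
}

interface GraphEdge {
  from: string;        // node id of the caller / importer / subclass
  to: string;          // node id of the callee / imported module / base class
  kind: EdgeKind;
}

// Count callers and callees for one function - the kind of usage context
// that gets attached to a search result ("called by 15 other functions").
function usageContext(edges: GraphEdge[], nodeId: string) {
  const callers = edges.filter((e) => e.kind === "calls" && e.to === nodeId).length;
  const callees = edges.filter((e) => e.kind === "calls" && e.from === nodeId).length;
  return { callers, callees };
}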

🔌 Custom Language Support

Bluera Knowledge provides an extensible adapter system for adding full graph support to any language. The built-in ZIL adapter (for Infocom/Zork-era source code) demonstrates this capability.

What adapters provide (a minimal interface sketch closes this section):

  • Smart chunking - Split files by language constructs (functions, classes, objects)
  • Symbol extraction - Parse definitions with signatures and line numbers
  • Import tracking - Resolve include/import relationships
  • Call graph analysis - Track function calls with special form filtering

Built-in adapters:

| Language | Extensions | Symbols | Imports |
|----------|------------|---------|---------|
| ZIL | .zil, .mud | ROUTINE, OBJECT, ROOM, GLOBAL, CONSTANT | INSERT-FILE |

Example - ZIL indexing:

# Index a Zork source repository
bluera-knowledge store create zork1 --type repo --source https://github.com/historicalsource/zork1

# Search for routines
bluera-knowledge search "V-LOOK routine" --stores zork1
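
For orientation, here is a minimal TypeScript sketch of what an adapter's surface might look like, matching the responsibilities listed above. The interface and field names are assumptions for illustration, not the plugin's actual adapter API:

// Illustrative adapter contract (names are assumptions, not the real API).
interface SymbolInfo {
  name: string;        // e.g. "V-LOOK"
  kind: string;        // e.g. "ROUTINE", "OBJECT"
  signature?: string;
  line: number;
}

interface Chunk {
  content: string;     // one language construct (function, class, routine)
  startLine: number;
  endLine: number;
}

interface LanguageAdapter {
  extensions: string[];                                    // e.g. [".zil", ".mud"]
  chunk(source: string): Chunk[];                          // smart chunking by construct
  extractSymbols(source: string): SymbolInfo[];            // definitions with line numbers
  extractImports(source: string): string[];                // e.g. INSERT-FILE targets
  extractCalls(source: string): { caller: string; callee: string }[]; // call graph edges
}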

🎯 How It Works

The plugin provides AI agents with four complementary search capabilities:

🔍 1. Semantic Vector Search

AI-powered search across all indexed content

  • Searches by meaning and intent, not just keywords
  • Uses embeddings to find conceptually similar content
  • Ideal for discovering patterns and related concepts

📝 2. Full-Text Search (FTS)

Fast keyword and pattern matching

  • Traditional text search with exact matching
  • Supports regex patterns and boolean operators
  • Best for finding specific terms or identifiers

⚡ 3. Hybrid Mode (Recommended)

Combines vector and FTS search

  • Merges results from both search modes with weighted ranking (sketched below)
  • Balances semantic understanding with exact matching
  • Provides best overall results for most queries
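
A minimal sketch of weighted merging, purely to illustrate the idea. The weights, score fields, and normalization here are assumptions, not the plugin's actual ranking:

interface Hit {
  id: string;
  vectorScore?: number; // similarity from vector search, 0-1
  ftsScore?: number;    // normalized full-text score, 0-1
}

// Merge two result lists into one ranked list using a weighted sum.
function mergeHybrid(vector: Hit[], fts: Hit[], vectorWeight = 0.7): Hit[] {
  const byId = new Map<string, Hit>();
  for (const h of [...vector, ...fts]) {
    const existing = byId.get(h.id) ?? { id: h.id };
    byId.set(h.id, { ...existing, ...h });
  }
  const score = (h: Hit) =>
    vectorWeight * (h.vectorScore ?? 0) + (1 - vectorWeight) * (h.ftsScore ?? 0);
  return [...byId.values()].sort((a, b) => score(b) - score(a));
}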

📂 4. Direct File Access

Traditional file operations on cloned sources

  • Provides file paths to cloned repositories
  • Enables Grep, Glob, and Read operations on source files
  • Supports precise pattern matching and code navigation
  • Full access to complete file trees

When you use /bluera-knowledge: commands, here's what happens:

  1. You issue a command - Type /bluera-knowledge:stores or similar in Claude Code
  2. Claude Code receives instructions - The command provides step-by-step instructions for Claude Code
  3. Claude Code executes MCP tools - Behind the scenes, Claude Code uses mcp__bluera-knowledge__* tools
  4. Results are formatted - Claude Code formats and displays the output directly to you

Example Flow:

You: /bluera-knowledge:stores
  ↓
Command file instructs Claude Code to use execute("stores")
  ↓
MCP tool queries LanceDB for store metadata
  ↓
Claude Code formats results as a table
  ↓
You see: Beautiful table of all your knowledge stores

This architecture means commands provide a clean user interface while MCP tools handle the backend operations.


🎨 User Interface

👤 User Commands

You manage knowledge stores through /bluera-knowledge: commands:

  • 🔬 Analyze your project to find important dependencies
  • 📦 Add Git repositories (dependency source code)
  • 📁 Add local folders (documentation, standards, etc.)
  • 🌐 Crawl web pages and documentation
  • 🔍 Search across all indexed content
  • 🔄 Manage and re-index stores

🤖 MCP Tools

AI agents access knowledge through Model Context Protocol (3 tools for minimal context overhead):

| Tool | Purpose |
|------|---------|
| search | 🔍 Semantic vector search across all stores |
| get_full_context | 📖 Retrieve complete code context for a search result |
| execute | ⚡ Meta-tool for store/job management commands |

The execute tool consolidates store and job management into a single tool with subcommands:

  • Store commands: stores, store:info, store:create, store:index, store:delete
  • Job commands: jobs, job:status, job:cancel
  • Help: help, commands

⚙️ Background Jobs

[!TIP] Long-running operations (git clone, indexing) run in the background, allowing you to continue working while they complete.

🔄 How It Works

When you add a repository or index content:

  1. ⚡ Instant Response - Operation starts immediately and returns a job ID
  2. 🔄 Background Processing - Indexing runs in a separate process
  3. 📊 Progress Updates - Check status anytime with /bluera-knowledge:check-status
  4. 🔔 Auto-Notifications - Active jobs appear automatically in context

📝 Example Workflow

# Add a large repository (returns immediately with job ID)
/bluera-knowledge:add-repo https://github.com/facebook/react

# Output:
# ✓ Created store: react (a1b2c3d4...)
# 🔄 Indexing started in background
#    Job ID: job_abc123def456
#
# Check status with: /bluera-knowledge:check-status job_abc123def456

# Check progress anytime
/bluera-knowledge:check-status job_abc123def456

# Output:
# Job Status: job_abc123def456
# Status:   running
# Progress: ███████████░░░░░░░░░ 45%
# Message:  Indexed 562/1,247 files

# View all active jobs
/bluera-knowledge:check-status

# Cancel if needed
/bluera-knowledge:cancel job_abc123def456

🚀 Performance

Background jobs include significant performance optimizations:

  • ⚡ Parallel Embedding - Processes 32 chunks simultaneously (~30x faster than sequential; see the sketch after this list)
  • 🔓 Non-Blocking - Continue working while indexing completes
  • 📊 Progress Tracking - Real-time updates on files processed and progress percentage
  • 🧹 Auto-Cleanup - Completed jobs are cleaned up after 24 hours
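
A minimal TypeScript sketch of the batching idea, assuming a hypothetical embed() function; the real pipeline and batch handling may differ:

// Embed chunks in parallel batches of 32 (illustrative only).
const BATCH_SIZE = 32;

async function embedAll(
  chunks: string[],
  embed: (text: string) => Promise<number[]>, // hypothetical embedding call
): Promise<number[][]> {
  const vectors: number[][] = [];
  for (let i = 0; i < chunks.length; i += BATCH_SIZE) {
    const batch = chunks.slice(i, i + BATCH_SIZE);
    // All embeddings in this batch run concurrently.
    vectors.push(...(await Promise.all(batch.map(embed))));
  }
  return vectors;
}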

📖 Quick Reference

| Command | Purpose | Arguments |
|---------|---------|-----------|
| 🔬 /bluera-knowledge:suggest | Analyze project dependencies | None |
| 📦 /bluera-knowledge:add-repo | Clone and index Git repository | <url> [--name=<name>] [--branch=<branch>] |
| 📁 /bluera-knowledge:add-folder | Index local folder | <path> --name=<name> |
| 🔍 /bluera-knowledge:search | Search knowledge stores | "<query>" [--stores=<names>] [--limit=<N>] |
| 📋 /bluera-knowledge:stores | List all stores | None |
| 🔄 /bluera-knowledge:index | Re-index a store | <store-name-or-id> |
| 🗑️ /bluera-knowledge:remove-store | Delete a store and all data | <store-name-or-id> |
| 🌐 /bluera-knowledge:crawl | Crawl web pages | <url> <store-name> [--crawl "<instruction>"] |
| 🔁 /bluera-knowledge:sync | Sync stores from definitions config | [--dry-run] [--prune] |


💻 Commands

🔬 /bluera-knowledge:suggest

Analyze your project to suggest libraries worth indexing as knowledge stores

/bluera-knowledge:suggest

Scans source files, counts import statements, and suggests the top 5 most-used dependencies with their repository URLs.

Supported languages:

| Language | Manifest File | Registry |
|----------|---------------|----------|
| JavaScript/TypeScript | package.json | NPM |
| Python | requirements.txt, pyproject.toml | PyPI |
| Rust | Cargo.toml | crates.io |
| Go | go.mod | Go modules |

## Dependency Analysis

Scanned 342 source files and found 24 dependencies.

### Top Dependencies by Usage

1. **react** (156 imports across 87 files)
   Repository: https://github.com/facebook/react

   Add with:
   /bluera-knowledge:add-repo https://github.com/facebook/react --name=react

2. **vitest** (40 imports across 40 files)
   Repository: https://github.com/vitest-dev/vitest

   Add with:
   /bluera-knowledge:add-repo https://github.com/vitest-dev/vitest --name=vitest

3. **lodash** (28 imports across 15 files)
   Repository: https://github.com/lodash/lodash

   Add with:
   /bluera-knowledge:add-repo https://github.com/lodash/lodash --name=lodash

---

Already indexed: typescript, express
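
As a rough illustration of the analysis above, the TypeScript sketch below counts import statements per package across a project's JS/TS source files. File discovery and the import regex are simplified assumptions; the real analyzer also handles Python, Rust, and Go manifests:

import { readdirSync, readFileSync, statSync } from "node:fs";
import { join, extname } from "node:path";

// Count `import ... from "pkg"` statements per package across .ts/.js files.
function countImports(root: string): Map<string, { imports: number; files: Set<string> }> {
  const counts = new Map<string, { imports: number; files: Set<string> }>();
  const walk = (dir: string) => {
    for (const entry of readdirSync(dir)) {
      if (entry === "node_modules" || entry.startsWith(".")) continue;
      const full = join(dir, entry);
      if (statSync(full).isDirectory()) { walk(full); continue; }
      if (![".ts", ".tsx", ".js", ".jsx"].includes(extname(full))) continue;
      const source = readFileSync(full, "utf8");
      for (const match of source.matchAll(/from\s+['"]([^'"./][^'"]*)['"]/g)) {
        // Keep the package name only ("@scope/pkg" or "pkg"), dropping subpaths.
        const pkg = match[1].startsWith("@")
          ? match[1].split("/").slice(0, 2).join("/")
          : match[1].split("/")[0];
        const rec = counts.get(pkg) ?? { imports: 0, files: new Set<string>() };
        rec.imports += 1;
        rec.files.add(full);
        counts.set(pkg, rec);
      }
    }
  };
  walk(root);
  return counts;
}

Ranking that map by import count yields the kind of top-5 list shown in the example output above.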

📦 /bluera-knowledge:add-repo

Clone and index a Git repository

/bluera-knowledge:add-repo <url> [--name=<name>] [--branch=<branch>]

Examples:

/bluera-knowledge:add-repo https://github.com/lodash/lodash
/bluera-knowledge:add-repo https://github.com/facebook/react --branch=main --name=react
✓ Cloning https://github.com/facebook/react...
✓ Created store: react (a1b2c3d4...)
  Location: ~/.local/share/bluera-knowledge/stores/a1b2c3d4.../

✓ Indexing...
✓ Indexed 1,247 files

Store is ready for searching!

📁 /bluera-knowledge:add-folder

Index a local folder

/bluera-knowledge:add-folder <path> --name=<name>

📚 Use cases:

  • 📖 Project documentation
  • 📏 Coding standards
  • 🎨 Design documents
  • 🔌 API specifications
  • 📚 Reference materials
  • 📄 Any other content

Examples:

/bluera-knowledge:add-folder ./docs --name=project-docs
/bluera-knowledge:add-folder ./architecture --name=design-docs
✓ Adding folder: ~/my-project/docs...
✓ Created store: project-docs (e5f6g7h8...)
  Location: ~/.local/share/bluera-knowledge/stores/e5f6g7h8.../

✓ Indexing...
✓ Indexed 342 files

Store is ready for searching!

🔍 /bluera-knowledge:search

Search across indexed knowledge stores

/bluera-knowledge:search "<query>" [--stores=<names>] [--limit=<number>] [--min-relevance=<0-1>]

Options:

  • --stores=<names> - Comma-separated store names to search (default: all stores)
  • --limit=<number> - Maximum results to return (default: 10)
  • --min-relevance=<0-1> - Minimum raw cosine similarity; returns empty if no results meet threshold
  • --threshold=<0-1> - Minimum normalized score to include results
  • --mode=<mode> - Search mode: hybrid (default), vector, or fts
  • --detail=<level> - Context detail: minimal (default), contextual, or full

Examples:

# Search all stores
/bluera-knowledge:search "how to invalidate queries"

# Search specific store
/bluera-knowledge:search "useState implementation" --stores=react

# Search multiple stores (comma-separated)
/bluera-knowledge:search "deep clone" --stores=react,lodash

# Limit results
/bluera-knowledge:search "testing patterns" --limit=5

# Filter irrelevant results (returns empty if nothing is truly relevant)
/bluera-knowledge:search "kubernetes deployment" --min-relevance=0.4
## Search Results: "button component" (hybrid search)

**1. [Score: 0.95] [Vector+FTS]**
Store: react
File: 📄 src/components/Button.tsx
Purpose: → Reusable button component with variants
Top Terms: 🔑 (in this chunk): button, variant, size, color, onClick
Imports: 📦 (in this chunk): React, clsx

**2. [Score: 0.87] [Vector]**
Store: react
File: 📄 src/hooks/useButton.ts
Purpose: → Custom hook for button state management
Top Terms: 🔑 (in this chunk): hook, state, pressed, disabled
Imports: 📦 (in this chunk): useState, useCallback

**3. [Score: 0.81] [Vector+FTS]**
Store: react
File: 📄 src/components/IconButton.tsx
Purpose: → Button component with icon support
Top Terms: 🔑 (in this chunk): icon, button, aria-label, accessible

---
**Found 3 results in 45ms**

💡 **Next Steps:**
- Read file: `Read src/components/Button.tsx`
- Get full code: `mcp__bluera-knowledge__get_full_context("result-id")`
- Refine search: Use keywords above
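
To clarify the two relevance options above: --min-relevance filters on the raw cosine similarity of each hit, while --threshold filters on the normalized score used for ranking. A minimal TypeScript sketch of that filtering, with assumed field names:

interface ScoredHit {
  id: string;
  score: number;         // normalized ranking score, 0-1
  rawSimilarity: number; // raw cosine similarity, 0-1 (assumed field name)
}

// Apply both filters; an empty array means nothing was truly relevant.
function filterHits(hits: ScoredHit[], minRelevance = 0, threshold = 0): ScoredHit[] {
  return hits.filter((h) => h.rawSimilarity >= minRelevance && h.score >= threshold);
}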

📋 /bluera-knowledge:stores

List all indexed knowledge stores

/bluera-knowledge:stores

Shows store name, type, ID, and source location in a clean table format.

| Name | Type | ID | Source |
|------|------|----|--------------------|
| react | repo | 459747c7 | https://github.com/facebook/react |
| crawl4ai | repo | b5a72a94 | https://github.com/unclecode/crawl4ai.git |
| project-docs | file | 70f6309b | ~/repos/my-project/docs |
| claude-docs | web | 9cc62018 | https://code.claude.com/docs |

**Total**: 4 stores

🔄 /bluera-knowledge:index

Re-index an existing store to update the search index

/bluera-knowledge:index <store-name-or-id>

🔄 When to re-index:

  • The source repository has been updated (for repo stores)
  • Files have been added or modified (for file stores)
  • Search results seem out of date

Example:

/bluera-knowledge:index react
✓ Indexing store: react...
✓ Indexed 1,247 documents in 3,421ms

Store search index is up to date!

🗑️ /bluera-knowledge:remove-store

Delete a knowledge store and all associated data

/bluera-knowledge:remove-store <store-name-or-id>

🗑️ What gets deleted:

  • Store registry entry
  • LanceDB search index (vector embeddings)
  • Cloned repository files (for repo stores created from URLs)

Example:

/bluera-knowledge:remove-store react
Store "react" deleted successfully.

Removed:
- Store registry entry
- LanceDB search index
- Cloned repository files

🌐 /bluera-knowledge:crawl

Crawl web pages with natural language control

/bluera-knowledge:crawl <url> <store-name> [options]

Options:

  • --crawl "<instruction>" - Natural language instruction for which pages to crawl
  • --extract "<instruction>" - Natural language instruction for what content to extract
  • --simple - Use simple BFS mode instead of intelligent crawling
  • --max-pages <n> - Maximum pages to crawl (default: 50)
  • --fast - Use fast axios-only mode (may fail on JavaScript-heavy sites)

⚙️ Requirements:

  • 🐍 Python 3 with crawl4ai package installed
  • 📦 Web store is auto-created if it doesn't exist

Examples:

# Basic crawl
/bluera-knowledge:crawl https://docs.example.com/guide my-docs

# Intelligent crawl with custom strategy
/bluera-knowledge:crawl https://react.dev react-docs --crawl "all API reference pages"

# Extract specific content from pages
/bluera-knowledge:crawl https://example.com/pricing pricing --extract "pricing tiers and features"

# Combine crawl strategy + extraction
/bluera-knowledge:crawl https://docs.python.org python-docs \
  --crawl "standard library modules" \
  --extract "function signatures and examples"

# JavaScript-rendered sites work by default (uses headless browser)
/bluera-knowledge:crawl https://nextjs.org/docs nextjs-docs --max-pages 30

# Fast mode for static HTML sites (axios-only, faster but may miss JS content)
/bluera-knowledge:crawl https://example.com/static static-docs --fast --max-pages 100

# Simple BFS mode (no AI guidance)
/bluera-knowledge:crawl https://example.com/docs docs --simple --max-pages 100

The crawler converts pages to markdown and indexes them for semantic search.


🔁 /bluera-knowledge:sync

Sync stores from definitions config (bootstrap on fresh clone)

/bluera-knowledge:sync [options]

Options:

  • --dry-run - Show what would happen without making changes
  • --prune - Remove stores not in definitions
  • --reindex - Re-index existing stores after sync

Use cases:

  • Fresh clone: Recreate all stores defined by the team
  • Check status: See which stores exist vs. defined
  • Clean up: Remove orphan stores not in config

Examples:

# Preview what would be synced
/bluera-knowledge:sync --dry-run

# Sync all stores from definitions
/bluera-knowledge:sync

# Sync and remove orphan stores
/bluera-knowledge:sync --prune

How it works (a minimal sketch follows these steps):

  1. Reads store definitions from .bluera/bluera-knowledge/stores.config.json
  2. Creates any stores that don't exist locally
  3. Reports orphan stores (local stores not in definitions)
  4. Optionally prunes orphans with --prune
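
A minimal TypeScript sketch of that flow, assuming hypothetical listLocalStores() and createStore() helpers; the real implementation lives behind the CLI and the execute MCP tool:

import { readFileSync } from "node:fs";

interface StoreDef { type: "file" | "repo" | "web"; name: string; path?: string; url?: string }

// Illustrative sync: create missing stores, report orphans (hypothetical helpers).
async function syncStores(
  configPath: string,
  listLocalStores: () => Promise<{ name: string }[]>,
  createStore: (def: StoreDef) => Promise<void>,
  options: { dryRun?: boolean; prune?: boolean } = {},
) {
  const defs: StoreDef[] = JSON.parse(readFileSync(configPath, "utf8")).stores;
  const local = new Set((await listLocalStores()).map((s) => s.name));
  const missing = defs.filter((d) => !local.has(d.name));
  const orphans = [...local].filter((name) => !defs.some((d) => d.name === name));

  for (const def of missing) {
    if (!options.dryRun) await createStore(def);
    console.log(`${options.dryRun ? "[dry-run] would create" : "created"}: ${def.name}`);
  }
  console.log(`orphans (not in definitions): ${orphans.join(", ") || "none"}`);
  // --prune would delete orphans here; omitted in this sketch.
}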

🕷️ Crawler Architecture

The crawler defaults to headless mode (Playwright) for maximum compatibility with modern JavaScript-rendered sites. Use --fast for static HTML sites when speed is critical.

🎭 Default Mode (Headless - JavaScript-Rendered Sites)

By default, the crawler uses Playwright via crawl4ai to render JavaScript content:

sequenceDiagram
    participant User
    participant CLI
    participant IntelligentCrawler
    participant PythonBridge
    participant crawl4ai
    participant Playwright
    participant Claude

    User->>CLI: crawl URL --crawl "instruction"
    CLI->>IntelligentCrawler: crawl(url, {useHeadless: true})
    IntelligentCrawler->>PythonBridge: fetchHeadless(url)
    PythonBridge->>crawl4ai: AsyncWebCrawler.arun(url)
    crawl4ai->>Playwright: Launch browser & render JS
    Playwright-->>crawl4ai: Rendered HTML
    crawl4ai-->>PythonBridge: {html, markdown, links}
    PythonBridge-->>IntelligentCrawler: Rendered HTML
    IntelligentCrawler->>Claude: determineCrawlUrls(html, instruction)
    Note over Claude: Natural language instruction<br/>STILL FULLY ACTIVE
    Claude-->>IntelligentCrawler: [urls to crawl]
    loop For each URL
        IntelligentCrawler->>PythonBridge: fetchHeadless(url)
        PythonBridge->>crawl4ai: Render JS
        crawl4ai-->>PythonBridge: HTML
        PythonBridge-->>IntelligentCrawler: HTML
        IntelligentCrawler->>IntelligentCrawler: Convert to markdown & index
    end

⚡ Fast Mode (Static Sites - --fast)

For static HTML sites, use --fast for faster crawling with axios:

sequenceDiagram
    participant User
    participant CLI
    participant IntelligentCrawler
    participant Axios
    participant Claude

    User->>CLI: crawl URL --crawl "instruction" --fast
    CLI->>IntelligentCrawler: crawl(url, {useHeadless: false})
    IntelligentCrawler->>Axios: fetchHtml(url)
    Axios-->>IntelligentCrawler: Static HTML
    IntelligentCrawler->>Claude: determineCrawlUrls(html, instruction)
    Claude-->>IntelligentCrawler: [urls to crawl]
    loop For each URL
        IntelligentCrawler->>Axios: fetchHtml(url)
        Axios-->>IntelligentCrawler: HTML
        IntelligentCrawler->>IntelligentCrawler: Convert to markdown & index
    end

🔑 Key Points

  • 🎭 Default to headless - Maximum compatibility with modern JavaScript-rendered sites (React, Vue, Next.js)
  • ⚡ Fast mode available - Use --fast for static HTML sites when speed is critical
  • 🧠 Intelligent crawling preserved - Claude Code CLI analyzes pages and selects URLs in both modes
  • 🔄 Automatic fallback - If the headless fetch fails, the crawler automatically falls back to axios (sketched below)
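
A minimal TypeScript sketch of that fallback behavior, assuming a hypothetical fetchHeadless() bridge into crawl4ai; the real crawler's interfaces may differ:

import axios from "axios";

// Try headless rendering first; fall back to a plain HTTP fetch on failure.
async function fetchPage(
  url: string,
  fetchHeadless: (url: string) => Promise<string>, // hypothetical crawl4ai bridge
  useHeadless = true,
): Promise<string> {
  if (useHeadless) {
    try {
      return await fetchHeadless(url); // rendered HTML (JavaScript executed)
    } catch (err) {
      console.warn(`headless fetch failed for ${url}, falling back to axios`, err);
    }
  }
  const response = await axios.get<string>(url, { responseType: "text" });
  return response.data; // static HTML only
}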

🤖 Intelligent Mode vs Simple Mode

The crawler operates in two modes depending on Claude Code CLI availability:

| Mode | Requires Claude CLI | Behavior |
|------|---------------------|----------|
| Intelligent | ✅ Yes | Claude analyzes pages and selects URLs based on natural language instructions |
| Simple (BFS) | ❌ No | Breadth-first crawl up to max depth (2 levels) |

Automatic detection:

  • When Claude Code CLI is available: Full intelligent mode with --crawl and --extract instructions
  • When Claude Code CLI is unavailable: Automatically uses simple BFS mode
  • Clear messaging: "Claude CLI not found, using simple crawl mode"

[!NOTE] Install Claude Code to unlock --crawl (AI-guided URL selection) and --extract (AI content extraction). Without it, web crawling still works but uses simple BFS mode.


🔧 Troubleshooting

**Plugin commands not appearing?** Ensure the plugin is installed and enabled:

/plugin list
/plugin enable bluera-knowledge

If the plugin isn't listed, install it:

/plugin marketplace add blueraai/bluera-marketplace
/plugin install bluera-knowledge@bluera

**Web crawling not working?** Check Python dependencies:

python3 --version  # Should be 3.8+
pip install crawl4ai

The plugin attempts to auto-install crawl4ai on first use, but manual installation may be needed in some environments.

**Search returns no results?**

  1. Verify store exists: /bluera-knowledge:stores
  2. Check store is indexed: /bluera-knowledge:index <store-name>
  3. Try broader search terms
  4. Verify you're searching the correct store with --stores=<name>

**Store name or ID not recognized?** List all stores to see available names and IDs:

/bluera-knowledge:stores

Use the exact store name or ID shown in the table.

**Indexing slow or failing?** Large repositories (10,000+ files) take longer to index. If indexing fails:

  1. Check available disk space
  2. Ensure the source repository/folder is accessible
  3. For repo stores, verify git is installed: git --version
  4. Check for network connectivity (for repo stores)

**Seeing "Claude CLI not found, using simple crawl mode"?** This means intelligent crawling is unavailable; the crawler will automatically use simple BFS mode instead.

To enable intelligent crawling with --crawl and --extract:

  1. Install Claude Code: https://claude.com/code
  2. Ensure claude command is in PATH: which claude

Simple mode still crawls effectively—it just doesn't use AI to select which pages to crawl or extract specific content.


🎯 Use Cases

📦 Dependency Source Code

Provide AI agents with canonical dependency implementation details:

/bluera-knowledge:suggest
/bluera-knowledge:add-repo https://github.com/expressjs/express

# AI agents can now:
# - Semantic search: "middleware error handling"
# - Direct access: Grep/Glob through the cloned express repo

📚 Project Documentation

Make project-specific documentation available:

/bluera-knowledge:add-folder ./docs --name=project-docs
/bluera-knowledge:add-folder ./architecture --name=architecture

# AI agents can search across all documentation or access specific files

📏 Coding Standards

Provide definitive coding standards and best practices:

/bluera-knowledge:add-folder ./company-standards --name=standards
/bluera-knowledge:add-folder ./api-specs --name=api-docs

# AI agents reference actual company standards, not generic advice

🔀 Mixed Sources

Combine canonical library code with project-specific patterns:

/bluera-knowledge:add-repo https://github.com/facebook/react --name=react
/bluera-knowledge:add-folder ./docs/react-patterns --name=react-patterns

# Search across both dependency source and team patterns

💭 What Claude Code Says About Bluera Knowledge

As an AI coding assistant, here's what I've discovered using this plugin


⚡ The Immediate Impact

The difference is immediate. When a user asks "how does React's useEffect cleanup work?", I can search the actual React source code indexed locally instead of relying on my training data or making web requests. The results include the real implementation, related functions, and usage patterns—all in ~100ms.

Code graph analysis changes the game. The plugin doesn't just index files—it builds a relationship graph showing which functions call what, import dependencies, and class hierarchies. When I search for a function, I see how many places call it and what it calls. This context makes my suggestions dramatically more accurate.


🔀 Multi-Modal Search Power

I can combine three search approaches in a single workflow:

| Mode | Use Case | Example |
|------|----------|---------|
| 🧠 Semantic | Conceptual queries | "authentication flow with JWT validation" |
| 📂 Direct Access | Pattern matching | Grep for specific identifiers in cloned repos |
| 📝 Full-Text | Exact matches | Find precise function names or imports |

This flexibility means I can start broad (semantic) and narrow down (exact file access) seamlessly.


🕷️ Intelligent Crawling

The --crawl instruction isn't marketing—it actually uses Claude Code CLI to analyze each page and intelligently select which links to follow. I can tell it "crawl all API reference pages but skip blog posts" and it understands the intent.

For JavaScript-rendered sites (Next.js, React docs), the default headless mode renders pages with Playwright while I still control the crawl strategy with natural language. Use --fast when you need speed on static sites.


✨ What Makes It Valuable

| Benefit | Impact |
|---------|--------|
| ✅ No guessing | I read actual source code, not blog interpretations |
| 🔌 Offline first | Works without internet, zero rate limits |
| 🎯 Project-specific | Index your team's standards, not generic advice |
| ⚡ Speed | Sub-100ms searches vs 2-5 second web lookups |
| 📚 Completeness | Tests, implementation details, edge cases—all indexed |


🌟 When It Shines Most

  1. Deep library questions - "how does this internal method handle edge cases?"
  2. Version-specific answers - your indexed version is what you're actually using
  3. Private codebases - your docs, your standards, your patterns
  4. Complex workflows - combining semantic search + direct file access + code graph

The plugin essentially gives me a photographic memory of your dependencies and documentation.

Instead of "I think based on training data", I can say "I searched the indexed React v18.2.0 source and found this in ReactFiberWorkLoop.js:1247".

That's the difference between helpful and authoritative.


🔧 Dependencies

The plugin automatically checks for and attempts to install Python dependencies on first use:

Required:

  • 🐍 Python 3.8+ - Required for web crawling functionality
  • 🕷️ crawl4ai - Required for web crawling (auto-installed via SessionStart hook)
  • 🎭 Playwright browser binaries - Required for default headless mode (auto-installed via SessionStart hook)

What the SessionStart hook installs:

  • ✅ crawl4ai Python package (includes playwright as dependency)
  • ✅ Playwright Chromium browser binaries (auto-installed after crawl4ai)

If auto-installation fails, install manually:

pip install crawl4ai
playwright install chromium

[!NOTE] The plugin will work without crawl4ai/playwright, but web crawling features (/bluera-knowledge:crawl) will be unavailable. The default mode uses headless browser for maximum compatibility with JavaScript-rendered sites. Use --fast for static sites when speed is critical.

Update Plugin:

/plugin update bluera-knowledge

🔌 MCP Integration

The plugin includes a Model Context Protocol server that exposes search tools. This is configured inline in .claude-plugin/plugin.json:

[!IMPORTANT] Commands vs MCP Tools: You interact with the plugin using /bluera-knowledge: slash commands. Behind the scenes, these commands instruct Claude Code to use MCP tools (mcp__bluera-knowledge__*) which handle the actual operations. Commands provide the user interface, while MCP tools are the backend that AI agents use to access your knowledge stores.

{
  "mcpServers": {
    "bluera-knowledge": {
      "command": "node",
      "args": ["${CLAUDE_PLUGIN_ROOT}/dist/mcp/server.js"],
      "env": {
        "PROJECT_ROOT": "${PWD}",
        "DATA_DIR": ".bluera/bluera-knowledge/data",
        "CONFIG_PATH": ".bluera/bluera-knowledge/config.json"
      }
    }
  }
}

🎯 Context Efficiency Strategy

Why only 3 MCP tools?

Every MCP tool exposed requires its full schema to be included in the tool listing sent to Claude. More tools = more tokens consumed before Claude can even respond.

Design decision: Consolidate from 10+ tools down to 3:

| Approach | Tool Count | Context Cost | Trade-off |
|----------|------------|--------------|-----------|
| Individual tools | 10+ | ~800+ tokens | Simple calls, high overhead |
| Consolidated (current) | 3 | ~300 tokens | Minimal overhead, slightly longer commands |

How it works:

  1. Native tools for common workflow - search and get_full_context are the operations Claude uses most often, so they get dedicated tools with full schemas

  2. Meta-tool for management - The execute tool consolidates 8 store/job management commands into a single tool. Commands are discovered on-demand via execute("commands") or execute("help", {command: "store:create"})

  3. Lazy documentation - Command help isn't pre-sent with tool listings; it's discoverable when needed

Result: ~60% reduction in context overhead for MCP tool listings, without sacrificing functionality.
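
A minimal TypeScript sketch of the meta-tool pattern itself. The handler names and return shapes are illustrative, not the plugin's actual command registry:

// One exposed tool, many subcommands: a small dispatch table keeps the
// per-tool schema overhead constant as commands are added.
type CommandHandler = (args: Record<string, unknown>) => Promise<unknown>;

const commands = new Map<string, CommandHandler>([
  ["stores", async () => [{ name: "react", type: "repo" }]],                 // illustrative data
  ["store:index", async (args) => ({ started: true, store: args.store })],
  ["help", async (args) => `usage for ${String(args.command ?? "all commands")}`],
]);

async function execute(command: string, args: Record<string, unknown> = {}) {
  const handler = commands.get(command);
  if (!handler) {
    return { error: `unknown command: ${command}`, available: [...commands.keys()] };
  }
  return handler(args);
}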

[!TIP] This pattern—consolidating infrequent operations into a meta-tool while keeping high-frequency operations native—is a general strategy for MCP context efficiency.

🛠️ Available MCP Tools

The plugin exposes 3 MCP tools optimized for minimal context overhead:

search

🔍 Semantic vector search across all indexed stores or a specific subset. Returns structured code units with relevance ranking.

Parameters:

  • query - Search query (natural language, patterns, or type signatures)
  • intent - Search intent: find-pattern, find-implementation, find-usage, find-definition, find-documentation
  • detail - Context level: minimal, contextual, or full
  • limit - Maximum results (default: 10)
  • stores - Array of specific store IDs to search (optional, searches all stores if not specified)

get_full_context

📖 Retrieve complete code and context for a specific search result by ID.

Parameters:

  • resultId - The result ID from a previous search

execute

⚡ Meta-tool for store and job management. Consolidates 8 operations into one tool with subcommands.

Parameters:

  • command - Command to execute (see below)
  • args - Command-specific arguments (optional)

Available commands:

| Command | Args | Description |
|---------|------|-------------|
| stores | type? | List all knowledge stores |
| store:info | store | Get detailed store information including file path |
| store:create | name, type, source, branch?, description? | Create a new store |
| store:index | store | Re-index an existing store |
| store:delete | store | Delete a store and all data |
| stores:sync | dryRun?, prune?, reindex? | Sync stores from definitions config |
| jobs | activeOnly?, status? | List background jobs |
| job:status | jobId | Check specific job status |
| job:cancel | jobId | Cancel a running job |
| help | command? | Show help for commands |
| commands | - | List all available commands |
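
For example, a generic MCP client can call these tools directly. The TypeScript sketch below uses the @modelcontextprotocol/sdk client over stdio; the path to the server entry point is a placeholder you would adjust to your install, and exact connection details may vary by SDK version:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to the bluera-knowledge MCP server over stdio and run a search.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(
  new StdioClientTransport({
    command: "node",
    args: ["/path/to/bluera-knowledge/dist/mcp/server.js"], // placeholder path
  }),
);

const results = await client.callTool({
  name: "search",
  arguments: { query: "express middleware error handling", limit: 5 },
});
console.log(results);

// Store/job management goes through the execute meta-tool:
await client.callTool({ name: "execute", arguments: { command: "stores" } });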


🖥️ CLI Tool

While Bluera Knowledge works seamlessly as a Claude Code plugin, it's also available as a standalone CLI tool for use outside Claude Code.

[!NOTE] When using CLI without Claude Code installed, web crawling uses simple BFS mode. Install Claude Code to unlock --crawl (AI-guided URL selection) and --extract (AI content extraction) instructions.

Installation

Install globally via npm:

npm install -g bluera-knowledge

Or use in a project:

npm install --save-dev bluera-knowledge

Usage

Create a Store

# Add a Git repository
bluera-knowledge store create react --type repo --source https://github.com/facebook/react

# Add a local folder
bluera-knowledge store create my-docs --type file --source ./docs

# Add a web crawl
bluera-knowledge store create fastapi-docs --type web --source https://fastapi.tiangolo.com

Index a Store

bluera-knowledge index react

Search

# Search across all stores
bluera-knowledge search "how does useEffect work"

# Search specific stores
bluera-knowledge search "routing" --stores react,vue

# Get more results with full content
bluera-knowledge search "middleware" --limit 20 --include-content

# Filter irrelevant results (returns empty if nothing is truly relevant)
bluera-knowledge search "kubernetes deployment" --min-relevance 0.4

# Get JSON output with confidence and raw scores
bluera-knowledge search "express middleware" --format json

Search Options:

  • -s, --stores <stores> - Comma-separated store names/IDs
  • -m, --mode <mode> - hybrid (default), vector, or fts
  • -n, --limit <count> - Max results (default: 10)
  • -t, --threshold <score> - Min normalized score (0-1)
  • --min-relevance <score> - Min raw cosine similarity (0-1)
  • --include-content - Show full content in results
  • --detail <level> - minimal, contextual, or full

List Stores

bluera-knowledge store list
bluera-knowledge store list --type repo  # Filter by type

Store Info

bluera-knowledge store info react

Delete a Store

bluera-knowledge store delete old-store

Global Options

--config <path>      # Custom config file
--data-dir <path>    # Custom data directory
--format <format>    # Output format: json | table | plain
--quiet              # Suppress non-essential output
--verbose            # Enable verbose logging

When to Use CLI vs Plugin

Use CLI when:

  • Using an editor other than Claude Code (VSCode, Cursor, etc.)
  • Integrating into CI/CD pipelines
  • Scripting or automation
  • Pre-indexing dependencies for teams

Use Plugin when:

  • Working within Claude Code
  • Want slash commands (/bluera-knowledge:search)
  • Need Claude to automatically query your knowledge base
  • Want Skills to guide optimal usage

Both interfaces use the same underlying services, so you can switch between them seamlessly.


🎓 Skills for Claude Code

[!NOTE] Skills are a Claude Code-specific feature. They're automatically loaded when using the plugin but aren't available when using the npm package directly.

Bluera Knowledge includes built-in Skills that teach Claude Code how to use the plugin effectively. Skills provide procedural knowledge that complements the MCP tools.

📚 Available Skills

knowledge-search

Teaches the two approaches for accessing dependency sources:

  • Vector search via MCP/slash commands for discovery
  • Direct Grep/Read access to cloned repos for precision

When to use: Understanding how to query indexed libraries

when-to-query

Decision guide for when to query BK stores vs using Grep/Read on current project.

When to use: Deciding whether a question is about libraries or your project code

advanced-workflows

Multi-tool orchestration patterns for complex operations.

When to use: Progressive library exploration, adding libraries, handling large results

search-optimization

Guide on search parameters and progressive detail strategies.

When to use: Optimizing search results, choosing the right intent and detail level

store-lifecycle

Best practices for creating, indexing, and managing stores.

When to use: Adding new stores, understanding when to use repo/folder/crawl

🔄 MCP + Skills Working Together

Skills teach how to use the MCP tools effectively:

  • MCP provides the capabilities (search, get_full_context, execute commands)
  • Skills provide procedural knowledge (when to use which tool, best practices, workflows)

This hybrid approach reduces unnecessary tool calls and context usage while maintaining universal MCP compatibility.

Example:

  • MCP tool: search(query, intent, detail, limit, stores)
  • Skill teaches: Which intent for your question type, when to use detail='minimal' vs 'full', how to narrow with stores

Result: Fewer tool calls, more accurate results, less context consumed.


💾 Data Storage

Knowledge stores live under your project root:

<project-root>/.bluera/bluera-knowledge/
├── data/
│   ├── repos/<store-id>/       # Cloned Git repositories
│   ├── documents_*.lance/      # Vector indices (Lance DB)
│   └── stores.json             # Store registry
├── stores.config.json          # Store definitions (git-committable!)
└── config.json                 # Configuration

📋 Store Definitions (Team Sharing)

Store definitions are automatically saved to .bluera/bluera-knowledge/stores.config.json. This file is designed to be committed to git, allowing teams to share store configurations.

Example stores.config.json:

{
  "version": 1,
  "stores": [
    { "type": "file", "name": "my-docs", "path": "./docs" },
    { "type": "repo", "name": "react", "url": "https://github.com/facebook/react" },
    { "type": "web", "name": "api-docs", "url": "https://api.example.com/docs", "depth": 2 }
  ]
}

When a teammate clones the repo, they can run /bluera-knowledge:sync to recreate all stores locally.

🚫 Recommended .gitignore Patterns

When you first create a store, the plugin automatically updates your .gitignore with:

# Bluera Knowledge - data directory (not committed)
.bluera/
!.bluera/bluera-knowledge/
!.bluera/bluera-knowledge/stores.config.json

This ensures:

  • Vector indices and cloned repos are NOT committed (they're large and can be recreated)
  • Store definitions ARE committed (small JSON file for team sharing)

🛠️ Development

🚀 Setup

git clone https://github.com/blueraai/bluera-knowledge.git
cd bluera-knowledge
bun install
bun run build
bun test

Note: This project uses Bun for development. Install it via curl -fsSL https://bun.sh/install | bash

⚙️ Claude Code Settings (Recommended)

For the best development experience with Claude Code, copy the example settings file:

cp .claude/settings.local.json.example .claude/settings.local.json

This provides:

  • Smart validation - Automatically runs lint/typecheck after editing code (file-type aware)
  • No permission prompts - Pre-approves common commands (lint, typecheck, precommit)
  • Desktop notifications - macOS notifications when Claude needs your input
  • Plugin auto-enabled - Automatically enables the bluera-knowledge plugin
  • Faster workflow - Catch issues immediately without manual validation

The validation is intelligent - it only runs checks for TypeScript/JavaScript files, skipping docs/config to save time.

Note: The .claude/settings.local.json file is gitignored (local to your machine). The example file is checked in for reference.

🐕 Dogfooding

Develop this plugin while using it with Claude Code:

claude --plugin-dir /path/to/bluera-knowledge

This loads the plugin directly from source. Changes take effect on Claude Code restart (no reinstall needed).

| What to test | Approach |
|--------------|----------|
| Commands (/search, /add-repo) | --plugin-dir (changes need restart) |
| Hooks (job status, dependencies) | --plugin-dir (changes need restart) |
| MCP tools (compiled) | --plugin-dir (run bun run build first) |
| MCP tools (live TypeScript) | ~/.claude.json dev server (see below) |

🔌 MCP Server Development

Production mode (mcp.plugin.json):

  • Uses ${CLAUDE_PLUGIN_ROOT}/dist/mcp/server.js (compiled)
  • Distributed with plugin, no extra setup needed

Development mode (live TypeScript):

For instant feedback when editing MCP server code, add a dev server to ~/.claude.json:

{
  "mcpServers": {
    "bluera-knowledge-dev": {
      "command": "npx",
      "args": ["tsx", "/path/to/bluera-knowledge/src/mcp/server.ts"],
      "env": {
        "PWD": "${PWD}",
        "DATA_DIR": "${PWD}/.bluera/bluera-knowledge/data",
        "CONFIG_PATH": "${PWD}/.bluera/bluera-knowledge/config.json"
      }
    }
  }
}

This creates a separate bluera-knowledge-dev MCP server that runs source TypeScript directly via tsx - no rebuild needed for MCP changes.

📜 Commands

| Command | Description | When to Use |
|---------|-------------|-------------|
| bun run build | 🏗️ Compile TypeScript to dist/ | Before testing CLI, after code changes |
| bun run dev | 👀 Watch mode compilation | During active development |
| bun start | ▶️ Run the CLI | Execute CLI commands directly |
| bun test | 🧪 Run tests in watch mode | During TDD/active development |
| bun run test:run | ✅ Run tests once | Quick verification |
| bun run test:coverage | 📊 Run tests with coverage | Before committing, CI checks |
| bun run lint | 🔍 Run ESLint | Check code style issues |
| bun run typecheck | 🔒 Run TypeScript type checking | Verify type safety |
| bun run precommit | ✨ Smart validation (file-type aware) | Runs only relevant checks based on changed files |
| bun run prepush | 📊 Smart coverage (skips for docs/config) | Runs coverage only when src/tests changed |
| bun run lint:quiet | 🔇 ESLint (minimal output) | Used by git hooks |
| bun run typecheck:quiet | 🔇 Type check (minimal output) | Used by git hooks |
| bun run test:changed:quiet | 🔇 Test changed files (minimal output) | Used by git hooks |
| bun run test:coverage:quiet | 🔇 Coverage (minimal output) | Used by git hooks |
| bun run build:quiet | 🔇 Build (minimal output) | Used by git hooks |

🔄 Automatic Build & Dist Commit

The dist/ directory must be committed because Claude Code plugins are installed by copying files—there's no build step during installation.

Good news: This is fully automatic!

  1. On every commit, the pre-commit hook intelligently validates based on file types
  2. If source/config changed, it runs build and automatically stages dist/ via git add dist/
  3. You never need to manually build or stage dist — just commit your source changes

For live rebuilding during development:

bun run dev  # Watches for changes and rebuilds instantly

This is useful when testing CLI commands locally, but not required for committing — the hook handles everything.

| Command | Description | When to Use |
|---------|-------------|-------------|
| bun run version:patch | 🔢 Run quality checks, then bump patch version (0.0.x) | Bug fixes, minor updates |
| bun run version:minor | 🔢 Run quality checks, then bump minor version (0.x.0) | New features, backwards compatible |
| bun run version:major | 🔢 Run quality checks, then bump major version (x.0.0) | Breaking changes |

🚀 Releasing

# Bump version, commit, tag, and push (triggers GitHub Actions release)
bun run release:patch   # Bug fixes (0.0.x)
bun run release:minor   # New features (0.x.0)
bun run release:major   # Breaking changes (x.0.0)

Workflow (Fully Automated):

  1. Make changes and commit
  2. Bump version: bun run version:patch (runs quality checks first, then updates package.json, plugin.json, README, CHANGELOG)
  3. Commit version bump: git commit -am "chore: bump version to X.Y.Z"
  4. Push to main: git push
  5. GitHub Actions automatically:
    • ✅ Runs CI (lint, typecheck, tests, build)
    • ✅ Creates release tag when CI passes
    • ✅ Creates GitHub release
    • ✅ Updates marketplace

Note: The version command runs full quality checks (format, lint, deadcode, typecheck, coverage, build) BEFORE bumping to catch issues early.

💡 That's it! No manual tagging needed. Just push to main and the release happens automatically when CI passes.

🔍 Post-Release Validation

After a release, validate the npm package works correctly:

bun run validate:npm

This script:

  • Installs the latest bluera-knowledge from npm globally
  • Exercises all CLI commands (stores, add-folder, search, index, delete)
  • Writes detailed logs to logs/validation/npm-validation-*.log
  • Returns exit code 0 on success, 1 on failure

Use this to catch any packaging or runtime issues after npm publish.

🧪 Plugin Self-Test

Test all plugin functionality from within Claude Code:

/test-plugin

This command runs 13 tests covering:

  • MCP Tools: execute (help, stores, create, info, index), search, get_full_context
  • Slash Commands: /stores, /search, /suggest
  • Cleanup: Store deletion, artifact removal, verification

The test creates temporary content, exercises all features, and cleans up automatically. Use this to verify the plugin is working correctly after installation or updates.

🧪 Testing Locally

Option 1: Development MCP Server (Recommended)

Use the local development MCP server (see "MCP Server" section above) which runs your source code directly via tsx:

  1. Set up dev MCP server in ~/.claude.json (see MCP Server section)
  2. Test your changes - MCP server updates automatically as you edit code

Option 2: Test Plugin from Working Directory

Load the plugin directly from your development directory:

cd /path/to/bluera-knowledge
claude --plugin-dir .

The MCP config in plugin.json is only loaded when the directory is loaded as a plugin (via --plugin-dir or marketplace install), so there's no conflict with project-level MCP config.

Option 3: CLI Tool Testing

# Build and link
cd /path/to/bluera-knowledge
bun run build
bun link

# Now 'bluera-knowledge' command is available globally
cd ~/your-project
bluera-knowledge search "test query" my-store

For testing as an installed plugin: This requires publishing a new version to the marketplace.

📂 Project Structure

.claude-plugin/
└── plugin.json          # Plugin metadata (references mcp.plugin.json)

mcp.plugin.json          # MCP server configuration (plugin-scoped)
commands/                # Slash commands (auto-discovered)
skills/                  # Agent Skills (auto-discovered)
├── knowledge-search/    # How to access dependency sources
│   └── SKILL.md
├── when-to-query/       # When to query BK vs project files
│   └── SKILL.md
├── advanced-workflows/  # Multi-tool orchestration patterns
│   └── SKILL.md
├── search-optimization/ # Search parameter optimization
│   └── SKILL.md
└── store-lifecycle/     # Store management best practices
    └── SKILL.md
dist/                    # Built MCP server (committed for distribution)

src/
├── analysis/            # Dependency analysis & URL resolution
├── crawl/               # Web crawling with Python bridge
├── services/            # Index, store, and search services
├── mcp/                 # MCP server source
└── cli/                 # CLI entry point

tests/
├── integration/         # Integration tests
└── fixtures/            # Test infrastructure

🔬 Technologies

  • 🔌 Claude Code Plugin System with MCP server
  • ✅ Runtime Validation - Zod schemas for Python-TypeScript boundary
  • 🌳 AST Parsing - @babel/parser for JS/TS, Python AST module, tree-sitter for Rust and Go
  • 🗺️ Code Graph - Static analysis of function calls, imports, and class relationships
  • 🧠 Semantic Search - AI-powered vector embeddings with LanceDB
  • 📦 Git Operations - Native git clone
  • 💻 CLI - Commander.js
  • 🕷️ Web Crawling - crawl4ai with Playwright (headless browser)

🤝 Contributing

Contributions welcome! Please:

  1. 🍴 Fork the repository
  2. 🌿 Create a feature branch
  3. ✅ Add tests
  4. 📬 Submit a pull request

📄 License

MIT - See LICENSE for details.

🙏 Acknowledgments

This project includes software developed by third parties. See NOTICE for full attribution.

Key dependencies:


💬 Support