# LokiCMS Vectors Plugin
Vector search and semantic search plugin for LokiCMS with multi-provider embedding support.
## Features
- Semantic Search: Find content by meaning, not just keywords
- Hybrid Search: Combine semantic and keyword matching for best results
- Multi-Provider Support: Ollama, OpenAI, and Local TF-IDF
- 9 MCP Tools: Full vector search management via Claude/AI agents
- Auto-Indexing: Automatically index new and updated entries
- Similar Content: Find entries similar to a given entry
## Installation

```bash
npm install lokicms-plugin-vectors
```

## Quick Start
### 1. Register the Plugin
```typescript
import { createLokiCMS } from 'lokicms';
import vectorsPlugin from 'lokicms-plugin-vectors';

const cms = createLokiCMS({
  plugins: [vectorsPlugin],
  vectors: {
    provider: 'ollama', // or 'openai', 'local'
    autoIndex: true,
  },
});
```

### 2. Use via MCP Tools
The plugin registers 9 MCP tools that AI agents can use:
| Tool | Description |
|------|-------------|
| `vectors_status` | Get service status and statistics |
| `vectors_search` | Semantic search by query |
| `vectors_hybrid_search` | Combined semantic + keyword search |
| `vectors_index_entry` | Index a single entry |
| `vectors_index_all` | Batch index all entries |
| `vectors_remove` | Remove an entry from the index |
| `vectors_reindex` | Force reindex of an entry |
| `vectors_configure` | Change provider settings |
| `vectors_similar` | Find similar entries |
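For example, an agent would typically check service health before running a batch index. A `vectors_status` call takes no arguments (the empty input object is an assumption about the tool's schema):

```json
{
  "tool": "vectors_status",
  "input": {}
}
```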
### 3. Use in Code
```typescript
import { getVectorService } from 'lokicms-plugin-vectors';

const vectorService = getVectorService();

// Semantic search
const results = await vectorService.search('machine learning tutorials', {
  limit: 10,
  minSimilarity: 0.5,
});

// Hybrid search
const hybridResults = await vectorService.hybridSearch('react hooks', {
  keywordWeight: 0.3,
});

// Find similar content
const similar = await vectorService.findSimilar('entry_123', {
  limit: 5,
});
```

## Configuration
### Ollama (Default)
```typescript
{
  vectors: {
    provider: 'ollama',
    ollama: {
      baseUrl: 'http://localhost:11434',
      model: 'all-minilm',
      dimensions: 384,
    },
  },
}
```

Requirements: Ollama running with an embedding model installed.
```bash
# Pull an embedding model (Ollama must already be installed)
ollama pull all-minilm
```

### OpenAI
```typescript
{
  vectors: {
    provider: 'openai',
    openai: {
      apiKey: process.env.OPENAI_API_KEY,
      model: 'text-embedding-3-small', // or 'text-embedding-3-large'
    },
  },
}
```

### Local TF-IDF
```typescript
{
  vectors: {
    provider: 'local',
    local: {
      maxVocabSize: 10000,
      minDocFreq: 2,
    },
  },
}
```

No external dependencies. Good for development and testing.
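As a rough illustration of what a TF-IDF provider does, here is a minimal sketch in TypeScript. The `tokenize`, `buildVocab`, and `tfidf` helpers are hypothetical and heavily simplified; the plugin's actual implementation is not guaranteed to match this.

```typescript
// Split text into lowercase word tokens.
function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
}

// Build a vocabulary of terms appearing in at least `minDocFreq` documents
// (mirroring the `minDocFreq` option above), mapping each term to an index.
function buildVocab(docs: string[], minDocFreq = 1): Map<string, number> {
  const docFreq = new Map<string, number>();
  for (const doc of docs) {
    for (const term of new Set(tokenize(doc))) {
      docFreq.set(term, (docFreq.get(term) ?? 0) + 1);
    }
  }
  const vocab = new Map<string, number>();
  for (const [term, df] of docFreq) {
    if (df >= minDocFreq) vocab.set(term, vocab.size);
  }
  return vocab;
}

// Embed one document as a TF-IDF vector: term frequency weighted by
// inverse document frequency (rare terms count more than common ones).
function tfidf(doc: string, docs: string[], vocab: Map<string, number>): number[] {
  const vec = new Array<number>(vocab.size).fill(0);
  const tokens = tokenize(doc);
  for (const term of tokens) {
    const idx = vocab.get(term);
    if (idx === undefined) continue;
    // Recomputing df per token is O(n^2) -- fine for a sketch, not for production.
    const df = docs.filter((d) => new Set(tokenize(d)).has(term)).length;
    const idf = Math.log((1 + docs.length) / (1 + df)) + 1;
    vec[idx] += (1 / tokens.length) * idf;
  }
  return vec;
}
```

Because the vocabulary is built from your own corpus, vector dimensions vary per site, which is why the comparison table below lists the local provider's dimensions as "Variable".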
## MCP Tool Examples
### Search for Content
```json
{
  "tool": "vectors_search",
  "input": {
    "query": "how to deploy to production",
    "limit": 5,
    "minSimilarity": 0.4
  }
}
```

### Hybrid Search
```json
{
  "tool": "vectors_hybrid_search",
  "input": {
    "query": "typescript generics",
    "keywordWeight": 0.3
  }
}
```

### Index an Entry
```json
{
  "tool": "vectors_index_entry",
  "input": {
    "entryId": "post_abc123"
  }
}
```

### Find Similar Entries
```json
{
  "tool": "vectors_similar",
  "input": {
    "entryId": "post_abc123",
    "limit": 5
  }
}
```

### Change Provider
```json
{
  "tool": "vectors_configure",
  "input": {
    "provider": "openai",
    "openaiApiKey": "sk-...",
    "openaiModel": "text-embedding-3-small"
  }
}
```

## API Reference
### VectorService
```typescript
interface VectorService {
  // Initialize the service
  initialize(): Promise<void>;

  // Check if the service is ready
  isReady(): Promise<boolean>;

  // Get statistics
  getStats(): Promise<VectorStats>;

  // Semantic search
  search(query: string, options?: SearchOptions): Promise<SearchResult>;

  // Hybrid search (semantic + keyword)
  hybridSearch(query: string, options?: HybridSearchOptions): Promise<SearchResult>;

  // Index a single entry
  indexEntry(entryId: string): Promise<IndexResult>;

  // Index all entries
  indexAll(options?: BatchIndexOptions): Promise<BatchIndexResult>;

  // Remove an entry from the index
  removeEntry(entryId: string): Promise<boolean>;

  // Find similar entries
  findSimilar(entryId: string, options?: SearchOptions): Promise<SearchResult>;

  // Reconfigure the provider
  configure(config: Partial<VectorsPluginConfig>): Promise<void>;

  // Get the current provider
  getProvider(): EmbeddingProvider;
}
```

### SearchOptions
```typescript
interface SearchOptions {
  limit?: number;          // Max results (default: 10)
  contentType?: string;    // Filter by content type
  minSimilarity?: number;  // Threshold 0-1 (default: 0.3)
}
```

### HybridSearchOptions
```typescript
interface HybridSearchOptions extends SearchOptions {
  keywordWeight?: number;  // Keyword vs semantic weight (default: 0.3)
}
```

### SearchResult
```typescript
interface SearchResult {
  results: SearchResultItem[];
  total: number;
  query: string;
  took: number; // Milliseconds
  mode: 'semantic' | 'hybrid';
}

interface SearchResultItem {
  id: string;
  entryId: string;
  title: string;
  slug: string;
  contentType: string;
  similarity: number; // 0-1
  excerpt?: string;
}
```

## Provider Comparison
| Feature | Ollama | OpenAI | Local |
|---------|--------|--------|-------|
| Quality | High | Highest | Medium |
| Speed | Fast | Varies | Fastest |
| Cost | Free | Per token | Free |
| Privacy | Local | Cloud | Local |
| Offline | Yes | No | Yes |
| Dimensions | 384 | 1536/3072 | Variable |
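Whatever the provider, search compares the query's embedding against stored entry embeddings, typically by cosine similarity. That this is how the plugin computes its 0-1 `similarity` scores is an assumption, but it is the standard choice for embedding search. A minimal sketch:

```typescript
// Cosine similarity: dot product divided by the product of magnitudes.
// The raw value lies in [-1, 1]; for typical text embeddings it is usually
// non-negative, with values near 1 meaning "very similar".
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}
```

Note the `dimension mismatch` guard: vectors from different providers (or different dimensions, per the table above) are not comparable, so after switching providers the index needs to be rebuilt.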
## Best Practices
- Use Hybrid Search for user-facing search: it combines semantic understanding with exact keyword matching
- Set Appropriate Thresholds: start with a 0.3 similarity threshold and adjust based on results
- Index Strategically: enable auto-indexing, or batch index during off-peak hours
- Choose the Right Provider:
  - Development: Local TF-IDF (no setup required)
  - Production with privacy requirements: Ollama
  - Best quality: OpenAI
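The `keywordWeight` blend used by hybrid search is most naturally read as a linear interpolation between the two scores. This sketch assumes that formula (the plugin's exact scoring may differ), and `hybridScore` is a hypothetical helper, not part of the plugin's API:

```typescript
// Blend a keyword (exact-match) score with a semantic (embedding) score.
// keywordWeight = 0 -> purely semantic; keywordWeight = 1 -> purely keyword.
// The documented default of 0.3 keeps semantic similarity dominant.
function hybridScore(semantic: number, keyword: number, keywordWeight = 0.3): number {
  if (keywordWeight < 0 || keywordWeight > 1) {
    throw new RangeError("keywordWeight must be in [0, 1]");
  }
  return keywordWeight * keyword + (1 - keywordWeight) * semantic;
}
```

Under this reading, raising `keywordWeight` favors entries containing the literal query terms, which is why a modest value like 0.3 works well for user-facing search.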
## License
MIT
