# @flowrag/provider-local

v1.5.2

Local AI provider for FlowRAG: ONNX embeddings and reranking, fully offline, with local extraction planned for a future release.
## Installation

```bash
npm install @flowrag/provider-local
```

## Usage
### Embedder

```ts
import { LocalEmbedder } from '@flowrag/provider-local';

const embedder = new LocalEmbedder({
  model: 'Xenova/e5-small-v2', // optional, default
  dtype: 'q8',                 // optional: 'fp32', 'q8', 'q4'
  device: 'auto',              // optional: 'auto', 'cpu', 'gpu'
});

// Single embedding
const embedding = await embedder.embed('Hello world');

// Batch embeddings
const embeddings = await embedder.embedBatch(['Hello', 'World']);
```

### Supported Models
- `Xenova/e5-small-v2` (384 dims) - Default, fast
- `Xenova/e5-base-v2` (768 dims) - Better quality
- `Xenova/e5-large-v2` (1024 dims) - Best quality
- `Xenova/all-MiniLM-L6-v2` (384 dims) - Compact
- `Xenova/all-mpnet-base-v2` (768 dims) - Good balance
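Whichever model you pick, the embedder returns plain numeric vectors, so retrieval usually boils down to cosine similarity between a query vector and stored document vectors. A minimal self-contained sketch (the `cosineSimilarity` helper is illustrative, not part of this package):

```ts
// Cosine similarity between two embedding vectors.
// Note: this helper is our own illustration, not a @flowrag/provider-local export.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// With real embeddings you would compare vectors from the embedder, e.g.
//   const [a, b] = await embedder.embedBatch(['cat', 'kitten']);
// Toy vectors here keep the example runnable without model downloads:
const a = [1, 0, 1];
const b = [1, 1, 0];
console.log(cosineSimilarity(a, b)); // 0.5 for these toy vectors
```

Dimensions must match, so vectors compared this way should all come from the same model.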
### Reranker

```ts
import { LocalReranker } from '@flowrag/provider-local';

const reranker = new LocalReranker();
// Uses the Xenova/ms-marco-MiniLM-L-6-v2 cross-encoder
```

## Environment Variables
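A cross-encoder scores each (query, document) pair jointly instead of embedding them separately, then the candidates are sorted by score. The reranker's exact method signature isn't shown here, so the sketch below only illustrates the sort-by-score step with a stand-in scorer; `rankByScore` and `overlap` are our own hypothetical helpers, not package APIs:

```ts
// Sketch of what reranking does: sort documents by a per-document
// relevance score for a given query. In the real package, the
// cross-encoder model produces the scores; here a toy word-overlap
// scorer stands in so the example runs without model downloads.
interface Scored { doc: string; score: number }

function rankByScore(docs: string[], score: (doc: string) => number): Scored[] {
  return docs
    .map((doc) => ({ doc, score: score(doc) }))
    .sort((x, y) => y.score - x.score); // highest score first
}

const query = 'local embeddings';
// Toy scorer: count of document words appearing in the query.
const overlap = (doc: string) =>
  doc.split(' ').filter((w) => query.includes(w)).length;

const ranked = rankByScore(
  ['cloud embeddings api', 'local embeddings with onnx', 'unrelated text'],
  overlap,
);
console.log(ranked[0].doc); // 'local embeddings with onnx'
```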
| Variable | Description | Default |
|----------|-------------|---------|
| `HF_HOME` | Custom cache directory for downloaded models | `node_modules/@huggingface/transformers/.cache/` |
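By default, models are cached inside the package's `node_modules` directory, which means every project re-downloads them. Setting `HF_HOME` to a shared location avoids that; the path below is an example, not a requirement:

```shell
# Cache downloaded models in a shared directory instead of
# node_modules, so multiple projects reuse the same files.
export HF_HOME="$HOME/.cache/huggingface"
```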
## License
MIT
