# @arcaelas/rag

v1.2.0

MCP server with Ollama + Vectra for semantic memory and RAG operations.
Build intelligent AI agents with persistent semantic memory: store, search, and retrieve knowledge as vector embeddings, powered by local Ollama models and the Vectra vector database.
## Features
- 🧠 Semantic memory with vector embeddings
- 🔍 Semantic search using similarity
- 📦 Bulk import/export via JSONL
- 🚀 Local-first with Ollama and Vectra
- 🔧 Zero configuration with sensible defaults
## Prerequisites

- Node.js >= 18
- Ollama running locally
- An embedding model installed (e.g., `ollama pull nomic-embed-text`)
## Installation

### Using npx (recommended)

Add to your `~/.claude.json`:

```json
{
  "mcpServers": {
    "rag": {
      "command": "npx",
      "args": ["-y", "@arcaelas/rag"],
      "env": {
        "OLLAMA_HOSTNAME": "http://localhost:11434",
        "OLLAMA_MODEL_NAME": "nomic-embed-text"
      }
    }
  }
}
```

### Global installation
```bash
npm install -g @arcaelas/rag
# Or with yarn
yarn global add @arcaelas/rag
```

Then in `~/.claude.json`:

```json
{
  "mcpServers": {
    "rag": {
      "command": "rag",
      "args": [],
      "env": {
        "OLLAMA_HOSTNAME": "http://localhost:11434",
        "OLLAMA_MODEL_NAME": "nomic-embed-text"
      }
    }
  }
}
```

## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `OLLAMA_HOSTNAME` | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_MODEL_NAME` | `nomic-embed-text` | Embedding model name |
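Outside an MCP client, the same variables can be set in a shell for a quick smoke test. A minimal sketch, using the default values from the table above (launching the server assumes a local Ollama instance is already running):

```bash
# Defaults from the table above; adjust if Ollama listens elsewhere.
export OLLAMA_HOSTNAME="http://localhost:11434"
export OLLAMA_MODEL_NAME="nomic-embed-text"

# Then launch the server directly:
# npx -y @arcaelas/rag
```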
## Available Tools

### save(content, metadata?)

Save knowledge to the semantic memory database.

```javascript
await save("TypeScript is a typed superset of JavaScript", {
  category: "programming",
  importance: "high"
})
```

### search(query, limit?)

Search the knowledge base using semantic similarity.

```javascript
await search("typed javascript", 5)
```

### list(offset?, limit?)

List stored memories with pagination.

```javascript
await list(0, 10)
```

### get(id)

Retrieve a specific memory by ID.

```javascript
await get("uuid-here")
```

### tag(id, tag)

Add a tag to a memory.

```javascript
await tag("uuid-here", "important")
```

### destroy(id)

Permanently delete a memory.

```javascript
await destroy("uuid-here")
```

### upload(filename)

Bulk import memories from a JSONL file.

```javascript
await upload("/path/to/memories.jsonl")
// Returns: { filename, done: 299, error: [{ line: 297, error: "..." }] }
```

### download(offset?, limit?, filename?)

Export memories to a JSONL file.

```javascript
await download(0, 100, "/path/to/export.jsonl")
// Returns: { filename, offset, limit, count: 303 }
```

## JSONL Format
Each line in the JSONL file should be a standalone JSON object:

```json
{"context": "Your knowledge content here"}
```

Optional fields:

```json
{
  "context": "Content here",
  "metadata": {
    "category": "programming",
    "importance": "high",
    "project": "my-project"
  },
  "tags": "tag1,tag2,tag3"
}
```
## Data Storage

The vector database is stored in:

- npx/global install: `~/.cache/@arcaelas/rag/data/`
- Local install: `<project-root>/data/`

Collection name: `arcaelas_mcp_rag_collection`
## Development

```bash
# Clone repository
git clone https://github.com/arcaelas/rag.git
cd rag

# Install dependencies
yarn install

# Build
yarn build

# Run locally
yarn start

# Watch mode
yarn dev
```

## Contributing
Contributions are welcome! Please read our contributing guidelines before submitting PRs.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'feat: add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Security

See SECURITY.md for security policies and reporting vulnerabilities.

## Changelog

See CHANGELOG.md for release history.

## License

MIT © Miguel Guevara (Arcaela)
## Support

- 📧 Email: [email protected]
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
