# @rembr/client
Client configuration package for REMBR - semantic memory for AI agents and assistants.
## What is REMBR?
REMBR is a hosted Model Context Protocol (MCP) server that provides persistent, searchable semantic memory for AI agents. It's designed for:
- GitHub Copilot Agents - Give your agents long-term memory
- Cursor - Persistent context for Cursor AI
- Windsurf - Cascade flow coordination with shared memory
- Claude Desktop - Enhanced context across conversations
- Recursive Learning Machines (RLMs) - Context management for recursive decomposition
- Multi-Agent Systems - Shared knowledge base across agent teams
## Features
✨ 19 MCP Tools for comprehensive memory management:
- Core Memory: store, search, update, delete, list
- Advanced Search: phrase search, semantic search, metadata filtering
- Discovery: find similar memories, get embedding stats
- RLM Support: contexts, snapshots, memory graphs
- Analytics: usage stats, contradiction detection, insights
🔍 4 Search Modes:
- Hybrid (default) - 0.7 semantic + 0.3 text matching
- Semantic - Conceptual similarity (finds "OAuth" when you search "authentication")
- Text - Fast fuzzy keyword matching
- Phrase - Multi-word exact matching ("rate limiting" not "limit the rate")
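This README does not show how a mode is chosen per query. As a rough sketch, assuming `search_memory` accepts a `mode` field alongside `query` (an assumption, not a documented parameter), selecting a mode might look like this:

```javascript
// NOTE: the "mode" field is an assumption for illustration, not a documented parameter.

// Hybrid (default): 0.7 semantic + 0.3 text matching
search_memory({ query: "authentication", limit: 5 })

// Semantic: conceptual similarity (can surface OAuth-related memories)
search_memory({ query: "authentication", mode: "semantic" })

// Text: fast fuzzy keyword matching
search_memory({ query: "authentication", mode: "text" })

// Phrase: exact multi-word matching
search_memory({ query: "rate limiting", mode: "phrase" })
```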
🎯 Built for RLMs: Task isolation, metadata filtering, progressive refinement
💰 Affordable: From free (100 memories) to £100/mo (1M memories)
## Installation

```bash
npm install -D @rembr/client
```

Installation will automatically configure:

1. MCP server connections for VS Code, Cursor, Windsurf, and Claude Desktop
2. Agent instructions for each tool
3. Project-specific configuration files

Or run setup manually:

```bash
npx rembr setup
```

## What Gets Configured
### MCP Server Connections

Adds the REMBR MCP server to:

- `.vscode/mcp.json` - VS Code + GitHub Copilot
- `.cursor/mcp.json` - Cursor
- `.windsurf/mcp.json` - Windsurf
- `~/Library/Application Support/Claude/claude_desktop_config.json` - Claude Desktop (global)
```json
{
  "servers": {
    "rembr": {
      "url": "https://rembr.ai/mcp",
      "type": "http"
    }
  }
}
```

### Agent Instructions
Creates tool-specific instruction files:
1. GitHub Copilot (.github/agents/recursive-analyst.agent.md)
- Formal agent definition with tool access
- RLM pattern implementation
- Structured subagent protocol
2. Cursor (.cursorrules)
- Cursor-specific REMBR integration patterns
- Memory management examples
- Subtask coordination

**Add your API key to your tool**
**VS Code / GitHub Copilot:**

- Open Settings (Cmd+, or Ctrl+,) → Search "MCP"
- Add to `.vscode/settings.json`:

  ```json
  {
    "mcp.servers.rembr.env": {
      "REMBR_API_KEY": "rembr_live_xxxxxxxxxxxx"
    }
  }
  ```

- Reload window: Cmd+Shift+P → "Developer: Reload Window"
- Test: `@Recursive-Analyst what tasks have I worked on?`
**Cursor:**

- Settings → MCP → rembr → Environment Variables
- Add `REMBR_API_KEY=rembr_live_xxxxxxxxxxxx`
- Restart Cursor
- Test: Ask Cursor to "search REMBR for authentication patterns"
**Windsurf:**

- Settings → MCP → rembr → Environment Variables
- Add `REMBR_API_KEY=rembr_live_xxxxxxxxxxxx`
- Restart Windsurf
- Test: Use REMBR tools in a Cascade flow
**Claude Desktop:**

- Edit `~/Library/Application Support/Claude/claude_desktop_config.json`
- Add under `mcpServers.rembr`:

  ```json
  {
    "mcpServers": {
      "rembr": {
        "url": "https://rembr.ai/mcp",
        "type": "http",
        "env": {
          "REMBR_API_KEY": "rembr_live_xxxxxxxxxxxx"
        }
      }
    }
  }
  ```

- Restart Claude Desktop
- Test: "Use REMBR to remember that I prefer TypeScript"
**Aider:**

- Export in your shell: `export REMBR_API_KEY=rembr_live_xxxxxxxxxxxx`
- Use the bash aliases from `.aider.conf.yml`
- Test: `rembr-query "recent changes"`
**Get an API key:**

- Go to Dashboard → Settings → API Keys
- Create a new key

**Configure VS Code:**

- Open VS Code Settings (Cmd+, or Ctrl+,)
- Search for "MCP"
- Add environment variable: `REMBR_API_KEY=your_key_here`

Or add to `.vscode/settings.json`:

```json
{
  "mcp.servers.rembr.env": {
    "REMBR_API_KEY": "rembr_live_xxxxxxxxxxxx"
  }
}
```

**Reload VS Code:**

- Cmd+Shift+P → "Developer: Reload Window"

**Test the connection:**

- Open GitHub Copilot Chat
- Try: `@Recursive-Analyst what tasks have I worked on recently?`
## Available MCP Tools
Once configured, these tools are available to all agents:
### Memory Management

- `store_memory` - Store new memories with categories and metadata
- `search_memory` - Hybrid semantic + text search
- `list_memories` - List recent memories by category
- `get_memory` - Retrieve specific memory by ID
- `delete_memory` - Remove a memory
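`store_memory` and `search_memory` are demonstrated below. For the other three, here is a minimal sketch; the `id` and `limit` parameter names follow the pattern of the documented examples but are assumptions:

```javascript
// List the ten most recent memories in a category (parameter names assumed)
list_memories({ category: "facts", limit: 10 })

// Retrieve a single memory by ID, then remove it
const memory = get_memory({ id: "mem_abc123" })
delete_memory({ id: "mem_abc123" })
```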
### Context Management (RLM features)

- `create_context` - Create workspace for related memories
- `add_memory_to_context` - Link memories to contexts
- `search_context` - Scoped search within a context
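The context tools are not demonstrated elsewhere in this README. The sketch below shows one plausible flow, assuming `create_context` returns an object with an `id` and that field names mirror the documented memory calls (both are assumptions):

```javascript
// Create an isolated workspace for one RLM task (field names are assumptions)
const ctx = create_context({
  name: "auth-refactor",
  description: "Memories gathered while refactoring authentication"
})

// Attach an existing memory to that context
add_memory_to_context({ contextId: ctx.id, memoryId: "mem_abc123" })

// Search only within the context, keeping subtask results isolated
search_context({ contextId: ctx.id, query: "token refresh strategy", limit: 5 })
```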
### Storing Memories

**In GitHub Copilot / Cursor / Windsurf / Claude Desktop:**

```javascript
store_memory({
category: "facts",
content: "The payment API uses Stripe webhooks for event processing",
metadata: {
area: "payments",
file: "src/webhooks/stripe.ts"
}
})
```

**In Aider (via bash alias):** use the corresponding alias from `.aider.conf.yml`.

### Searching Memories

**In MCP-enabled tools:**

```javascript
search_memory({
query: "how do we handle Stripe webhooks",
category: "facts",
limit: 5
})
```

**In Aider:**

```bash
rembr-query "how do we handle Stripe webhooks"
```

## Memory Categories
Organize memories using semantic categories:
- **facts** - Concrete information and data points
- **preferences** - User preferences and settings
- **conversations** - Conversation history and context
- **projects** - Project-specific information
- **learning** - Knowledge and insights learned
- **goals** - Objectives and targets
- **context** - Situational context
- **reminders** - Future actions and reminders
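For example, the same `store_memory` call shown elsewhere in this README can target any of these categories (content values are illustrative; `metadata` is omitted for brevity):

```javascript
// Store a user preference
store_memory({
  category: "preferences",
  content: "User prefers TypeScript with strict null checks enabled"
})

// Store a reminder for a future session
store_memory({
  category: "reminders",
  content: "Revisit the Stripe webhook retry logic after the payments migration"
})
```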
## Example Usage
### Basic Memory Storage
```javascript
// In GitHub Copilot Chat or agent
store_memory({
category: "facts",
content: "The payment API uses Stripe webhooks for event processing",
metadata: {
area: "payments",
file: "src/webhooks/stripe.ts"
}
})
```

### Subagent/Flow Context
**GitHub Copilot (Subagents):**
```javascript
// Parent agent retrieves context for subagent
const context = search_memory({
query: "authentication middleware patterns",
category: "facts",
limit: 10
})
// Spawn subagent with this context
// Subagent stores findings with taskId metadata
// Parent retrieves subagent findings later
```

**Windsurf (Cascade Flows):**

```javascript
// Before creating a flow, retrieve relevant context
const context = search_memory({
query: "API rate limiting patterns",
category: "facts"
})
// Create flow with this context
// Flow stores findings for other flows to use
```

**Cursor / Claude Desktop:**

```javascript
// Before breaking down a task, query REMBR
const priorWork = search_memory({
query: "database migration strategies",
limit: 5
})
// Use context to inform implementation
// Store new insights back to REMBR
```

## Pricing
- Free Dev Tier: 100 memories, 1,000 searches/day
- Pro: £10/month - 10,000 memories, 100,000 searches
- Team: £30/month - 100,000 memories, 1M searches
- Enterprise: £100/month - 1M memories, unlimited searches
All tiers include:
- Hybrid semantic + text search
- Multi-tenant isolation
- OAuth support for Claude Desktop
- Full API access
## Documentation
- Website: https://rembr.ai
- API Docs: https://rembr.ai/docs
- MCP Spec: https://modelcontextprotocol.io
- RLM Paper: https://github.com/alexzhang13/rlm
## Support
- Email: [email protected]
- Issues: https://github.com/radicalgeek/rembr-client/issues
- Slack: https://rembr.ai/slack
## License
MIT
