# @memly/mcp-server

v0.3.0 — Memly MCP Server: persistent memory for any IDE
Persistent memory for IDEs that don't support custom LLM providers.
## Proxy vs MCP — which one should I use?
Memly has two integration modes:
| Mode | How it works | Best for |
|------|-------------|----------|
| Proxy (transparent) | Point your IDE's base URL at `api.memly.site` — memory is injected into every request, invisibly | IDEs that support a custom OpenAI-compatible endpoint |
| MCP server (this package) | Memly runs as an MCP tool — `load_context` is called automatically at session start | IDEs locked to their own AI service |
## Which mode for my IDE?

### ✅ Use the Proxy — supports custom endpoint
| IDE / Tool | Where to configure |
|-----------|-------------------|
| Cursor | Settings → Models → OpenAI Base URL |
| Continue.dev (VS Code / JetBrains extension) | `config.json` → `apiBase` |
| Zed | `settings.json` → `api_url` |
| Aider | `--openai-api-base` CLI flag |
| Jan.ai | Settings → Model → Engine URL |
| LM Studio | Server settings |
| Open WebUI | Admin → Connections |
| Msty | Settings → Custom provider |
| Void | Settings → Custom provider |
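As a sketch, a minimal Continue.dev `config.json` model entry pointed at the proxy might look like the following. The model name is only an example, and both the `/v1` suffix on the base URL and the use of your Memly key as `apiKey` are assumptions — check your Memly dashboard for the exact values:

```json
{
  "models": [
    {
      "title": "GPT-4o via Memly",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://api.memly.site/v1",
      "apiKey": "memly_your_key_here"
    }
  ]
}
```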
### ❌ Use the MCP server — locked provider, no custom endpoint

| IDE / Tool | Reason |
|-----------|--------|
| VS Code + GitHub Copilot | Microsoft/GitHub auth — base URL not configurable |
| JetBrains AI Assistant | Locked to JetBrains AI Service subscription |
| Windsurf (Codeium) | Cascade uses Codeium's own models — no external endpoint |
| Amazon Q / CodeWhisperer | AWS-only pipeline |
| Tabnine | Proprietary closed service |
| Replit AI | Built into Replit, no external provider |
| Gitpod AI | Locked to their own service |
| Sourcegraph Cody | Locked on free tier |
| Claude Desktop | Chat app, not an IDE — MCP native |
## Quick Start

### 1. Get your API key
Go to memly.site/dashboard/api-keys
### 2. Configure your IDE's MCP server
**VS Code / GitHub Copilot** — create `.vscode/mcp.json` in your project:

```json
{
  "servers": {
    "memly": {
      "command": "npx",
      "args": ["-y", "@memly/mcp-server"],
      "env": {
        "MEMLY_API_KEY": "memly_your_key_here"
      }
    }
  }
}
```

**JetBrains (IntelliJ, WebStorm, PyCharm…)** — create `.idea/mcp.json`:
```json
{
  "servers": {
    "memly": {
      "command": "npx",
      "args": ["-y", "@memly/mcp-server"],
      "env": {
        "MEMLY_API_KEY": "memly_your_key_here"
      }
    }
  }
}
```

**Windsurf** — Settings → Cascade → MCP Servers → Add:

- Command: `npx`
- Args: `-y @memly/mcp-server`
- Env: `MEMLY_API_KEY=memly_your_key_here`
**Claude Desktop** — edit `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "memly": {
      "command": "npx",
      "args": ["-y", "@memly/mcp-server"],
      "env": {
        "MEMLY_API_KEY": "memly_your_key_here"
      }
    }
  }
}
```

### 3. Run auto-setup once per project
This writes the instruction file for your IDE so `load_context` runs automatically at every session start:

```shell
npx @memly/mcp-server init
```

Auto-detects VS Code, Cursor, Windsurf, and Claude Desktop. Run it once and never think about it again.
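As a quick sanity check, any of the mcp.json files above can be parsed with Node (already installed if you use npx) before reloading the IDE. This sketch assumes the VS Code location:

```shell
# Write the VS Code config from step 2, then confirm it parses as JSON.
mkdir -p .vscode
cat > .vscode/mcp.json <<'EOF'
{
  "servers": {
    "memly": {
      "command": "npx",
      "args": ["-y", "@memly/mcp-server"],
      "env": { "MEMLY_API_KEY": "memly_your_key_here" }
    }
  }
}
EOF
# require() parses .json files; a syntax error here fails loudly.
node -e 'const c = require("./.vscode/mcp.json"); console.log(c.servers.memly.command)'
# prints: npx
```

A malformed file fails silently in some IDEs, so a loud parse error here saves a confusing debugging session later.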
## Tools

| Tool | When the AI calls it |
|------|---------------------|
| `load_context` | Automatically at session start — loads memories from previous sessions |
| `search_memories` | When you ask about something specific not loaded by `load_context` |
| `remember` | When you make a decision, solve a problem, or say "remember this" |
| `list_projects` | When you ask to list or switch projects |
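Under the hood, each of these is a standard MCP `tools/call` request over JSON-RPC 2.0. As a sketch, a `remember` invocation sent by the client might look like this (the `content` argument name is illustrative, not the server's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "content": "We chose Postgres over SQLite for multi-writer support"
    }
  }
}
```

You never send this yourself — the IDE's MCP client constructs it whenever the model decides a tool should run.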
## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `MEMLY_API_KEY` | ✅ | — | Your Memly API key (`memly_...`) |
| `MEMLY_API_URL` | — | `https://api.memly.site` | Override for self-hosted deployments |
| `MEMLY_PORT` | — | `3800` | HTTP transport port (`--http` mode only) |
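For example, to run the server in HTTP mode on a non-default port against a self-hosted deployment (the API URL below is a placeholder, not a real endpoint):

```shell
MEMLY_API_KEY=memly_your_key_here \
MEMLY_API_URL=https://memly.internal.example \
MEMLY_PORT=3900 \
npx -y @memly/mcp-server --http
```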
## Self-Hosted

Run it alongside your Memly proxy on your VPS for zero added latency:

```shell
MEMLY_API_KEY=memly_... bun run packages/mcp-server/src/index.ts --http
```

## License
BSL-1.1 — Memly Community Edition
