@unlikeotherai/remember-ninja-cli v0.2.0: Standalone remember.ninja CLI for durable local memory
# remember

Persistent long-term memory for AI agents, developer tools, and automation — running locally on your machine.
remember is a CLI that gives AI agents and scripts durable memory between sessions. Store decisions, preferences, and lessons once. Retrieve them instantly later. Everything stays local in SQLite — no cloud dependency, no API keys, no latency.
## Install

```bash
npm install -g @unlikeotherai/remember-ninja-cli
```

## Quick start

```bash
# Initialize local memory in your project
remember init

# Store a memory
remember put "project.auth.strategy" "JWT with refresh tokens" \
  --context "Architecture review with the team"

# Retrieve it
remember get "project.auth.strategy"

# Search across all memories
remember search "auth"

# List everything
remember list --format table
```

## Why long-term memory matters
AI agents are stateless by default. Every session starts from zero. That means:
- The same architectural decisions get re-explained every conversation
- Debugging lessons vanish after the session that discovered them
- Preferences and conventions drift because nothing enforces consistency
- Agents can't build on previous work — they repeat it
remember fixes this by giving agents a persistent memory layer that survives between sessions. An agent that remembers your project uses JWT, that you prefer Tailwind over Bootstrap, that the database migration from last Tuesday broke the user table — that agent is dramatically more useful than one that asks you the same questions every time.
## Capturing reasoning, not just intent
Most documentation captures what was decided but forgets to capture why. The reasoning and thought process behind a decision is often more valuable than the decision itself — it tells you when the decision should change.
That's the core idea behind remember. It's a repository for the thinking behind your projects. You can instruct your agents to store not just your preferences but the reasoning behind your instructions, so they understand your intent more deeply over time. "Use Drizzle" is useful. "Use Drizzle because Prisma's cold start was adding 3s to Lambda invocations" is a memory an agent can actually reason about.
These thought processes evolve. Yesterday's reasoning might not hold tomorrow. Memories are not permanent mandates — they're living context that can be superseded or retracted as your understanding changes. That's why remember tracks full version history and lets you update or retract anything at any time.
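One way to picture this supersession model is as a chain of versioned records. The sketch below is purely illustrative Python (the `Assertion` class and its fields are invented for this example; remember's actual storage schema is internal and may differ):

```python
from dataclasses import dataclass
from typing import Optional

# Purely illustrative model of a versioned memory record.
@dataclass
class Assertion:
    keypath: str
    value: str
    context: str
    superseded_by: Optional["Assertion"] = None  # set when reasoning changes
    retracted: bool = False

    def current(self) -> "Assertion":
        """Walk the supersession chain to the latest version."""
        node = self
        while node.superseded_by is not None:
            node = node.superseded_by
        return node

v1 = Assertion("project.db.orm", "Prisma", "Familiar to the team")
v2 = Assertion("project.db.orm", "Drizzle",
               "Prisma cold starts added 3s to Lambda invocations")
v1.superseded_by = v2  # yesterday's reasoning no longer holds

print(v1.current().value)  # Drizzle
```

The old assertion is never deleted — it stays in the chain as history, which is what lets `remember history` show how a decision evolved.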
## For local development

Point your AI tools at remember and they gain institutional knowledge about your codebase. Store architecture decisions, environment quirks, debugging lessons, and team conventions. The memory lives in `.ninja/memory.db` inside your project — version-controlled or gitignored, your choice.
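If you choose to gitignore the memory, note that WAL mode keeps sidecar files next to the database. Assuming the default location, the entries would look like:

```gitignore
# Local memory database (keep out of version control)
.ninja/memory.db
.ninja/memory.db-wal
.ninja/memory.db-shm
```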
## For agents and automation
remember exposes an MCP (Model Context Protocol) server, so tools like Claude Code, Cursor, and Windsurf can read and write memories directly. Agents can store what they learn and retrieve it in future sessions without you copy-pasting context.
## Designing prompts that use memory
To get the most out of remember, teach your LLM how to use it. Add instructions like these to your system prompt, CLAUDE.md, or agent configuration:

```markdown
## Long-Term Memory

At the start of every session, load all memories:

    remember list --format table

During a session, search by keyword:

    remember search "auth"
    remember search "database"

After resolving anything tricky, store it:

    remember put "lessons.<topic>.<slug>" "<what you learned>" \
      --context "<why it matters>"
    remember put "preferences.<area>" "<value>" \
      --context "<where this came from>"
```

## Recommended namespaces
Organize memories with dot-notation keypaths. A good convention:
| Namespace | Purpose | Example |
|-----------|---------|---------|
| `preferences.*` | How the developer likes things done | `preferences.style.indent = "2 spaces"` |
| `decisions.*` | Architectural and design choices | `decisions.auth.provider = "Supabase"` |
| `lessons.*` | Hard-won debugging insights | `lessons.postgres.connection-pool = "max 20 in dev"` |
| `tools.*` | Tooling setup and quirks | `tools.docker.compose-profile = "dev"` |
| `project.*` | Project-specific facts | `project.deploy.target = "Fly.io"` |
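Because keypaths are plain dot-separated strings, any consumer of your memories can group them by namespace with ordinary string handling. A small illustrative sketch (the sample keypaths are hypothetical; this logic is not part of the CLI):

```python
from collections import defaultdict

# Hypothetical stored keypaths following the convention above
keypaths = [
    "preferences.style.indent",
    "decisions.auth.provider",
    "lessons.postgres.connection-pool",
    "decisions.db.orm",
]

by_namespace = defaultdict(list)
for kp in keypaths:
    namespace, _, rest = kp.partition(".")  # split on the first dot only
    by_namespace[namespace].append(rest)

print(dict(by_namespace))
# {'preferences': ['style.indent'], 'decisions': ['auth.provider', 'db.orm'],
#  'lessons': ['postgres.connection-pool']}
```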
## What to store vs what not to store

**Store:** stable facts, preferences, conventions, architecture decisions, recurring problem solutions, environment details.

**Don't store:** temporary task state, in-progress work, anything that changes every session, secrets or credentials.
## Common commands

### Storing and retrieving
```bash
# Store with context
remember put "project.db.orm" "Drizzle" --context "Chose over Prisma for performance"

# Get a specific memory
remember get "project.db.orm"

# Get raw value only (for scripts)
remember get "project.db.orm" --format raw

# Store complex values
remember put "project.endpoints" '["GET /users","POST /auth"]' --json
```

### Searching
```bash
# Keyword search (default)
remember search "database migration"

# Filter by keypath prefix
remember search "auth" --keypath-prefix "decisions"

# Exact match
remember search "Drizzle" --mode exact
```

### Updating and removing
```bash
# Update a memory (supersede the old value)
remember supersede <assertion-id> "New value" --context "Requirements changed"

# Retract a memory (mark as invalid)
remember retract <assertion-id> --reason "No longer accurate"

# View full version history
remember history "project.db.orm"
```

### Linking related memories
```bash
# Create a relation between two memories
remember link "project.auth.strategy" "project.db.sessions" \
  --relation depends_on --note "Sessions table backs JWT refresh"

# View all links
remember links "project.auth.strategy"
```

### Import, export, and stats
```bash
# Export all memories to a file
remember export --output memories.json

# Export as YAML
remember export --format yaml --output memories.yaml

# Import from file
remember import memories.json

# View storage stats
remember stats
```

## MCP server
Run remember as an MCP server so AI tools can use it directly:
```bash
# stdio transport (for Claude Code, Cursor, etc.)
remember mcp --transport stdio

# SSE transport (for network access)
remember mcp --transport sse --port 3777
```

The MCP server exposes 11 tools (`memory_put`, `memory_get`, `memory_search`, `memory_list`, `memory_history`, `memory_supersede`, `memory_retract`, `memory_conflicts`, `memory_export`, `memory_erase_scope`, `memory_erase_keypath`) and 2 resource templates for direct integration.
### Claude Code configuration

Add to your MCP settings:
```json
{
  "mcpServers": {
    "remember": {
      "command": "remember",
      "args": ["mcp", "--transport", "stdio"]
    }
  }
}
```

## Scopes
Memories are scoped for multi-context isolation. The default scope is `user:local`.
```bash
# Store per-project
remember put "config.port" "3000" --scope project:myapp

# Store per-team
remember put "conventions.naming" "kebab-case" --scope team:engineering

# List within a scope
remember list --scope project:myapp
```

## Configuration
```bash
# View current config
remember config

# Change defaults
remember config set "search.default_mode" "hybrid"
remember config set "search.default_limit" "20"

# Enable semantic search (requires an embedding provider)
remember config set "embedding.enabled" true
remember config set "embedding.provider" "openai"
remember config set "embedding.api_key_env" "OPENAI_API_KEY"
```

Config lives in `.ninja/config.json` alongside the database.
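The dotted keys above suggest a nested file layout. A hypothetical `config.json` might look like this (the actual shape of the file is not documented here and may differ):

```json
{
  "search": {
    "default_mode": "hybrid",
    "default_limit": 20
  },
  "embedding": {
    "enabled": true,
    "provider": "openai",
    "api_key_env": "OPENAI_API_KEY"
  }
}
```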
## Storage

All data is stored in `.ninja/memory.db` — a single SQLite file using WAL mode for performance. No external services required.
- Full-text search via SQLite FTS5
- Semantic search via optional embeddings
- Audit trail for every change (create, supersede, retract)
- Erasure receipts for compliance
- Atomic transactions for data integrity
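The same SQLite building blocks can be seen directly from Python's standard library. This is a generic illustration of WAL mode and FTS5 (the table name, columns, and sample rows are invented for the demo; it is NOT remember's actual internal schema):

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk database (WAL mode is not available for :memory:)
path = os.path.join(tempfile.mkdtemp(), "memory.db")
conn = sqlite3.connect(path)

# Enable write-ahead logging: readers are not blocked during writes
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

# An FTS5 virtual table indexes every column for full-text search
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(keypath, value, context)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?, ?)",
    [
        ("project.db.orm", "Drizzle", "Prisma cold starts were too slow"),
        ("lessons.postgres.pool", "max 20 in dev", "pool exhaustion in CI"),
    ],
)

# Full-text query across all columns, best matches first
rows = conn.execute(
    "SELECT keypath FROM memories WHERE memories MATCH 'cold' ORDER BY rank"
).fetchall()
print(rows)  # [('project.db.orm',)]
```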
## remember.ninja Cloud
This CLI is designed for local development and single-machine workflows. For shared, production-grade memory across teams and services, remember.ninja provides a low-latency cloud service with sync, collaboration, and access control.
## License
MIT License. See LICENSE for details.
Built by Ondrej Rafaj at Unlike Other AI
Made with love in Scotland
