# snipara-companion
Local helper CLI for Snipara agent workflows.
snipara-companion adds local diagnostics, hooks, folder onboarding, workflow
helpers, and command-line access around Snipara Hosted MCP. It complements the
hosted context and memory surface; it is not the primary runtime for agents.
In this repository, the source currently lives in `packages/cli`, and the installed executable is `rlm-hook`.
This package complements `snipara-mcp`. It does not replace it.
```mermaid
flowchart LR
  Project["Local project"] --> Companion["rlm-hook"]
  Companion --> Diagnostics["doctor, repair, sync, workflow helpers"]
  Companion --> Hosted["Snipara Hosted MCP / API"]
  Hosted --> Agents["Codex, Claude Code, Cursor, ChatGPT"]
```

## When To Use It
| If you need... | Install... |
| --------------------------------------------------------- | ------------------------ |
| MCP tools, OAuth login, project-scoped context and memory | snipara-mcp |
| One-command Hosted MCP + companion setup | create-snipara |
| Local helper CLI and hook-oriented automation | snipara-companion |
| OpenClaw-specific automation hooks | snipara-openclaw-hooks |
## Codex Note
For Codex, the primary integration remains Hosted MCP plus AGENTS.md.
- `create-snipara` installs `snipara-companion` by default for managed workflow commands. `snipara-companion` is still skippable with `--profile hosted-only` or `--skip-companion`.
- This package does not currently scaffold a Codex-specific preset.
- Use it when compaction-safe phase commits, local doctor checks, or shared helper workflows are useful.
## Installation
```shell
npm install -g snipara-companion
# or
pnpm add -g snipara-companion
# or
yarn global add snipara-companion
```

## Installed Command

```shell
rlm-hook
```

## New In 1.1.10
- Runtime guidance now points existing projects to `npx create-snipara repair --with-runtime`
- Managed workflow phases marked `needs_runtime` suggest Runtime installation only when needed
## New In 1.1.4
- `rlm-hook onboard-folder` previews and applies dashboardless business-folder imports from local or LLM-materialized sources
- `rlm-hook workflow start/status/resume/phase-start/phase-commit` keeps a visible LLM plan in `.snipara/workflow/current.json` and persists each phase through hosted memory so compacted agents can resume safely
- `rlm-hook final-commit` persists the final workflow outcome with `rlm_end_of_task_commit`
- `rlm-hook code symbol-card` and `rlm-hook code impact` expose paid Context safeguards directly from the companion CLI
## New In 1.1.2
- `rlm-hook doctor` and Runtime hints detect provider keys from local `.env` files without printing secret values
## New In 1.1.1
- `rlm-hook doctor` diagnoses Snipara auth, RLM Runtime, Runtime MCP, provider keys, and Docker
- `workflow run` prints contextual RLM Runtime hints for full/orchestrated/execution-heavy work
- `workflow run --no-runtime-hint` hides Runtime guidance for scripted terminal output
## New In 1.1.0
- `business-collections` commands for Team Business Context presets and reusable business docs
- `client-projects` commands for creating and listing project-scoped client context workspaces
- `upload --metadata`/`--metadata-file` plus convenience metadata flags for single-file business/client uploads
## New In 1.0.0
- Direct `rlm-hook code` access for `callers`, `imports`, `neighbors`, and `shortest-path`
- `workflow run --mode auto|full|orchestrate` for hosted-first workflow routing
- `rlm-hook shared-context` for project-linked standards and team guidance
- Automatic fallback to project token auth when a stale `SNIPARA_API_KEY` overrides a valid local login
## Supported Client Presets Today
The built-in init flow currently supports:
- `claude-code`
- `cursor`
- `windsurf`
## Quick Start
### Claude Code

```shell
rlm-hook init --with-hooks --client claude-code
```

### Cursor

```shell
rlm-hook init --with-hooks --client cursor
```

### Windsurf

```shell
rlm-hook init --with-hooks --client windsurf
```

## Commands
### rlm-hook init
Initialize local configuration and optionally generate client hook files.
```shell
rlm-hook init
```

Options:

- `--api-key <key>` - Skip prompt for API key
- `--project-id <id>` - Skip prompt for project ID
- `--client <client>` - `claude-code`, `cursor`, or `windsurf`
- `--with-hooks` - Install hooks automatically
- `--force` - Overwrite existing generated files
- `--dir <directory>` - Target directory for generated files
### rlm-hook config
Show the current configuration.
```shell
rlm-hook config
```

### rlm-hook pre-tool
Resolve a query from tool input and fetch relevant context.
```shell
rlm-hook pre-tool '{"path":"/src/api/auth.ts"}'
```

### rlm-hook post-tool
Track file access for the current session.
```shell
rlm-hook post-tool '{"file_path":"/src/api/auth.ts"}'
```

### rlm-hook session-end
Persist the current session.
```shell
rlm-hook session-end
```

### rlm-hook session status
Show current session information.
```shell
rlm-hook session status
```

### rlm-hook session reset
Start a new session ID locally.
```shell
rlm-hook session reset
```

### rlm-hook emit-event
Forward a canonical lifecycle event into Snipara's hosted automation API.
```shell
rlm-hook emit-event \
  --event-type tool_call \
  --payload '{"hook":"pre-tool","tool":"Read","query":"auth middleware"}'
```

Use this when a thin local adapter needs to report lifecycle activity without owning durable memory policy locally.
## Workflow Commands
These are thin local wrappers around hosted Snipara workflows:
```shell
npx -y snipara-companion@latest workflow run --mode auto --query "who imports src.rlm_engine"
npx -y snipara-companion@latest workflow run --mode full --include-session-context --query "plan the auth refactor"
npx -y snipara-companion@latest task-commit --summary "Shipped auth refactor" --files apps/web/src/lib/auth.ts
rlm-hook query --query "auth middleware"
rlm-hook query --query "who calls src.rlm_engine.RLMEngine._handle_context_query" --follow-recommendation
rlm-hook workflow start --goal "ship auth hardening" --plan-file ./plan.md
rlm-hook workflow status
rlm-hook workflow resume --include-session-context
rlm-hook workflow phase-start context
rlm-hook workflow run --mode auto --query "who imports src.rlm_engine"
rlm-hook workflow run --mode full --include-session-context --query "plan the auth refactor"
rlm-hook workflow run --mode full --no-runtime-hint --query "plan the auth refactor"
rlm-hook workflow phase-commit context --summary "Loaded context and mapped impacted files" --files src/auth.ts
rlm-hook workflow final-commit --summary "Shipped auth hardening and tests" --files src/auth.ts tests/auth.test.ts
rlm-hook final-commit --summary "Shipped auth hardening and tests" --files src/auth.ts tests/auth.test.ts
rlm-hook doctor
rlm-hook doctor --json
rlm-hook code callers --qualified-name src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code imports --file-path src/rlm_engine.py
rlm-hook code neighbors --qualified-name src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code shortest-path --from src.rlm_engine.RLMEngine._handle_multi_query.execute_single_query --to src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code symbol-card --qualified-name src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code impact --changed-files apps/web/src/lib/auth.ts tests/auth.test.ts --diff-summary "auth hardening"
rlm-hook plan --query "implement OAuth device flow"
rlm-hook upload --path docs/spec.md --file ./docs/spec.md
rlm-hook upload --path clients/acme/current.md --file ./current.md --asset-class BUSINESS_DOCUMENT --usage-mode current_truth --source-kind local_agent --client-id acme
rlm-hook upload --path diagrams/network.vsdx --file ./diagrams/network.vsdx --kind BINARY --format vsdx --reindex
rlm-hook upload --path docs/spec.md --file ./docs/spec.md --reindex
rlm-hook business-collections list
rlm-hook business-collections ensure --preset business_response_playbook
rlm-hook business-collections ensure --preset offer_templates
rlm-hook business-collections upload --preset offer_templates --title "Standard Offer Structure" --file ./offer-template.md
rlm-hook client-projects list
rlm-hook client-projects create --name "ACME Network Refresh" --slug acme-network-refresh
rlm-hook onboard-folder ./client-export --source-provider chatgpt_drive --write-manifest ./snipara-onboard.json
rlm-hook onboard-folder ./client-export --source-provider claude_notion --apply
rlm-hook sync-documents --dir ./docs --recursive --prefix docs --reindex
rlm-hook sync-documents --file ./snipara-documents.json --delete-missing --reindex
rlm-hook sync-documents --file ./snipara-business-context.json --dry-run --json
rlm-hook reindex --kind doc --mode incremental
rlm-hook reindex --job-id index_job_123
rlm-hook business-health --json
rlm-hook chunk get --chunk-id chunk_123
rlm-hook multi-query --queries "auth flow" "rate limiting"
rlm-hook orchestrate --query "understand the auth architecture"
rlm-hook load-document --path docs/auth.md
rlm-hook recall --query "What did we decide about auth retries?" --type decision
rlm-hook events recent --limit 20
rlm-hook session-bootstrap --max-critical-tokens 2000
rlm-hook session-bootstrap --include-session-context --max-context-tokens 1000
rlm-hook task-commit --summary "Shipped event ingestion and dashboard inspection" --files apps/web/src/components/automation/automation-settings-panel.tsx
```

The installed executable is `rlm-hook`. `npx snipara-companion@latest ...` is a convenience shortcut for the same package; there is no separate `snipara-workflow` binary.
`snipara-companion` does not execute RLM Runtime jobs itself. Runtime MCP `execute_python` can run without an extra LLM provider key because your AI client supplies the reasoning; standalone `rlm run` and `rlm agent` need an `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`.
For diagnostics and Runtime hints, companion also detects these keys in local `.env`, `.env.local`, `.env.development`, and `.env.development.local` files without printing their values.
By default these commands print human-readable terminal output. Add `--json` when you want the raw hosted response.
## Context vs Memory
- Use `rlm-hook query`, `shared-context`, and `load-document` for source truth.
- Use `rlm-hook recall`, `session-bootstrap`, and `task-commit` for durable memory.
- Do not use memory as a substitute for document retrieval.
- Do not upload specs or raw documents into memory.
Semantics:
- `rlm-hook query --follow-recommendation` = execute the hosted recommended structural tool instead of only printing it
- `rlm-hook workflow run --mode auto` = context query plus automatic `rlm_code_*` follow-up when Snipara recommends one
- `rlm-hook workflow run --mode full` = session bootstrap + context query + automatic structural follow-up + hosted plan
- `rlm-hook workflow run --mode orchestrate` = explicit hosted orchestrator flow for deeper multi-step exploration
- `rlm-hook workflow run` = suggests RLM Runtime when the query calls for validation, execution, data transforms, or heavier FULL/orchestrated work
- `rlm-hook workflow start --plan-file` = records the visible LLM plan locally so phase state survives agent compaction
- `rlm-hook workflow phase-start` = marks the current phase and prints the required Snipara context gate
- `rlm-hook workflow phase-commit` = calls hosted `rlm_end_of_task_commit` for that phase, updates local state, and advances to the next phase
- `rlm-hook workflow resume` = reloads local workflow state plus hosted durable/session memory after compaction or resume
- `rlm-hook final-commit` / `workflow final-commit` = final hosted commit for the managed workflow
- `rlm-hook code symbol-card` = direct paid Context `rlm_code_symbol_card` for an important symbol before editing
- `rlm-hook code impact` = direct paid Context `rlm_code_impact` for changed files, a file, or a symbol before risky changes
- `rlm-hook doctor` = local readiness check for Snipara auth, RLM Runtime, Runtime MCP wiring, provider keys, and Docker
- `rlm-hook upload --metadata`/`--metadata-file` = single-file upload with the same business/client metadata fields supported by bulk sync
- `rlm-hook business-collections` = manage reusable Team Business Context collections (Business Response Playbook, Business Library, Offer Templates, Company Presentations, Reference Diagrams)
- `rlm-hook client-projects` = create/list project-scoped client context workspaces before uploading current client files
- `rlm-hook onboard-folder` = business-first import for a local or LLM-materialized folder; it still detects code/mixed folders, but code repositories should use the GitHub OAuth/code onboarding path
- `rlm-hook sync-documents` = bulk `rlm_sync_documents` for text and supported binary parser documents from a JSON payload or directory
- `rlm-hook sync-documents --dry-run` = validate the local payload and business-context freshness metadata without uploading
- `rlm-hook business-health` = hosted `rlm_index_health`, with the `business_context` section surfaced for stale/reupload signals
- `rlm-hook reindex` = trigger or poll hosted `rlm_reindex`; use after uploads when immediate chunk availability matters
- `rlm-hook code *` = direct access to the code graph tools without routing through `rlm_context_query`
- `rlm-hook recall` = direct durable memory lookup for decisions, learnings, preferences, and carryover
- `rlm-hook session-bootstrap` = durable memory first, optional weak session carryover second
- `rlm-hook task-commit` = durable outcomes only
- `--max-daily-tokens` is still accepted as a compatibility alias for `--max-context-tokens`
## Compaction-Safe LLM Plan Workflow
Use this when the user's LLM has already produced a plan and Snipara should enforce the workflow around it.
- Save or paste the visible plan into a Markdown/Text/JSON file.
- Run `rlm-hook workflow start --goal "<goal>" --plan-file ./plan.md`.
- At each phase, run `rlm-hook workflow phase-start <phase_id>`, then `rlm-hook workflow run --mode full --include-session-context --query "<phase query>"`.
- Before risky code changes, run `rlm-hook code impact --changed-files <files...> --diff-summary "<change>"`. For an important symbol, run `rlm-hook code symbol-card --qualified-name <symbol>`.
- If sandboxed execution materially helps, use RLM Runtime MCP `execute_python` from the AI client.
- End every phase with `rlm-hook workflow phase-commit <phase_id> --summary "<outcome>" --files <files...>`.
- End the whole task with `rlm-hook final-commit --summary "<final outcome>" --files <files...>`.
After compaction or resume, run `rlm-hook workflow resume --include-session-context`. The local state file tells the agent the current phase, and hosted memory contains durable phase outcomes.
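The phase loop can be sketched as a single session. Goals, phase names, file paths, and query strings below are illustrative placeholders, not required values:

```shell
# Sketch of one compaction-safe workflow session; all values are illustrative.
rlm-hook workflow start --goal "ship auth hardening" --plan-file ./plan.md
rlm-hook workflow phase-start context
rlm-hook workflow run --mode full --include-session-context --query "map the auth surface"
rlm-hook code impact --changed-files src/auth.ts --diff-summary "auth hardening"
rlm-hook workflow phase-commit context --summary "Mapped impacted files" --files src/auth.ts
# ...repeat phase-start / run / phase-commit for each remaining phase...
rlm-hook final-commit --summary "Shipped auth hardening" --files src/auth.ts
# If the agent is compacted mid-task, recover state with:
rlm-hook workflow resume --include-session-context
```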
`sync-documents --file` accepts either a JSON array or an object with a `documents` array. Object payloads can also include manifest-level metadata defaults and workflow defaults:
```json
{
  "dryRun": true,
  "reindex": true,
  "metadata": {
    "assetClass": "BUSINESS_DOCUMENT",
    "usageMode": "current_truth",
    "sourceKind": "google_drive",
    "freshnessPolicy": {
      "maxAgeDays": 30,
      "requireSourceModifiedAt": true
    }
  },
  "documents": [
    {
      "path": "docs/spec.md",
      "content": "# Spec\n\n...",
      "kind": "DOC",
      "format": "md",
      "metadata": {
        "clientId": "xyz",
        "sourceModifiedAt": "2026-04-25T10:20:00Z",
        "sourceSnapshotAt": "2026-04-25T10:30:00Z",
        "sourceContentHash": "sha256:..."
      }
    },
    {
      "path": "diagrams/network.vsdx",
      "content": "base64:<payload>",
      "kind": "BINARY",
      "format": "vsdx",
      "metadata": {
        "assetClass": "DIAGRAM",
        "usageMode": "historical_reference",
        "sourceKind": "local_agent"
      }
    }
  ]
}
```

When using `sync-documents --dir`, companion collects `.md`, `.markdown`, `.mdx`, `.txt`, `.rst`, `.adoc`, `.pdf`, `.docx`, `.pptx`, `.svg`, and `.vsdx`. Binary parser files are encoded as `base64:<payload>` and sent with `kind=BINARY` plus the inferred format.
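As a minimal sketch of that encoding, the helper below builds a `base64:<payload>` document entry for a binary file. The function name is hypothetical; only the entry shape follows the manifest format shown here:

```python
import base64
from pathlib import Path

def binary_document_entry(path: str, fmt: str) -> dict:
    """Build a sync-documents entry for a binary parser file (illustrative sketch)."""
    payload = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return {
        "path": path,
        "content": f"base64:{payload}",  # "base64:" prefix per the manifest format
        "kind": "BINARY",
        "format": fmt,
    }
```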
Use `usageMode=current_truth` for the active client/project source of truth, `usageMode=historical_reference` for previous client deliverables that should serve as a case library, and `usageMode=template` or `global_knowledge` for reusable business patterns. Snipara uses this metadata in index health to distinguish reindex, reupload, metadata review, and quality review actions.
`onboard-folder` is the MVP path for dashboardless business imports. Let Claude, ChatGPT, Codex, or another agent use its own Drive, Gmail, Notion, or local-file access to materialize a folder, then run:
```shell
rlm-hook onboard-folder ./client-export --source-provider chatgpt_drive --write-manifest ./snipara-onboard.json
rlm-hook onboard-folder ./client-export --source-provider chatgpt_drive --apply
```

The command scans recursively by default, skips build/cache directories,
classifies the folder as business_context, code_project, mixed, or
unknown, and adds provenance metadata such as sourceProvider,
sourceSnapshotAt, sourcePath, and sourceContentHash. It never infers a
remote URI; pass --source-uri when the source system gives you a safe
identifier. This is import-on-demand, not continuous sync. Unsupported
business-looking files such as spreadsheets are reported in the preview instead
of silently uploaded. If the folder is detected as a code repository, the
command warns instead of pretending to handle source-code onboarding; use the
GitHub OAuth/code onboarding path for that.
Dry-runs are local only: they validate payload shape, known metadata fields,
and freshness signals such as expired snapshots or changed source hashes. They
do not call hosted MCP and therefore cannot know remote created, updated,
or unchanged counts until a real sync runs.
For release-hardening and local packaging checks:
```shell
pnpm --filter snipara-companion pack:smoke
pnpm --filter create-snipara pack:smoke
```

To test a packed tarball manually, use `npm exec --package`:

```shell
npm pack
npm exec --package ./snipara-companion-1.1.10.tgz rlm-hook -- --help
```

Do not use `npx /path/to/snipara-companion-*.tgz`. npm will try to execute the tarball itself instead of resolving the packaged `rlm-hook` binary.
Design rule:
- local CLI = workflow facade
- hosted Snipara = source of truth for context, chunks, plans, memory, and review policy
- use `companion` for daily coding ergonomics and auto-routing
- use `orchestrate` only when the task is genuinely multi-step and exploration-heavy
- use `snipara-orchestrator` only for proof-based validation, drift detection, and production gates
### rlm-hook cache clear
Clear the local query cache.
```shell
rlm-hook cache clear
```

## Positioning
- Use Hosted MCP as the main Snipara agent surface.
- Use
create-sniparaas the normal setup path; it installssnipara-companionby default. - Use
hosted-onlywhen a user cannot install local helper tooling.
## Related Packages
- `snipara-mcp` - core MCP client
- `create-snipara` - onboarding for Hosted MCP + companion workflows, with optional Runtime
- `snipara-openclaw-hooks` - OpenClaw-specific automation hooks
## License
MIT
