
snipara-companion

v1.1.10

Published

Snipara local companion CLI for hosted context, hooks, and automation workflows

Downloads

1,920

Readme

snipara-companion

Local helper CLI for Snipara agent workflows.

snipara-companion adds local diagnostics, hooks, folder onboarding, workflow helpers, and command-line access around Snipara Hosted MCP. It complements the hosted context and memory surface; it is not the primary runtime for agents.

In this repository, the source currently lives in packages/cli, and the installed executable is rlm-hook.

This package complements snipara-mcp. It does not replace it.

flowchart LR
    Project["Local project"] --> Companion["rlm-hook"]
    Companion --> Diagnostics["doctor, repair, sync, workflow helpers"]
    Companion --> Hosted["Snipara Hosted MCP / API"]
    Hosted --> Agents["Codex, Claude Code, Cursor, ChatGPT"]

When To Use It

| If you need...                                            | Install...               |
| --------------------------------------------------------- | ------------------------ |
| MCP tools, OAuth login, project-scoped context and memory | snipara-mcp              |
| One-command Hosted MCP + companion setup                  | create-snipara           |
| Local helper CLI and hook-oriented automation             | snipara-companion        |
| OpenClaw-specific automation hooks                        | snipara-openclaw-hooks   |

Codex Note

For Codex, the primary integration remains Hosted MCP plus AGENTS.md.

  • create-snipara installs snipara-companion by default for managed workflow commands.
  • snipara-companion is still skippable with --profile hosted-only or --skip-companion.
  • This package does not currently scaffold a Codex-specific preset.
  • Use it when compaction-safe phase commits, local doctor checks, or shared helper workflows are useful.

Installation

npm install -g snipara-companion
# or
pnpm add -g snipara-companion
# or
yarn global add snipara-companion

Installed Command

rlm-hook

New In 1.1.10

  • Runtime guidance now points existing projects to npx create-snipara repair --with-runtime
  • Managed workflow phases marked needs_runtime suggest Runtime installation only when needed

New In 1.1.4

  • rlm-hook onboard-folder previews and applies dashboardless business-folder imports from local or LLM-materialized sources
  • rlm-hook workflow start/status/resume/phase-start/phase-commit keeps a visible LLM plan in .snipara/workflow/current.json and persists each phase through hosted memory so compacted agents can resume safely
  • rlm-hook final-commit persists the final workflow outcome with rlm_end_of_task_commit
  • rlm-hook code symbol-card and rlm-hook code impact expose paid Context safeguards directly from the companion CLI

New In 1.1.2

  • rlm-hook doctor and Runtime hints detect provider keys from local .env files without printing secret values

New In 1.1.1

  • rlm-hook doctor diagnoses Snipara auth, RLM Runtime, Runtime MCP, provider keys, and Docker
  • workflow run prints contextual RLM Runtime hints for full/orchestrated/execution-heavy work
  • workflow run --no-runtime-hint hides Runtime guidance for scripted terminal output

New In 1.1.0

  • business-collections commands for Team Business Context presets and reusable business docs
  • client-projects commands for creating and listing project-scoped client context workspaces
  • upload --metadata/--metadata-file plus convenience metadata flags for single-file business/client uploads

New In 1.0.0

  • direct rlm-hook code access for callers, imports, neighbors, and shortest-path
  • workflow run --mode auto|full|orchestrate for hosted-first workflow routing
  • rlm-hook shared-context for project-linked standards and team guidance
  • automatic fallback to project token auth when a stale SNIPARA_API_KEY overrides a valid local login

Supported Client Presets Today

The built-in init flow currently supports:

  • claude-code
  • cursor
  • windsurf

Quick Start

Claude Code

rlm-hook init --with-hooks --client claude-code

Cursor

rlm-hook init --with-hooks --client cursor

Windsurf

rlm-hook init --with-hooks --client windsurf

Commands

rlm-hook init

Initialize local configuration and optionally generate client hook files.

rlm-hook init

Options:

  • --api-key <key> - Skip prompt for API key
  • --project-id <id> - Skip prompt for project ID
  • --client <client> - claude-code, cursor, or windsurf
  • --with-hooks - Install hooks automatically
  • --force - Overwrite existing generated files
  • --dir <directory> - Target directory for generated files

rlm-hook config

Show the current configuration.

rlm-hook config

rlm-hook pre-tool

Resolve a query from tool input and fetch relevant context.

rlm-hook pre-tool '{"path":"/src/api/auth.ts"}'

rlm-hook post-tool

Track file access for the current session.

rlm-hook post-tool '{"file_path":"/src/api/auth.ts"}'

rlm-hook session-end

Persist the current session.

rlm-hook session-end

rlm-hook session status

Show current session information.

rlm-hook session status

rlm-hook session reset

Start a new session ID locally.

rlm-hook session reset

rlm-hook emit-event

Forward a canonical lifecycle event into Snipara's hosted automation API.

rlm-hook emit-event \
  --event-type tool_call \
  --payload '{"hook":"pre-tool","tool":"Read","query":"auth middleware"}'

Use this when a thin local adapter needs to report lifecycle activity without owning durable memory policy locally.

Workflow Commands

These are thin local wrappers around hosted Snipara workflows:

npx -y snipara-companion@latest workflow run --mode auto --query "who imports src.rlm_engine"
npx -y snipara-companion@latest workflow run --mode full --include-session-context --query "plan the auth refactor"
npx -y snipara-companion@latest task-commit --summary "Shipped auth refactor" --files apps/web/src/lib/auth.ts

rlm-hook query --query "auth middleware"
rlm-hook query --query "who calls src.rlm_engine.RLMEngine._handle_context_query" --follow-recommendation
rlm-hook workflow start --goal "ship auth hardening" --plan-file ./plan.md
rlm-hook workflow status
rlm-hook workflow resume --include-session-context
rlm-hook workflow phase-start context
rlm-hook workflow run --mode auto --query "who imports src.rlm_engine"
rlm-hook workflow run --mode full --include-session-context --query "plan the auth refactor"
rlm-hook workflow run --mode full --no-runtime-hint --query "plan the auth refactor"
rlm-hook workflow phase-commit context --summary "Loaded context and mapped impacted files" --files src/auth.ts
rlm-hook workflow final-commit --summary "Shipped auth hardening and tests" --files src/auth.ts tests/auth.test.ts
rlm-hook final-commit --summary "Shipped auth hardening and tests" --files src/auth.ts tests/auth.test.ts
rlm-hook doctor
rlm-hook doctor --json
rlm-hook code callers --qualified-name src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code imports --file-path src/rlm_engine.py
rlm-hook code neighbors --qualified-name src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code shortest-path --from src.rlm_engine.RLMEngine._handle_multi_query.execute_single_query --to src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code symbol-card --qualified-name src.rlm_engine.RLMEngine._handle_context_query
rlm-hook code impact --changed-files apps/web/src/lib/auth.ts tests/auth.test.ts --diff-summary "auth hardening"
rlm-hook plan --query "implement OAuth device flow"
rlm-hook upload --path docs/spec.md --file ./docs/spec.md
rlm-hook upload --path clients/acme/current.md --file ./current.md --asset-class BUSINESS_DOCUMENT --usage-mode current_truth --source-kind local_agent --client-id acme
rlm-hook upload --path diagrams/network.vsdx --file ./diagrams/network.vsdx --kind BINARY --format vsdx --reindex
rlm-hook upload --path docs/spec.md --file ./docs/spec.md --reindex
rlm-hook business-collections list
rlm-hook business-collections ensure --preset business_response_playbook
rlm-hook business-collections ensure --preset offer_templates
rlm-hook business-collections upload --preset offer_templates --title "Standard Offer Structure" --file ./offer-template.md
rlm-hook client-projects list
rlm-hook client-projects create --name "ACME Network Refresh" --slug acme-network-refresh
rlm-hook onboard-folder ./client-export --source-provider chatgpt_drive --write-manifest ./snipara-onboard.json
rlm-hook onboard-folder ./client-export --source-provider claude_notion --apply
rlm-hook sync-documents --dir ./docs --recursive --prefix docs --reindex
rlm-hook sync-documents --file ./snipara-documents.json --delete-missing --reindex
rlm-hook sync-documents --file ./snipara-business-context.json --dry-run --json
rlm-hook reindex --kind doc --mode incremental
rlm-hook reindex --job-id index_job_123
rlm-hook business-health --json
rlm-hook chunk get --chunk-id chunk_123
rlm-hook multi-query --queries "auth flow" "rate limiting"
rlm-hook orchestrate --query "understand the auth architecture"
rlm-hook load-document --path docs/auth.md
rlm-hook recall --query "What did we decide about auth retries?" --type decision
rlm-hook events recent --limit 20
rlm-hook session-bootstrap --max-critical-tokens 2000
rlm-hook session-bootstrap --include-session-context --max-context-tokens 1000
rlm-hook task-commit --summary "Shipped event ingestion and dashboard inspection" --files apps/web/src/components/automation/automation-settings-panel.tsx

The installed executable is rlm-hook. npx snipara-companion@latest ... is a convenience shortcut for the same package; there is no separate snipara-workflow binary.

snipara-companion does not execute RLM Runtime jobs itself. Runtime MCP execute_python can run without an extra LLM provider key because your AI client supplies the reasoning; standalone rlm run and rlm agent need an OPENAI_API_KEY or ANTHROPIC_API_KEY.

For diagnostics and Runtime hints, companion also detects these keys in local .env, .env.local, .env.development, and .env.development.local files without printing their values.

By default these commands print human-readable terminal output. Add --json when you want the raw hosted response.

Context vs Memory

  • Use rlm-hook query, shared-context, and load-document for source truth.
  • Use rlm-hook recall, session-bootstrap, and task-commit for durable memory.
  • Do not use memory as a substitute for document retrieval.
  • Do not upload specs or raw documents into memory.

Semantics:

  • rlm-hook query --follow-recommendation = execute the hosted recommended structural tool instead of only printing it
  • rlm-hook workflow run --mode auto = context query plus automatic rlm_code_* follow-up when Snipara recommends one
  • rlm-hook workflow run --mode full = session bootstrap + context query + automatic structural follow-up + hosted plan
  • rlm-hook workflow run --mode orchestrate = explicit hosted orchestrator flow for deeper multi-step exploration
  • rlm-hook workflow run = suggests RLM Runtime when the query calls for validation, execution, data transforms, or heavier FULL/orchestrated work
  • rlm-hook workflow start --plan-file = records the visible LLM plan locally so phase state survives agent compaction
  • rlm-hook workflow phase-start = marks the current phase and prints the required Snipara context gate
  • rlm-hook workflow phase-commit = calls hosted rlm_end_of_task_commit for that phase, updates local state, and advances the next phase
  • rlm-hook workflow resume = reloads local workflow state plus hosted durable/session memory after compaction or resume
  • rlm-hook final-commit / workflow final-commit = final hosted commit for the managed workflow
  • rlm-hook code symbol-card = direct paid Context rlm_code_symbol_card for an important symbol before editing
  • rlm-hook code impact = direct paid Context rlm_code_impact for changed files, a file, or a symbol before risky changes
  • rlm-hook doctor = local readiness check for Snipara auth, RLM Runtime, Runtime MCP wiring, provider keys, and Docker
  • rlm-hook upload --metadata/--metadata-file = single-file upload with the same business/client metadata fields supported by bulk sync
  • rlm-hook business-collections = manage reusable Team Business Context collections (Business Response Playbook, Business Library, Offer Templates, Company Presentations, Reference Diagrams)
  • rlm-hook client-projects = create/list project-scoped client context workspaces before uploading current client files
  • rlm-hook onboard-folder = business-first import for a local or LLM-materialized folder; it still detects code/mixed folders, but code repositories should use the GitHub OAuth/code onboarding path
  • rlm-hook sync-documents = bulk rlm_sync_documents for text and supported binary parser documents from a JSON payload or directory
  • rlm-hook sync-documents --dry-run = validate the local payload and business-context freshness metadata without uploading
  • rlm-hook business-health = hosted rlm_index_health, with the business_context section surfaced for stale/reupload signals
  • rlm-hook reindex = trigger or poll hosted rlm_reindex; use after uploads when immediate chunk availability matters
  • rlm-hook code * = direct access to the code graph tools without routing through rlm_context_query
  • rlm-hook recall = direct durable memory lookup for decisions, learnings, preferences, and carryover
  • rlm-hook session-bootstrap = durable memory first, optional weak session carryover second
  • rlm-hook task-commit = durable outcomes only
  • --max-daily-tokens is still accepted as a compatibility alias for --max-context-tokens

Compaction-Safe LLM Plan Workflow

Use this when the user's LLM has already produced a plan and Snipara should enforce the workflow around it.

  1. Save or paste the visible plan into a Markdown/Text/JSON file.
  2. Run rlm-hook workflow start --goal "<goal>" --plan-file ./plan.md.
  3. At each phase, run rlm-hook workflow phase-start <phase_id>, then rlm-hook workflow run --mode full --include-session-context --query "<phase query>".
  4. Before risky code changes, run rlm-hook code impact --changed-files <files...> --diff-summary "<change>". For an important symbol, run rlm-hook code symbol-card --qualified-name <symbol>.
  5. If sandboxed execution materially helps, use RLM Runtime MCP execute_python from the AI client.
  6. End every phase with rlm-hook workflow phase-commit <phase_id> --summary "<outcome>" --files <files...>.
  7. End the whole task with rlm-hook final-commit --summary "<final outcome>" --files <files...>.

After compaction or resume, run rlm-hook workflow resume --include-session-context. The local state file tells the agent the current phase, and hosted memory contains durable phase outcomes.


sync-documents --file accepts either a JSON array or an object with a documents array. Object payloads can also include manifest-level metadata defaults and workflow defaults:

{
  "dryRun": true,
  "reindex": true,
  "metadata": {
    "assetClass": "BUSINESS_DOCUMENT",
    "usageMode": "current_truth",
    "sourceKind": "google_drive",
    "freshnessPolicy": {
      "maxAgeDays": 30,
      "requireSourceModifiedAt": true
    }
  },
  "documents": [
    {
      "path": "docs/spec.md",
      "content": "# Spec\n\n...",
      "kind": "DOC",
      "format": "md",
      "metadata": {
        "clientId": "xyz",
        "sourceModifiedAt": "2026-04-25T10:20:00Z",
        "sourceSnapshotAt": "2026-04-25T10:30:00Z",
        "sourceContentHash": "sha256:..."
      }
    },
    {
      "path": "diagrams/network.vsdx",
      "content": "base64:<payload>",
      "kind": "BINARY",
      "format": "vsdx",
      "metadata": {
        "assetClass": "DIAGRAM",
        "usageMode": "historical_reference",
        "sourceKind": "local_agent"
      }
    }
  ]
}

When using sync-documents --dir, companion collects .md, .markdown, .mdx, .txt, .rst, .adoc, .pdf, .docx, .pptx, .svg, and .vsdx. Binary parser files are encoded as base64:<payload> and sent with kind=BINARY plus the inferred format.
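The directory collection above can be sketched in Python. The helper name and the split between text and binary-parser extensions are illustrative assumptions (for example, .svg is treated as text here), but the extension list, the base64:&lt;payload&gt; encoding, and kind=BINARY follow the behavior described in this section.

```python
import base64
from pathlib import Path

# Assumed split; the README lists the extensions but not which are binary parsers.
TEXT_EXTS = {".md", ".markdown", ".mdx", ".txt", ".rst", ".adoc", ".svg"}
BINARY_EXTS = {".pdf", ".docx", ".pptx", ".vsdx"}


def collect_documents(root: str, prefix: str = "") -> list[dict]:
    """Illustrative sketch of what `sync-documents --dir` sends per file."""
    docs = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        ext = path.suffix.lower()
        rel = path.relative_to(root)
        rel_path = str(Path(prefix) / rel) if prefix else str(rel)
        if ext in TEXT_EXTS:
            docs.append({
                "path": rel_path,
                "content": path.read_text(encoding="utf-8"),
                "kind": "DOC",
                "format": ext.lstrip("."),
            })
        elif ext in BINARY_EXTS:
            # Binary parser files go up base64-encoded with kind=BINARY.
            payload = base64.b64encode(path.read_bytes()).decode("ascii")
            docs.append({
                "path": rel_path,
                "content": f"base64:{payload}",
                "kind": "BINARY",
                "format": ext.lstrip("."),
            })
    return docs
```

The resulting list is the documents array shape shown in the JSON payload above.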

Use usageMode=current_truth for the active client/project source of truth, usageMode=historical_reference for previous client deliverables that should serve as a case library, and usageMode=template or global_knowledge for reusable business patterns. Snipara uses this metadata in index health to distinguish reindex, reupload, metadata review, and quality review actions.

onboard-folder is the MVP path for dashboardless business imports. Let Claude, ChatGPT, Codex, or another agent use its own Drive, Gmail, Notion, or local-file access to materialize a folder, then run:

rlm-hook onboard-folder ./client-export --source-provider chatgpt_drive --write-manifest ./snipara-onboard.json
rlm-hook onboard-folder ./client-export --source-provider chatgpt_drive --apply

The command scans recursively by default, skips build/cache directories, classifies the folder as business_context, code_project, mixed, or unknown, and adds provenance metadata such as sourceProvider, sourceSnapshotAt, sourcePath, and sourceContentHash. It never infers a remote URI; pass --source-uri when the source system gives you a safe identifier. This is import-on-demand, not continuous sync. Unsupported business-looking files such as spreadsheets are reported in the preview instead of silently uploaded. If the folder is detected as a code repository, the command warns instead of pretending to handle source-code onboarding; use the GitHub OAuth/code onboarding path for that.

Dry-runs are local only: they validate payload shape, known metadata fields, and freshness signals such as expired snapshots or changed source hashes. They do not call hosted MCP and therefore cannot know remote created, updated, or unchanged counts until a real sync runs.
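The changed-source-hash check in a dry-run can be illustrated as follows. The function name is an assumption; the sha256:&lt;hex&gt; hash format comes from the sourceContentHash field shown in the payload example above.

```python
import hashlib


def content_hash_matches(content: str, source_content_hash: str) -> bool:
    """Compare local content against a manifest sourceContentHash like 'sha256:<hex>'.

    A mismatch is the kind of freshness signal a dry-run can report locally,
    without calling hosted MCP.
    """
    algo, _, expected = source_content_hash.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported hash algorithm: {algo}")
    actual = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return actual == expected
```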

For release-hardening and local packaging checks:

pnpm --filter snipara-companion pack:smoke
pnpm --filter create-snipara pack:smoke

To test a packed tarball manually, use npm exec --package:

npm pack
npm exec --package ./snipara-companion-1.1.10.tgz rlm-hook -- --help

Do not use npx /path/to/snipara-companion-*.tgz. npm will try to execute the tarball itself instead of resolving the packaged rlm-hook binary.

Design rule:

  • local CLI = workflow facade
  • hosted Snipara = source of truth for context, chunks, plans, memory, and review policy
  • use companion for daily coding ergonomics and auto-routing
  • use orchestrate only when the task is genuinely multi-step and exploration-heavy
  • use snipara-orchestrator only for proof-based validation, drift detection, and production gates

rlm-hook cache clear

Clear the local query cache.

rlm-hook cache clear

Positioning

  • Use Hosted MCP as the main Snipara agent surface.
  • Use create-snipara as the normal setup path; it installs snipara-companion by default.
  • Use hosted-only when a user cannot install local helper tooling.

Related Packages

  • snipara-mcp - core MCP client
  • create-snipara - onboarding for Hosted MCP + companion workflows, with optional Runtime
  • snipara-openclaw-hooks - OpenClaw-specific automation hooks

License

MIT