n8n-nodes-memori-community


An n8n community node that exposes a Memori Chat Model sub-node for the AI Agent.

Memori is an open-source memory layer for LLMs. When fronted as an OpenAI-compatible proxy it partitions knowledge per entity (end-user), process (application) and session — but only if the client attaches those identifiers on every request. n8n's built-in OpenAI Chat Model has no UI for that, so this package ships a drop-in replacement that does.

What it does

Behaves like the built-in OpenAI Chat Model sub-node, plus three required fields — Entity ID, Process ID, Session ID — which are injected into every outgoing chat completion request as a top-level memori_attribution object.

Request shape

Outgoing requests carry the attribution in both the body (as memori_attribution) and as HTTP headers (X-Memori-*), so a self-hosted Memori build can read whichever channel it prefers:

POST /v1/chat/completions HTTP/1.1
Authorization: Bearer <key>
Content-Type: application/json
X-Memori-Entity-Id: <userId>
X-Memori-Process-Id: my_n8n_agent
X-Memori-Session-Id: <sessionId>

{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user",   "content": "..." }
  ],
  "temperature": 0.7,
  "stream": false,
  "memori_attribution": {
    "entity_id":  "<userId>",
    "process_id": "my_n8n_agent",
    "session_id": "<sessionId>"
  }
}

Your Memori proxy reads attribution from whichever channel it prefers, records/retrieves memory for that partition, and forwards the (possibly memory-augmented) request to the upstream model.
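Concretely, the proxy first looks for the top-level body object and falls back to the `X-Memori-*` headers. A minimal TypeScript sketch of that resolution step (an illustrative helper, not part of this package; your proxy's actual shape is up to you):

```typescript
// Hypothetical proxy-side helper: resolve the attribution triple from a
// parsed JSON body, falling back to the X-Memori-* headers.
interface Attribution {
  entity_id: string;
  process_id: string;
  session_id: string;
}

function resolveAttribution(
  body: { memori_attribution?: Partial<Attribution> },
  headers: Record<string, string>, // header names assumed lowercased
): Attribution | null {
  const fromBody = body.memori_attribution;
  const entity_id = fromBody?.entity_id ?? headers['x-memori-entity-id'];
  const process_id = fromBody?.process_id ?? headers['x-memori-process-id'];
  const session_id = fromBody?.session_id ?? headers['x-memori-session-id'];
  // All three identifiers are needed to address a memory partition.
  if (!entity_id || !process_id || !session_id) return null;
  return { entity_id, process_id, session_id };
}
```

Body values win over headers here; since this node always sends both channels with identical values, the order only matters for other clients.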

Prerequisites

You need a self-hosted Memori instance with the OpenAI-compatibility layer enabled. This node is only the client side — it sends memori_attribution-stamped requests to an OpenAI-compatible endpoint, but the endpoint itself is yours to run.

Your Memori build must expose at least:

  • POST /v1/chat/completions — OpenAI-compatible chat completions (with Authorization: Bearer <key> auth, and acceptance of the top-level memori_attribution object).
  • GET /v1/models — the model-list endpoint used to populate the Model dropdown at edit time.

For the OpenAPI schema Memori actually serves, hit /docs on your running instance (e.g. http://<your-memori-host>:8012/docs).

Not a target: hosted Memori Cloud. The public Memori product at memorilabs.ai is an SDK-wrapper architecture (Memori().llm.register(client)), plus an MCP server at https://api.memorilabs.ai/mcp/ that uses X-Memori-API-Key auth. It does not expose the OpenAI-compatible chat-completions proxy this node points at. MemoriLabs is building the official n8n MCP integration for that path.

Self-hosting a Memori proxy

This node speaks OpenAI-compatible HTTP, but the upstream MemoriLabs/Memori project is a Python SDK, not a server. To bridge them you run a small FastAPI app that wraps the SDK. A starter gist is available with five files (main.py, requirements.txt, Dockerfile, docker-compose.yml, .env.example):

👉 gist.github.com/mheland/550e5263cd33558ff1acdadf54870abc

  1. Clone the gist into an empty directory.
  2. Provision a Postgres 14+ database; put its URL into .env as MEMORI_POSTGRES_URL. Memori auto-creates its schema on first run.
  3. Set the rest of .env: MEMORI_PROXY_API_KEY (any long random string — clients send this as Authorization: Bearer …), OPENAI_API_KEY, and UPSTREAM_BASE_URL if you're routing through something other than OpenAI direct.
  4. docker compose up -d --build.
  5. Verify: curl -s http://localhost:8012/health should return {"status":"healthy"}.
  6. In n8n, install n8n-nodes-memori-community, create a Memori API credential with Base URL http://<host>:8012/v1 and the proxy key, and add a Memori Chat Model to your Agent.
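Steps 2 and 3 above amount to an .env along these lines (variable names from the gist's .env.example; all values are placeholders to substitute):

```
# Placeholder values — replace with your own
MEMORI_POSTGRES_URL=postgresql://memori:change-me@db:5432/memori
MEMORI_PROXY_API_KEY=<any long random string; clients send it as Authorization: Bearer>
OPENAI_API_KEY=sk-...
# Optional: only if routing through something other than OpenAI direct
UPSTREAM_BASE_URL=https://api.openai.com/v1
```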

The starter is intentionally minimal — within-session memory only. Cross-session fact recall, post-stream augmentation, and gpt-5 / o-series param compat are layered on top in production deployments.

Install

In self-hosted n8n: Settings → Community Nodes → Install → enter n8n-nodes-memori-community → Install.

Note: This package depends on @langchain/openai, which makes it ineligible for n8n Cloud's community-node verification. It targets self-hosted n8n.

Configure

  1. Create a Memori API credential (installed by this package). Fill:
    • API Key — whatever your Memori instance expects on Authorization: Bearer <key>
    • Base URL — must point at the OpenAI-compatible root on your Memori instance and include the version segment, e.g. https://<your-memori-host>/v1.
  2. Add an AI Agent node. Click the language-model socket and pick Memori Chat Model.
  3. Fill the fields:

| Field      | Example                                         | Notes                                                  |
|------------|-------------------------------------------------|--------------------------------------------------------|
| Model      | pick from dropdown                              | Loaded live from {baseUrl}/models. Switch to ID mode for aliases not in the list. |
| Entity ID  | ={{$json.userId}}                               | Usually the end-user. Expressions supported.           |
| Process ID | my_n8n_agent                                    | Logical app/process name. Static per workflow is fine. |
| Session ID | ={{ $json.sessionId ?? $json.userId + '_web' }} | Conversation identifier. Expressions supported.        |

Optional fields under Options: Base URL override, Sampling Temperature, Maximum Number of Tokens, Timeout, Max Retries.
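The Session ID example falls back to a per-user default when the trigger payload carries no explicit session. The same logic as the n8n expression, written out in plain TypeScript (field names sessionId/userId come from the example payload, not a fixed contract):

```typescript
// Mirrors the expression ={{ $json.sessionId ?? $json.userId + '_web' }}:
// use the payload's session if present, otherwise derive one per user.
function deriveSessionId(json: { sessionId?: string; userId: string }): string {
  return json.sessionId ?? `${json.userId}_web`;
}
```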

Streaming

The node doesn't hard-code stream. Whether stream: true is sent to Memori depends on how the AI Agent invokes the model:

  • Chat Trigger with Response Mode = "Streaming" → the AI Agent calls model.stream(...), OpenAI SDK flips to stream: true, Memori streams SSE back, n8n forwards tokens to the client. ✅
  • Webhook → AI Agent → Respond to Webhook (default) → non-streaming; agent collects the full completion and returns it in one shot.

How it works

The three attribution values ride on two channels so a self-hosted Memori build can read whichever it prefers:

  • Body: modelKwargs.memori_attribution on LangChain.js ChatOpenAI serializes as a top-level key in the JSON body.
  • Headers: configuration.defaultHeaders adds X-Memori-Entity-Id / X-Memori-Process-Id / X-Memori-Session-Id to every request.
new ChatOpenAI({
  apiKey, model,
  configuration: {
    baseURL,
    // Channel 1: attribution as HTTP headers on every outgoing request
    defaultHeaders: {
      'X-Memori-Entity-Id': entityId,
      'X-Memori-Process-Id': processId,
      'X-Memori-Session-Id': sessionId,
    },
  },
  // Channel 2: attribution as a top-level key in the JSON request body
  modelKwargs: {
    memori_attribution: { entity_id, process_id, session_id },
  },
});

A small fetch wrapper strips LangChain-injected defaults (top_p, n, presence_penalty, frequency_penalty) from outgoing bodies and recomputes Content-Length, so the node works cleanly against both OpenAI-backed and Anthropic-backed models routed through Memori (otherwise Anthropic rejects temperature + top_p together).
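That stripping step can be sketched as follows (a simplified stand-in; the package's actual wrapper and function names may differ):

```typescript
// Hypothetical simplification of the node's fetch wrapper: remove
// LangChain-injected sampling defaults that some upstreams (e.g. Anthropic
// models behind Memori) reject alongside temperature, then recompute the
// body length for the Content-Length header.
const STRIPPED_KEYS = ['top_p', 'n', 'presence_penalty', 'frequency_penalty'];

function stripLangChainDefaults(rawBody: string): { body: string; contentLength: number } {
  const parsed = JSON.parse(rawBody) as Record<string, unknown>;
  for (const key of STRIPPED_KEYS) delete parsed[key];
  const body = JSON.stringify(parsed);
  // Content-Length must match the shrunken body, byte-for-byte.
  return { body, contentLength: new TextEncoder().encode(body).length };
}
```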

Relevant discussion in the n8n community: https://community.n8n.io/t/openai-chat-model-support-for-extra-body-option-please/65574.

Development

git clone https://github.com/the-automagicians/memori-ai-model.git
cd memori-ai-model
npm install

npm run dev          # spins up a local n8n with the node pre-installed + live reload
npm run build        # one-shot TypeScript build + asset copy (@n8n/node-cli)
npm run build:watch  # incremental TypeScript rebuild
npm run lint
npm run lint:fix

npm run dev is the fastest inner loop: it starts a sub-process n8n at http://localhost:5678 with the node auto-installed into ~/.n8n-node-cli, and rebuilds on save.

Repo layout

credentials/
  MemoriApi.credentials.ts   # Memori API credential type
  memori.svg
nodes/
  LmChatMemori/
    LmChatMemori.node.ts     # the sub-node
    memori.svg
.github/workflows/
  ci.yml                     # lint + build on PRs and main
  publish.yml                # publishes to npm on v*.*.* tags

Release process

  1. Bump version in package.json.
  2. Commit, git tag -a vX.Y.Z -m "...", push the commit and the tag.
  3. publish.yml runs lint + build, then npm publish --provenance.

Publishing uses npm Trusted Publishing (OIDC) when configured on the package page; otherwise it falls back to an NPM_TOKEN secret.

Limitations

  • Self-hosted n8n only. Depends on @langchain/openai, so the package cannot be verified for n8n Cloud.
  • Body + headers only — no query-param or payload-envelope support. If a future Memori contract adds more signals, extend configuration.defaultHeaders / modelKwargs in supplyData.
  • No Responses API or built-in tools (code interpreter, web search, etc.). Kept minimal by design.

License

MIT © the-automagicians