# n8n-nodes-unify-llm

Use the full `@atom8ai/unify-llm` stack inside n8n with a LangChain-style experience:
- Multi-provider inference via connected AI model sub-nodes
- Structured output with JSON schema
- Guarded generation checks
- Runnable chain-style orchestration
- Ensemble router intelligence (`BayesianUtilityRouter`, `ParetoNavigatorRouter`, `PrimRouter`)
- Persistent vector retrieval with local cache, Qdrant, or Pinecone backends
## Included

- `Unify LLM` node with resources: Orchestration, Vector Store
- Provider credential nodes (for use by model sub-nodes): `Unify LLM OpenAI API`, `Unify LLM Anthropic API`, `Unify LLM Gemini API`, `Unify LLM Ollama API`
## Operations

### Orchestration

- Generate
- Generate Structured (schema JSON)
- Guarded Generate (advanced runtime options)
- Chain Generate
- Chain Structured
- Quickstart Ask
- Route Only
- Routed Generate
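To illustrate what a schema-constrained operation like Generate Structured buys you, here is a standalone validation sketch; the `MiniSchema` shape and function names are hypothetical, not the node's implementation:

```typescript
// Hypothetical sketch: check a raw LLM reply against a minimal schema
// (required keys plus primitive types) before passing it downstream.
type MiniSchema = { required: string[]; types: Record<string, string> };

function validateAgainstSchema(
  raw: string,
  schema: MiniSchema,
): Record<string, unknown> {
  const parsed = JSON.parse(raw) as Record<string, unknown>;
  for (const key of schema.required) {
    if (!(key in parsed)) throw new Error(`missing required key: ${key}`);
    if (typeof parsed[key] !== schema.types[key]) {
      throw new Error(`key ${key} should be of type ${schema.types[key]}`);
    }
  }
  return parsed;
}

const schema: MiniSchema = {
  required: ["title", "score"],
  types: { title: "string", score: "number" },
};

// A well-formed reply passes; a malformed one throws before it can
// corrupt later workflow steps.
const reply = validateAgainstSchema('{"title":"ok","score":0.9}', schema);
```

In the node itself you supply the JSON schema as an operation parameter; the point of the sketch is only that invalid replies fail loudly at the boundary.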
### Vector Store

- Upsert Documents (persist vectors/documents into the selected backend)
- Similarity Search (query against previously persisted vectors)

Supported vector backends:

- Persistent Local Cache (workflow static data namespace)
- Qdrant (REST API)
- Pinecone (REST API)
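The upsert-then-search contract shared by all three backends can be sketched with a plain in-memory store and cosine ranking (illustrative only; the real backends persist to workflow static data or to remote Qdrant/Pinecone indexes):

```typescript
// Hypothetical sketch of the Vector Store contract: upsert replaces by id,
// similarity search ranks stored vectors by cosine similarity to the query.
type Doc = { id: string; vector: number[]; text: string };

const store: Doc[] = [];

function upsert(doc: Doc): void {
  const i = store.findIndex((d) => d.id === doc.id);
  if (i >= 0) store[i] = doc; // replace the existing entry on matching id
  else store.push(doc);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function similaritySearch(query: number[], topK: number): Doc[] {
  return [...store]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, topK);
}

upsert({ id: "a", vector: [1, 0], text: "alpha" });
upsert({ id: "b", vector: [0, 1], text: "beta" });
const hits = similaritySearch([0.9, 0.1], 1);
console.log(hits[0].id); // → "a" (closest by cosine similarity)
```

This is also why the operation order matters: Similarity Search can only rank what a prior Upsert Documents call has persisted.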
## Build

- `npm install`
- `npm run build`
- `npm pack`
## Publish to npm (community-node ready)

Before publishing:

- Ensure you are authenticated (`npm whoami`).
- Run quality checks (`npm run lint`, `npm run build`).
- Preview package contents (`npm pack --dry-run`) and confirm `dist/nodes/UnifyLlm/unifyLlm.svg` is included.
Publish:

- First release (unscoped): `npm publish --access public`
- Recommended CI release with provenance: `npm publish --access public --provenance`
After publishing:

- Verify the package page: https://www.npmjs.com/package/n8n-nodes-unify-llm
- Submit for verification in the n8n Creator Portal: https://creators.n8n.io/nodes
## Notes

- Connect an AI model node to the `Unify LLM` Chat Model input (`ai_languageModel`).
- Provider credentials are configured on the connected model node, not on the `Unify LLM` root node.
- If multiple models are connected, router operations can use the ordered model inputs.
- For vector workflows at scale, run `Vector Store > Upsert Documents` first and then `Vector Store > Similarity Search`. `Persistent Local Cache` stores vectors in workflow static data, while `Qdrant` and `Pinecone` use their respective remote indexes.
