n8n-nodes-minimax
This is an n8n community node that lets you use MiniMax AI in your n8n workflows.
MiniMax provides powerful large language models including MiniMax-M2.5, M2.1, M2 and M2-her for chat completion, reasoning, code generation and roleplay scenarios.
n8n is a fair-code licensed workflow automation platform.
Installation
Follow the installation guide in the n8n community nodes documentation.
npm package: n8n-nodes-minimax
Nodes
This package provides two nodes:
| Node | Type | Description |
|------|------|-------------|
| MiniMax | Action node | Standalone chat completion node for direct API calls in workflows |
| MiniMax Chat Model | AI sub-node | Language model node that plugs into n8n AI Agent, AI Chain and other AI nodes |
Credentials
- Create an API Key from the MiniMax platform:
  - International: platform.minimax.io
  - China (中国版): platform.minimaxi.com
- In n8n, create a new MiniMax API credential.
- Choose your API Type (4 options):
| API Type | Endpoint | Billing | Use Case |
|----------|----------|---------|----------|
| International — Standard | api.minimax.io | Per token | Global users, pay-as-you-go |
| International — Coding Plan | api.minimax.io/anthropic | Subscription | Global users, AI coding tools |
| 中国版 — Standard | api.minimaxi.com | Per token | China mainland users, pay-as-you-go |
| 中国版 — Coding Plan | api.minimaxi.com/anthropic | Subscription | China mainland users, AI coding tools |
- Paste your API key.
- (Standard only) Optionally enter your Group ID for legacy v1 API calls.
Important:
- International and China editions use different domains and different API keys — they are NOT interchangeable.
- Standard and Coding Plan use different API keys — they are NOT interchangeable either.
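The four API Types above map one-to-one to distinct base URLs, which is why credentials are not interchangeable. A minimal sketch of that mapping, using the endpoints from the table (the function name and the edition/plan encoding are illustrative, not part of the node):

```python
# Endpoints taken directly from the API Type table above.
# The (edition, plan) keys and helper name are illustrative only.
BASE_URLS = {
    ("international", "standard"): "https://api.minimax.io",
    ("international", "coding"): "https://api.minimax.io/anthropic",
    ("china", "standard"): "https://api.minimaxi.com",
    ("china", "coding"): "https://api.minimaxi.com/anthropic",
}

def resolve_base_url(edition: str, plan: str) -> str:
    """Return the API base URL for a given edition/plan pair."""
    try:
        return BASE_URLS[(edition, plan)]
    except KeyError:
        raise ValueError(f"Unknown API type: {edition} / {plan}")

print(resolve_base_url("china", "coding"))  # https://api.minimaxi.com/anthropic
```

Because each base URL pairs with its own API key, a key issued for one edition or plan fails against any of the other three endpoints.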
Supported Models
| Model | Description |
|-------|-------------|
| MiniMax-M2.5 | Peak performance, ~60 tokens/sec |
| MiniMax-M2.5 Highspeed | Same M2.5 performance, ~100 tokens/sec |
| MiniMax-M2.1 | Enhanced programming capabilities |
| MiniMax-M2.1 Highspeed | Faster variant, ~100 tokens/sec |
| MiniMax-M2 | Agentic capabilities, advanced reasoning |
| M2-her | Multi-character roleplay & dialogue |
Usage
MiniMax (Action Node)
Use this node for standalone API calls in a workflow:
- Add a MiniMax node to your workflow.
- Select Chat > Complete as the resource and operation.
- Choose a model and add your prompt messages.
- Optionally configure temperature, top_p, max_tokens and other parameters.
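The node combines these choices into a chat-completion request. A rough sketch of what such a payload could look like, assuming the common OpenAI-style chat schema (field names are an assumption, not the node's documented wire format):

```python
def build_chat_payload(model, messages, temperature=None, top_p=None, max_tokens=None):
    """Assemble a chat-completion request body (hypothetical schema).
    Optional sampling parameters are included only when set."""
    payload = {"model": model, "messages": messages}
    if temperature is not None:
        payload["temperature"] = temperature
    if top_p is not None:
        payload["top_p"] = top_p
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload

payload = build_chat_payload(
    "MiniMax-M2.5",
    [{"role": "user", "content": "Summarize this release in one line."}],
    temperature=0.7,
    max_tokens=256,
)
print(sorted(payload))  # ['max_tokens', 'messages', 'model', 'temperature']
```

Leaving a parameter unset defers to the API's default, which is why the sketch only emits fields you explicitly configure.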
MiniMax Chat Model (AI Sub-node)
Use this node as a language model inside AI Agent, AI Chain or other AI nodes:
- Add an AI Agent (or AI Chain) node to your workflow.
- Click the Model slot and select MiniMax Chat Model.
- Configure your MiniMax API credentials and choose a model.
- The Agent uses the model for reasoning, tool calling and response generation.
Tip: MiniMax-M2.5 and M2 support function calling / tool use, making them work well with AI Agents that need to invoke tools (e.g. MCP, HTTP, code execution).
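When an Agent uses tool calling, it advertises each tool to the model as a JSON schema. A hypothetical tool definition in the widely used OpenAI-style function format (the exact schema MiniMax expects is not specified in this README, and the tool itself is invented for illustration):

```python
# Hypothetical tool definition in the common OpenAI-style function schema.
# The tool name, description, and parameter shape are illustrative only.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```

In n8n you rarely write this by hand; attaching a tool node (e.g. an HTTP Request tool) to the Agent generates an equivalent schema for the model.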
Options (Action Node)
| Option | Standard | Coding Plan | Description |
|--------|:---:|:---:|-------------|
| Max Tokens | ✅ | ✅ | Limit the length of generated text |
| Temperature | ✅ | ✅ | Control randomness (0–2) |
| Top P | ✅ | ✅ | Nucleus sampling diversity control |
| Frequency Penalty | ✅ | — | Reduce repetition of tokens |
| Presence Penalty | ✅ | — | Encourage exploring new topics |
| Response Format | ✅ | — | Text or JSON Object output |
| Simplify | ✅ | ✅ | Return only essential message content |
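The Simplify option trims the full API response down to the message text. A sketch of what that reduction could look like, assuming an OpenAI-style response shape (illustrative, not the node's actual implementation):

```python
def simplify(response: dict) -> str:
    """Extract only the assistant's message text from a full
    chat-completion response (assumed OpenAI-style shape)."""
    return response["choices"][0]["message"]["content"]

# Illustrative full response; field names are assumptions.
full = {
    "id": "chatcmpl-123",
    "model": "MiniMax-M2.5",
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"total_tokens": 12},
}
print(simplify(full))  # Hello!
```

With Simplify off, downstream nodes receive the whole response object (IDs, usage counts, etc.); with it on, they get just the text most workflows need.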
Resources
- International: platform.minimax.io
- China (中国版): platform.minimaxi.com
- n8n Community Nodes Documentation
