n8n-nodes-openclaw-chat
v1.0.8
n8n node to interact with OpenClaw Gateway HTTP API (chat completions, embeddings, models, responses, tools)
n8n Node for OpenClaw Gateway HTTP API
This n8n community node provides integration with OpenClaw Gateway HTTP API, enabling you to interact with OpenClaw agents directly from your n8n workflows. It supports chat completions, embeddings, model listing, OpenResponses, and direct tool invocation.
Features
- Chat Completions: Standard OpenAI-compatible chat completions with streaming support
- Embeddings: Generate embeddings for text inputs
- Model Listing: List available OpenClaw models/agents
- OpenResponses: Native OpenClaw responses with file uploads, tools, and session management
- Tool Invocation: Direct tool calls (memory search, session listing, etc.)
- Flexible Authentication: Support for multiple OpenClaw Gateway instances
- Error Handling: Comprehensive error handling with informative messages
Installation
Method 1: Community Nodes (Recommended)
- Go to Settings → Community Nodes in your n8n instance
- Click "Install"
- Enter `n8n-nodes-openclaw-chat` and click "Install"
Method 2: Manual Installation
```shell
cd ~/.n8n/nodes
npm install n8n-nodes-openclaw-chat
```

Method 3: Docker Deployment
Add to your docker-compose.yml:
```yaml
volumes:
  - ~/.n8n/nodes:/home/node/.n8n/nodes
environment:
  - N8N_CUSTOM_EXTENSIONS=n8n-nodes-openclaw-chat
```

Configuration
1. Create OpenClaw API Credentials
- In your n8n workflow, add an OpenClaw Chat node
- Click the "Create New Credential" button
- Select "OpenClaw API" credential type
- Fill in the following fields:
| Field | Description | Example |
|-------|-------------|---------|
| Base URL | Your OpenClaw Gateway URL | http://localhost:18789 or https://agent1.apipie.ai |
| API Token | Bearer token from OpenClaw Gateway | 82af605cb191558a24874cdc5983cd07bdd40819723920cb2502f4bd456bc026 |
| Default Agent ID | Default agent for chat completions | default, main, webchat |
| Allow Insecure HTTPS | Allow self-signed certificates (dev only) | false |
Security Note: Keep your API token secure. Never commit it to version control.
2. Get Your OpenClaw Gateway Token
- Local OpenClaw: Check `~/.openclaw/openclaw.json` for `gateway.auth.token`
- OpenClaw Cloud: Available in your instance dashboard
- Generate a new token: Run `openclaw gateway token generate`
Node Operations
📨 Chat Completion
Endpoint: POST /v1/chat/completions
Create chat completions with OpenAI-compatible API.
Parameters:
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| Model | String | Yes | openclaw/default | Model identifier (openclaw/<agentId>) |
| Messages | Array | Yes | - | Conversation history with role/content |
| Stream | Boolean | No | false | Enable streaming response (SSE) |
| Max Tokens | Number | No | 4096 | Maximum tokens to generate |
| Temperature | Number | No | 1 | Sampling temperature (0-2) |
| Top P | Number | No | 1 | Nucleus sampling (0-1) |
| Frequency Penalty | Number | No | 0 | Frequency penalty (-2 to 2) |
| Presence Penalty | Number | No | 0 | Presence penalty (-2 to 2) |
| Stop Sequences | String | No | - | Sequences to stop generation |
| User | String | No | - | End-user identifier |
Example Workflow:
Manual Trigger → OpenClaw Chat (Chat Completion)

Node Configuration:
- Resource: `Chat Completion`
- Operation: `Create`
- Model: `openclaw/default`
- Messages:
  - Role: `system`, Content: `You are a helpful assistant`
  - Role: `user`, Content: `What skills are installed?`
- Temperature: `0.7`
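For reference, this configuration corresponds to a request body along these lines (a sketch; the fields follow the OpenAI-compatible schema in the parameter table above):

```json
{
  "model": "openclaw/default",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant" },
    { "role": "user", "content": "What skills are installed?" }
  ],
  "temperature": 0.7,
  "stream": false
}
```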
🔤 Embeddings
Endpoint: POST /v1/embeddings
Generate embeddings for input text.
Parameters:
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| Model | String | Yes | openclaw/default | Embedding model |
| Input | String | Yes | - | Text or array of texts to embed |
| Encoding Format | Options | No | float | float or base64 |
| User | String | No | - | End-user identifier |
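As with chat completions, the table above maps onto an OpenAI-style request body. A sketch (the array form of `input` assumes the gateway accepts batched texts, per the Input description):

```json
{
  "model": "openclaw/default",
  "input": ["First text to embed", "Second text to embed"],
  "encoding_format": "float"
}
```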
Example Workflow:
Webhook → OpenClaw Chat (Embeddings) → Vector Database

📋 Model Listing
Endpoint: GET /v1/models
List available OpenClaw models and agents.
Example Workflow:
Schedule Trigger → OpenClaw Chat (Model List) → Google Sheets

🚀 OpenResponses
Endpoint: POST /v1/responses
Native OpenClaw responses with advanced features (files, tools, session persistence).
Parameters:
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| Model | String | Yes | openclaw | Model identifier |
| Input | String/Array | Yes | - | Text, message array, or structured input |
| Stream | Boolean | No | false | Enable streaming response |
| Tools | JSON | No | - | Tool definitions array |
| User | String | No | - | Stable session identifier |
| Previous Response ID | String | No | - | Continue previous conversation |
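A sketch of a follow-up request that continues an earlier conversation (the response ID value is hypothetical; use the `id` returned by the previous call):

```json
{
  "model": "openclaw",
  "input": "Follow up on my previous question",
  "user": "customer-12345",
  "previous_response_id": "resp_abc123"
}
```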
Example Workflow:
Email Attachment → File Parse → OpenClaw Chat (Response) → Send Answer

🛠️ Tool Invocation
Endpoint: POST /tools/invoke
Direct tool invocation without full agent turn.
Parameters:
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| Tool Name | String | Yes | - | Tool name (e.g., memory_search, sessions_list) |
| Action | String | No | - | Tool action (e.g., json) |
| Arguments | JSON | No | {} | Tool arguments |
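As a sketch only, a request body for this endpoint might look like the following; the field names (`tool`, `action`, `args`) are assumptions inferred from the parameter table, so check your gateway's API reference before relying on them:

```json
{
  "tool": "memory_search",
  "action": "json",
  "args": { "query": "deployment checklist" }
}
```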
Example Workflow:
Manual Trigger → OpenClaw Chat (Tool) → Filter → Display Results

Supported Tools (subject to gateway policy):
- `memory_search` - Search memory files
- `sessions_list` - List active sessions
- `cron_list` - List cron jobs
- `gateway_status` - Gateway health check
- ...and other non-destructive tools
Advanced Usage
Streaming Responses
Enable streaming for real-time token delivery:
- Select "Create Streaming" operation for Chat Completion or Response resources
- Connect to "Split In Batches" node to process chunks
- Use "Code" node to assemble final response
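The steps above assemble a stream of OpenAI-style SSE frames; on the wire, the chunks look roughly like this (values are illustrative):

```text
data: {"choices":[{"delta":{"content":"Hel"}}]}

data: {"choices":[{"delta":{"content":"lo"}}]}

data: [DONE]
```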
Session Management
Use the `user` field to maintain conversation context across nodes:

```json
{
  "user": "customer-12345",
  "input": "Follow up on my previous question"
}
```

File Uploads (OpenResponses)
Use OpenResponses endpoint with base64-encoded files:
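To produce the value for the `data` field, encode the raw file bytes as base64 first. A minimal sketch (GNU coreutils syntax; on macOS use `base64 -i document.pdf`):

```shell
# Write a stand-in file, then emit its contents as a single-line base64 string
printf 'sample contents' > document.pdf
base64 -w0 document.pdf
```

The printed string is what goes into the `data` field of the request body.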
```json
{
  "input": [
    {
      "type": "message",
      "role": "user",
      "content": "Summarize this PDF"
    },
    {
      "type": "input_file",
      "source": {
        "type": "base64",
        "media_type": "application/pdf",
        "data": "BASE64_DATA",
        "filename": "document.pdf"
      }
    }
  ]
}
```

Tool Definitions (OpenResponses)
Define client tools for agent use:
```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          },
          "required": ["location"]
        }
      }
    }
  ]
}
```

Example Workflows
1. Customer Support Bot
```text
Webhook (Customer Question)
  ↓
OpenClaw Chat (Chat Completion)
  ↓
IF (Needs Human)
  ├→ Slack Notification
  └→ Ticket Creation
  ↓
OpenClaw Chat (Response with Tools)
  ↓
Send Email Response
```

2. Document Processing Pipeline
```text
Google Drive (New File)
  ↓
Convert to Text
  ↓
OpenClaw Chat (Embeddings)
  ↓
Qdrant Vector Store
  ↓
OpenClaw Chat (Response with RAG)
  ↓
Save to Database
```

3. Automated Monitoring
```text
Schedule Trigger (Every hour)
  ↓
OpenClaw Chat (Tool: sessions_list)
  ↓
Filter (Inactive > 24h)
  ↓
IF (Found)
  └→ Slack Alert
```

Error Handling
The node includes comprehensive error handling:
| Error Type | Description | Resolution |
|------------|-------------|------------|
| Authentication Failed | Invalid API token or base URL | Verify credentials, check gateway status |
| Endpoint Not Found | API endpoint disabled | Enable endpoint in gateway config |
| Rate Limited | Too many requests | Add wait node, implement backoff |
| Invalid Input | Malformed request body | Check parameter formatting |
| Tool Denied | Tool blocked by gateway policy | Use allowed tools only |
Enable "Continue on Fail" in node settings to handle errors gracefully in your workflow.
Development
Project Structure
```text
n8n-nodes-openclaw-chat/
├── src/
│   ├── credentials/
│   │   └── OpenClawApi.credentials.ts
│   ├── nodes/
│   │   └── OpenClaw/
│   │       ├── OpenClawChat.node.ts
│   │       └── GenericFunctions.ts
│   └── index.ts
├── package.json
├── tsconfig.json
└── README.md
```

Build from Source
```shell
# Clone repository
git clone https://github.com/EncryptShawn/n8n-nodes-openclaw-chat.git
cd n8n-nodes-openclaw-chat

# Install dependencies
npm install

# Build TypeScript
npm run build

# Run tests
npm test

# Lint code
npm run lint
```

Testing with Local n8n
- Build the node: `npm run build`
- Copy to the n8n custom nodes directory: `cp -r dist ~/.n8n/custom/n8n-nodes-openclaw-chat`
- Restart n8n
- The node will appear as "OpenClaw Chat" in the node panel
API Reference
For complete OpenClaw Gateway HTTP API documentation, see:
Troubleshooting
Node Not Appearing
- Check n8n logs for loading errors
- Verify the node is in the `~/.n8n/custom/` directory
- Restart the n8n instance
- Check n8n version compatibility (requires n8n 1.0+)
Authentication Errors
```json
{
  "error": "OpenClaw API request failed: HTTP 401"
}
```

- Verify the base URL includes the protocol (`http://` or `https://`)
- Check that the API token matches the gateway token
- Test the connection with curl:

```shell
curl -H "Authorization: Bearer YOUR_TOKEN" http://localhost:18789/health
```
Endpoint Errors
```json
{
  "error": "OpenClaw API request failed: HTTP 404"
}
```

- Ensure the endpoint is enabled in the gateway config:

```json
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true },
        "responses": { "enabled": true }
      }
    }
  }
}
```

- Restart the gateway after config changes
Streaming Issues
- Ensure you're using "Create Streaming" operation
- Check n8n version supports streaming responses
- Verify network connectivity to gateway
Contributing
- Fork the repository
- Create a feature branch
- Make changes with tests
- Submit a pull request
Please follow the n8n Node Development Guidelines.
Example Workflows
Check the examples/ directory for ready-to-import n8n workflows:
- Basic Chat Completion (`examples/basic-chat-completion.json`) – Simple manual trigger asking OpenClaw to list installed skills
To import a workflow in n8n:
- Go to Workflows → Import from file
- Select the JSON file
- Configure your OpenClaw API credentials (see Configuration)
- Execute the workflow
License
MIT © EncryptShawn
Support
- Issues: GitHub Issues
- Documentation: OpenClaw Docs
- Community: OpenClaw Discord
Happy Automating! 🚀
