# clawdbot-dingtalk

v0.4.8
DingTalk (钉钉) channel plugin for OpenClaw — enables AI agent messaging via DingTalk Stream API.
## Installation

```shell
openclaw plugins install clawdbot-dingtalk
```

- Plugin ID: `clawdbot-dingtalk` (used in `plugins.allow` / `plugins.entries`)
- NPM package: `clawdbot-dingtalk`
## Configuration

Edit `~/.openclaw/openclaw.json`:

```json
{
  "extensions": ["clawdbot-dingtalk"],
  "plugins": {
    "entries": {
      "clawdbot-dingtalk": {
        "enabled": true,
        "config": {
          "aliyunMcp": {
            "timeoutSeconds": 60,
            "tools": {
              "webSearch": { "enabled": false },
              "codeInterpreter": { "enabled": false },
              "webParser": { "enabled": false },
              "wan26Media": { "enabled": false, "autoSendToDingtalk": true }
            }
          }
        }
      }
    }
  },
  "channels": {
    "clawdbot-dingtalk": {
      "enabled": true,
      "clientId": "your-dingtalk-client-id",
      "clientSecret": "your-dingtalk-client-secret"
    }
  },
  "models": {
    "providers": {
      "dashscope": {
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "apiKey": "sk-xxx",
        "api": "openai-completions",
        "models": [
          { "id": "qwen3-coder-plus", "contextWindow": 1000000, "maxTokens": 65536 }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "dashscope/qwen3-coder-plus" }
    }
  },
  "tools": {
    "web": {
      "search": {
        "enabled": false
      }
    }
  }
}
```

## Aliyun MCP Tools (Optional)
The plugin can register 4 built-in DashScope MCP tools, each with an independent switch:
- `web_search` -> WebSearch
- `aliyun_code_interpreter` -> code_interpreter_mcp
- `aliyun_web_parser` -> WebParser
- `aliyun_wan26_media` -> Wan26Media (supports auto-send back to the current DingTalk session)
All four switches default to `false` (disabled). When the plugin's `web_search` is disabled and `tools.web.search.enabled` is `false`, no web search tool is available.
API key priority (high -> low):
1. `DASHSCOPE_MCP_<TOOL>_API_KEY`
2. `DASHSCOPE_API_KEY`
3. `plugins.entries.clawdbot-dingtalk.config.aliyunMcp.apiKey`
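The precedence above can be sketched as a small resolver. This is an illustrative sketch, not the plugin's actual code; the function name `resolveMcpApiKey` and the `McpConfig` shape are hypothetical.

```typescript
// Illustrative sketch of the documented API-key precedence (high -> low).
// Names here (resolveMcpApiKey, McpConfig) are hypothetical, not plugin APIs.
interface McpConfig {
  apiKey?: string; // plugins.entries.clawdbot-dingtalk.config.aliyunMcp.apiKey
}

function resolveMcpApiKey(
  tool: string, // per-tool env-var segment, e.g. "WEB_SEARCH"
  env: Record<string, string | undefined>,
  config: McpConfig,
): string | undefined {
  return (
    env[`DASHSCOPE_MCP_${tool}_API_KEY`] ?? // 1. per-tool env var
    env["DASHSCOPE_API_KEY"] ??             // 2. global env var
    config.apiKey                           // 3. plugin config
  );
}
```

A per-tool env var wins even when the global key and the config key are both set.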
### MCP Availability and Fallback Behavior
- The agent uses availability-first behavior for all four built-in MCP tools.
- If a tool is disabled / not activated in Bailian / missing API key, the agent should briefly explain this and continue with a practical fallback instead of stalling.
- `aliyun_web_parser` works best with publicly accessible HTTP/HTTPS pages; login-only pages may fail.
- `aliyun_wan26_media` can involve an async task flow (submit + fetch result). The agent should not claim generation is complete before the final success status is returned.
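The availability-first behavior above can be sketched as a pre-flight check before each tool call. This is a minimal sketch under assumed shapes (`ToolState`, `dispatchTool` are hypothetical); the plugin's real dispatch logic is not shown here.

```typescript
// Hypothetical sketch of "availability-first" dispatch: check the switch and
// API key first, and fall back with a brief explanation instead of stalling.
interface ToolState {
  enabled: boolean; // the tool's independent config switch
  apiKey?: string;  // resolved DashScope API key, if any
}

function dispatchTool(
  name: string,
  state: ToolState,
  run: () => string, // the actual tool invocation
): string {
  if (!state.enabled) {
    return `${name} is disabled in config; continuing without it.`;
  }
  if (!state.apiKey) {
    return `${name} has no API key configured; continuing without it.`;
  }
  return run();
}
```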
## DashScope Thinking Mode (Native)

DashScope's `enable_thinking` is natively supported via the `/think` command. No proxy is needed.

To enable thinking mode for a session:

```
/think on
```

Or use one-shot thinking for a single message:

```
/t! on Please analyze this code for me
```

Thinking levels: `off`, `minimal`, `low`, `medium`, `high`, `xhigh`. `/think on` maps to `high` (OpenClaw gateway "high").
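The level handling above can be sketched as two small helpers. This is an assumption-laden illustration: only `enable_thinking` and the "`/think on` maps to `high`" rule come from this document; the function names and request shape are hypothetical.

```typescript
// Illustrative mapping from /think levels to a DashScope-style request field.
// Levels beyond on/off are gateway-side; only enable_thinking is documented here.
type ThinkLevel = "off" | "minimal" | "low" | "medium" | "high" | "xhigh";

// "/think on" maps to "high", per the plugin's documented behavior.
function parseThinkCommand(arg: string): ThinkLevel {
  return arg === "on" ? "high" : (arg as ThinkLevel);
}

// Any level other than "off" turns native thinking on.
function thinkingParams(level: ThinkLevel): { enable_thinking: boolean } {
  return { enable_thinking: level !== "off" };
}
```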
## Reasoning Visibility (Optional)

Use `/reasoning on` to show model reasoning in replies (rendered as subtle Markdown blockquotes), and `/reasoning off` to hide it.
## Start Gateway

```shell
openclaw gateway
```

## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| enabled | boolean | true | Enable/disable the channel |
| clientId | string | - | DingTalk app Client ID (required) |
| clientSecret | string | - | DingTalk app Client Secret (required) |
| clientSecretFile | string | - | Path to file containing client secret |
| replyMode | "text" \| "markdown" | "text" | Message format |
| maxChars | number | 1800 | Max characters per message chunk |
| allowFrom | string[] | [] | Allowlist of sender IDs (empty = allow all) |
| requirePrefix | string | - | Require messages to start with prefix |
| isolateContextPerUserInGroup | boolean | false | When enabled, isolate session context per user in group chats |
| responsePrefix | string | - | Prefix added to responses |
| tableMode | "code" \| "off" | "code" | Table rendering mode |
| showToolStatus | boolean | false | Show tool execution status |
| showToolResult | boolean | false | Show tool results |
| blockStreaming | boolean | true | Enable/disable block streaming before final reply |
| thinking | string | "off" | Thinking mode (off/minimal/low/medium/high) |
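The `maxChars` option caps each outgoing message chunk. The sketch below illustrates one way such a splitter could work (preferring newline boundaries); it mirrors the documented limit in spirit, but the plugin's real splitter may differ.

```typescript
// Illustrative chunker for the maxChars option (default 1800): split a long
// reply into chunks no longer than the limit, preferring newline boundaries.
function splitMessage(text: string, maxChars = 1800): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > maxChars) {
    // Prefer breaking at the last newline within the limit; otherwise hard-cut.
    let cut = rest.lastIndexOf("\n", maxChars);
    if (cut <= 0) cut = maxChars;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, ""); // drop the consumed newline
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```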
## AI Card (Advanced Interactive Card)

Enable the AI Card capability via config:
```json
{
  "channels": {
    "clawdbot-dingtalk": {
      "aiCard": {
        "enabled": true,
        "templateId": "your-template-id",
        "autoReply": true,
        "textParamKey": "content",
        "defaultCardData": {
          "title": "OpenClaw"
        },
        "callbackType": "STREAM",
        "updateThrottleMs": 800,
        "fallbackReplyMode": "markdown",
        "openSpace": {
          "imGroupOpenSpaceModel": {
            "openConversationId": "cidxxx"
          }
        }
      }
    }
  }
}
```

Notes:

- `callbackType` should be `STREAM` to receive card callbacks over the Stream API.
- `autoReply=true` maps plain text replies onto card variables.
- `blockStreaming=true` (the default) sends incremental block updates first; when set to `false`, the reply degrades to final-only (the card still completes, but without incremental updates).
- The current version writes both the `msgContent` and `text` keys by default; if `textParamKey` is configured, that key is written as well, for compatibility with different template variable names.
- Inbound AI cards now use a true streaming state machine:
  1. create the card instance
  2. deliver it to the target openSpace
  3. set `flowStatus=2` (INPUTING)
  4. push incremental chunks via `PUT /v1.0/card/streaming`
  5. finalize with `isFinalize=true`, then set `flowStatus=3` (FINISHED)
- `updateThrottleMs` controls the non-final streaming update frequency; the final update always flushes.
- If `openSpace`/`openSpaceId` is missing, card delivery falls back to text.
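The throttle semantics can be sketched as follows: non-final updates inside the `updateThrottleMs` window are dropped, while a finalize always goes through. The transport (DingTalk's `PUT /v1.0/card/streaming`) is abstracted behind a callback; the `CardStreamer` class and its method names are illustrative, not the plugin's internals.

```typescript
// Sketch of the documented streaming throttle: non-final card updates are
// rate-limited by updateThrottleMs; the final update always flushes.
class CardStreamer {
  private lastSent = 0;

  constructor(
    private send: (content: string, isFinalize: boolean) => void,
    private updateThrottleMs = 800,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Returns true if the update was sent, false if it was throttled.
  push(content: string, isFinalize = false): boolean {
    const t = this.now();
    if (!isFinalize && t - this.lastSent < this.updateThrottleMs) {
      return false; // dropped: still inside the throttle window
    }
    this.lastSent = t;
    this.send(content, isFinalize);
    return true;
  }
}
```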
Required DingTalk permissions for streaming cards:

- `Card.Streaming.Write`
- `Card.Instance.Write`
- `qyapi_robot_sendmsg`
## Chat Commands

The following chat switches are supported in DingTalk:

- `/new` - Reset session context
- `/think [off|minimal|low|medium|high]` - Set thinking level (`/think on` => `high`)
- `/t! [off|minimal|low|medium|high|on] <message>` - One-shot thinking (does not persist)
- `/reasoning [on|off|stream]` - Toggle reasoning visibility
- `/model <provider/model>` - Switch model
- `/models [provider]` - List providers or models under a provider
- `/verbose on|off|full` - Toggle non-final updates (tool/block)
Notes:

- Commands respect `allowFrom` and `requirePrefix` (in group chats).
- Inline usage is supported (e.g., "take a look for me: /model openai/gpt-4o").
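The gating described above can be sketched as a single check on each inbound message, assuming the documented semantics (empty `allowFrom` = allow all; `requirePrefix` applies in group chats). `shouldHandle` and `GateConfig` are hypothetical names, not the plugin's internals.

```typescript
// Illustrative inbound-message gate combining allowFrom and requirePrefix.
interface GateConfig {
  allowFrom: string[];    // allowlist of sender IDs; empty = allow all
  requirePrefix?: string; // required message prefix, applied in group chats
}

function shouldHandle(
  senderId: string,
  text: string,
  cfg: GateConfig,
  isGroup: boolean,
): boolean {
  if (cfg.allowFrom.length > 0 && !cfg.allowFrom.includes(senderId)) {
    return false; // sender not on the non-empty allowlist
  }
  if (isGroup && cfg.requirePrefix && !text.startsWith(cfg.requirePrefix)) {
    return false; // group message missing the required prefix
  }
  return true;
}
```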
## Multi-account Configuration

```json
{
  "channels": {
    "clawdbot-dingtalk": {
      "accounts": {
        "bot1": {
          "enabled": true,
          "clientId": "client-id-1",
          "clientSecret": "secret-1",
          "name": "Support Bot"
        },
        "bot2": {
          "enabled": true,
          "clientId": "client-id-2",
          "clientSecret": "secret-2",
          "name": "Dev Bot"
        }
      }
    }
  }
}
```

## Message Coalescing
Control how streaming messages are batched before sending:
```json
{
  "channels": {
    "clawdbot-dingtalk": {
      "coalesce": {
        "enabled": true,
        "minChars": 800,
        "maxChars": 1200,
        "idleMs": 1000
      }
    }
  }
}
```

## Running as a Service
### Using systemd (Linux)

Create `/etc/systemd/system/openclaw.service`:
```ini
[Unit]
Description=OpenClaw Gateway
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/bin/openclaw gateway
Restart=always
RestartSec=10
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```shell
sudo systemctl enable openclaw
sudo systemctl start openclaw
```

### Using PM2
```shell
npm install -g pm2
pm2 start "openclaw gateway" --name openclaw
pm2 save
pm2 startup
```

## DingTalk Setup
1. Go to the DingTalk Open Platform
2. Create an Enterprise Internal Application
3. Enable the "Robot" capability
4. Get the Client ID and Client Secret from "Credentials & Basic Info"
5. Configure the robot's messaging subscription
## License

MIT
