@agentled/mcp-server
v0.15.19
MCP server for Agentled — intelligent AI workflow orchestration with long-term memory, 100+ integrations, and unified credits.
@agentled/mcp-server
The automation engine built for AI agents. Intelligent AI workflow orchestration with long-term memory, 100+ integrations, and unified credits.
What is Agentled?
Agentled is the automation engine built for AI agents. It gives Claude, Codex, Cursor, Windsurf, and any MCP-compatible client direct access to intelligent workflow orchestration, long-term memory, and 100+ integrations.
Three things make it different:
🧠 Long-Term Memory — A built-in Knowledge Graph stores insights across workflow executions. Your agents get smarter over time — they remember past research, lead scores, content performance, and business context.
⚡ Unified Credits — One API key, one credit system, 100+ services. No need to sign up for LinkedIn, email, scraping, AI models, or video generation separately. Connect once, use everything.
🎯 Intelligent Orchestration — AI reasons at every step. Workflows aren't just "if this then that" — they understand context, make decisions, and adapt to results.
See it in action
$ agentled create "Outbound to fintech CTOs in Europe"
Loading workspace context from Knowledge Graph...
✦ ICP loaded ✦ 3 prior campaigns ✦ 847 contacts in KG
Creating campaign with 3 workflows...
━━ Workflow 1: Prospect Research linkedin · hunter · clearbit
✓ LinkedIn: CTO + fintech + EU → 189 profiles
✓ Enriched via Hunter + Clearbit → 156 matched
✓ ICP scoring → 43 high-intent leads
━━ Workflow 2: Signal Detection web-scraper · crunchbase
✓ Job postings → 12 hiring devops
✓ Crunchbase → 8 recently funded
✓ Cross-match: hiring + funded → 5 hot leads
━━ Workflow 3: Outreach email · linkedin · kg
✓ Personalized emails from context
✓ LinkedIn requests with custom notes
✓ 43 leads saved to Knowledge Graph
Campaign saved. Scheduled: every 48h
Credits used: 720
→ https://www.agentled.app/your-team/fintech-cto-outbound

One prompt. Three workflows. LinkedIn enrichment, email finding, AI scoring, multi-channel outreach — all orchestrated, all stored in the Knowledge Graph for the next run.
Quick Start
claude mcp add --transport stdio --scope user agentled \
-e AGENTLED_API_KEY=wsk_... \
-- npx -y @agentled/mcp-server

--scope user registers the server in your user MCP config so it loads in every project (not only the repo where you ran the command). Use a distinct server name (e.g. agentled_my_workspace) if you add multiple workspaces. For team-shared config in git, use --scope project and .mcp.json instead (Claude Code MCP scopes).
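For clients that read a JSON MCP config (Cursor, Windsurf, and similar), the equivalent entry would look like the sketch below — the server name and env var mirror the command above, but check your client's docs for the exact config file location:

```json
{
  "mcpServers": {
    "agentled": {
      "command": "npx",
      "args": ["-y", "@agentled/mcp-server"],
      "env": { "AGENTLED_API_KEY": "wsk_..." }
    }
  }
}
```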
Local development
Use the local built entrypoint when you want to test unpublished changes against a
local app. npx -y @agentled/mcp-server always uses the latest published npm package.
cd agentled-mcp-server
npm run build
claude mcp add --transport stdio agentled_local \
--env AGENTLED_API_KEY=wsk_... \
--env AGENTLED_URL=http://localhost:8080 \
-- node /absolute/path/to/agentsled-front/agentled-mcp-server/dist/index.js

Getting your API key
- Sign up at agentled.app
- Open Workspace Settings > Developer
- Generate a new API key (starts with wsk_)
Why Agentled MCP?
One API Key. One Credit System. 100+ Services.
No need to sign up for LinkedIn APIs, email services, web scrapers, video generators, or AI models separately. Agentled handles all integrations through a single credit system.
| Capability | Credits | Without Agentled |
|-----------|---------|-----------------|
| LinkedIn company enrichment | 50 | LinkedIn API ($99/mo+) |
| Email finding & verification | 5 | Hunter.io ($49/mo) |
| AI analysis (Claude/GPT/Gemini) | 10-30 | Multiple API keys + billing |
| Web scraping | 3-10 | Apify account ($49/mo+) |
| Image generation | 30 | DALL-E/Midjourney subscription |
| Video generation (8s scene) | 300 | RunwayML ($15/mo+) |
| Text-to-speech | 60 | ElevenLabs ($22/mo+) |
| Knowledge Graph storage | 1-2 | Custom infrastructure |
| CRM sync (Affinity, HubSpot) | 5-10 | CRM API + middleware |
Workflows That Learn
Other automation tools start from zero every run. Agentled's Knowledge Graph remembers across executions — what worked, what didn't, what humans corrected. Scoring workflows can use compact row-level scoring_profile summaries and bounded scoring-memory retrieval so every run compounds on the last without dumping raw history into prompts.
Run 1: Investor scoring → 62% accuracy (cold start)
Run 5: → 78% (learning from IC feedback)
Run 12: → 89% (compound learning from outcomes, zero manual tuning)

Intelligent Orchestration
Unlike trigger-action tools, Agentled workflows have AI reasoning at every step. Multi-model support (Claude, GPT-4, Gemini, Mistral, DeepSeek, Moonshot), adaptive execution, and human-in-the-loop approval gates when needed.
Agent Teams
Agent Teams let you run multiple AI specialists in a single workflow step. Pick a preset and describe what you need — the team handles coordination, delegation, and synthesis.
"Add an Agent Team step that researches the company and produces an investment memo"

Six built-in presets cover the most common patterns:
| Preset | What it does |
|--------|-------------|
| research-and-summarize | Specialists gather information, one synthesizes a summary |
| analyze-and-recommend | Multiple analysts evaluate options, produce a ranked recommendation |
| generate-then-review | A generator drafts content, reviewers critique and refine |
| compare-options | Specialists argue for competing options, coordinator arbitrates |
| investigate-in-parallel | Independent specialists explore different angles simultaneously |
| review-and-improve | Reviewers find issues, an editor applies improvements |
When creating Agent Team steps via MCP, include preset metadata so the step opens correctly in the builder:
{
"id": "analyze",
"type": "agentOrchestrator",
"name": "Agent Team",
"orchestratorConfig": {
"pattern": "supervisor",
"workers": [
{ "id": "researcher", "name": "Researcher", "systemPrompt": "Research {{input.company_url}} — team, funding, market position" },
{ "id": "analyst", "name": "Analyst", "systemPrompt": "Analyse the research. Identify risks and growth signals." }
]
},
"metadata": {
"agentTeamPreset": "research-and-summarize",
"agentTeamMode": "simple",
"agentTeamUxVersion": 1
},
"next": { "stepId": "milestone" }
}

Existing steps created with raw orchestratorConfig and no metadata continue to work — they open in advanced mode in the builder without errors.
Analytics vs ROI semantics
When describing workflow outcomes, keep these terms separate:
- pipeline.analyticsConfig = business metrics (execution outcome stats shown in Business Metrics cards/charts).
- pipeline.metadata.roi = ROI assumptions/rollups (time saved and cost-value estimates).
If you update one without the other, name exactly what changed (e.g. "business metrics configured" vs "ROI assumptions configured").
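A minimal sketch of where the two blocks live on the pipeline object — field contents here are illustrative, not a schema:

```json
{
  "analyticsConfig": { "metrics": ["leads_scored", "emails_sent"] },
  "metadata": {
    "roi": { "timeSavedMinutesPerRun": 45, "valuePerRun": 120 }
  }
}
```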
CLI parity guard
The repository includes an automated parity guard so MCP tool additions do not silently drift from the CLI surface.
- Test: __tests__/cli/cli-mcp-parity.test.ts
- Docs: docs/CLI_MCP_PARITY.md
Run it with:
yarn test:node -- cli-mcp-parity.test.ts

What Can You Build?
Lead Enrichment & Sales Automation
"Find fintech CTOs in Europe, enrich via LinkedIn + Hunter, score by ICP fit,
draft personalized outreach, save everything to the Knowledge Graph"

Content & Media Production
"Scrape trending topics in our niche, generate 5 LinkedIn posts with AI,
create thumbnail images, schedule publishing for the week"

Company Research & Intelligence
"Research this company from its URL — team, funding, market position, competitors.
Generate an investment memo. Store in KG for future reference."

VC Investor Matching (real case study)
"Match this startup against our 2,000+ investor database. Score by sector focus,
stage preference, check size, and portfolio synergy. Compare with last round's outcomes."

3,000+ profiles processed. IC-ready reports. Prediction vs outcome learning — accuracy went from 62% to 89% over 12 runs with zero manual tuning.
Built-in Capabilities
Media Production: Video generation, image generation, text-to-speech, auto-captions, media assembly
AI Intelligence: Multi-model AI (Claude, GPT-4, Gemini, Mistral, DeepSeek, Moonshot, xAI), Knowledge Graph, feedback loops, scoring & analytics
Data & Integration: LinkedIn (search, enrich, post), email (send, personalize), web scraping, social publishing, CRM sync, document analysis, OCR
Available Tools
Workflows
| Tool | Description |
|------|-------------|
| list_workflows | List all workflows in the workspace |
| get_workflow | Get full workflow definition by ID |
| create_workflow | Create a new workflow from pipeline JSON |
| update_workflow | Update an existing workflow (top-level scalars; for context/metadata prefer update_workflow_context) |
| update_workflow_context | Workflow-level analog of update_step — three explicit verbs (updates / replace / unset) on context.* and metadata.* paths, returns diff + warnings |
| add_step | Add a step with automatic positioning and next-pointer rewiring |
| update_step | Deep-merge updates into a single step by ID |
| remove_step | Remove a step with automatic next-pointer rewiring |
| delete_workflow | Permanently delete a workflow |
| validate_workflow | Validate pipeline structure, returns errors per step |
| publish_workflow | Change workflow status (draft, live, paused, archived) |
| export_workflow | Export a workflow as portable JSON |
| import_workflow | Import a workflow from exported JSON |
Public Form Links
Public form links are the external intake surface for workflows with
context.executionInputConfig fields. Use them when people outside the
workspace need to submit a workflow form without signing in: inbound lead
forms, pitch deck submissions, referral forms, support intake, assessment
questionnaires, or any workflow whose first step is a manual/input trigger.
Do not use a public form link for internal child workflows. Child workflows
should use context.executionInputConfig.internal: true and be called from
another workflow with agentled.call-workflow.
| Tool | Description |
|------|-------------|
| list_public_form_links | List existing public form links for a workflow |
| create_public_form_link | Create and enable a public form link |
| update_public_form_link | Enable/disable a link or update limits, expiry, auto-share, and thank-you copy. To revoke external access, set enabled: false. |
Deletion is intentionally not exposed via the external API or MCP. To revoke a public form link, call update_public_form_link with enabled: false. Permanent deletion requires an authenticated workspace member acting through the UI — destructive ops on the form-link surface are not granted to the public API key.
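A revoke call could look like this sketch — the formLinkId parameter name is an assumption inferred from the public URL format and tool descriptions above:

```
update_public_form_link({
  workflowId: "<workflow-id>",
  formLinkId: "<form-link-id>",  // from list_public_form_links
  enabled: false                 // revokes external access; the link is kept, not deleted
})
```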
Typical agent flow:
1. get_workflow({ workflowId })
2. Confirm context.executionInputConfig exists and is not internal.
3. list_public_form_links({ workflowId })
4. If none exists, create_public_form_link({ workflowId, enabled: true })
5. Return the publicUrl to the user.

The public URL is /en/forms/{formLinkId}. On submit, Agentled validates the
form link, starts the workflow with the submitted input, records a
PublicFormSubmission, and increments submissionCount. Optional settings:
- enabled: disable without deleting the link.
- expiresAt: ISO datetime expiry.
- submissionLimit: maximum accepted submissions.
- autoShare: when true, the public form status page can show generated results after completion. Use this only when the workflow output is safe for the submitter to see.
- shareExpiresInDays: expiry for auto-shared result links.
- successMessage: custom thank-you message after submission.
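Putting the optional settings together, a create call might look like this sketch (all values illustrative):

```
create_public_form_link({
  workflowId: "<workflow-id>",
  enabled: true,
  expiresAt: "2025-12-31T23:59:59Z",  // stop accepting after this date
  submissionLimit: 100,               // cap total accepted submissions
  autoShare: true,                    // show generated results on the status page
  shareExpiresInDays: 7,
  successMessage: "Thanks — we'll be in touch within 48 hours."
})
```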
Internal-only Workflows
Mark a workflow as a child / sub-workflow that is only run via agentled.call-workflow from an orchestrator by setting context.executionInputConfig.internal: true. The UI then hides the Run button and replaces the manual run form with an info banner. Inputs are still validated and passed by orchestrators via executionInputData exactly as before — this is a UI guard, not a runtime restriction.
Use it for any workflow whose goal/description starts with "Internal sub-workflow", that ends in a return step, or that you only intend to invoke from another workflow.
{
"context": {
"executionInputConfig": {
"title": "Save Sourced Candidates",
"internal": true,
"fields": [{ "name": "candidates", "label": "Candidates", "type": "text", "required": true }]
}
}
}

Flip the flag via update_workflow_context — fetch first, merge locally, replace at the parent level (the merge-order trap from update_step applies here too — see docs/MCP_STEP_EDITING.md):
// 1. get_workflow → read context.executionInputConfig
// 2. local: { ...executionInputConfig, internal: true }
// 3.
{
"updates": { "context": { "executionInputConfig": {...full merged value...} } },
"replace": ["context.executionInputConfig"]
}

Editing existing workflows: merge model
update_step accepts three explicit operations on the same call. At least one must be non-empty.
- updates — partial step patch, deep-merged ONE LEVEL deep. Top-level scalars are replaced; nested objects (pipelineStepPrompt, stepInputData, etc.) get their direct keys merged with the stored value's keys. Keys nested two levels deep are overwritten as a unit, not merged.
- replace: string[] — dot-paths whose values from updates are assigned wholesale, skipping the deep-merge. Use this for dictionary-shaped fields where keys are user data (not config) — patching one inner key with updates alone silently wipes the others.
- unset: string[] — dot-paths to delete. Each path must currently exist on the step (validated against the original).
Read before editing dictionary fields. Before changing stepInputData.fieldUpdates, pipelineStepPrompt.responseStructure, knowledgeSync.fieldMapping, or any field where keys are user data: call get_step({ workflowId, stepId }) (~1KB), modify locally, send the full new object back via replace[]. This avoids the "patched one key, silently wiped the others" trap.
Diff in the response. Every update_step call returns diff: { addedPaths, changedPaths, removedPaths } and warnings[]. If the merge silently removed ≥6 fields without an explicit unset, a warning fires.
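The read-modify-replace pattern for a dictionary field, sketched end to end — step ID and key names here are illustrative:

```
// 1. get_step({ workflowId, stepId: "enrich" })
//    → stepInputData.fieldUpdates = { "email": "{{input.email}}", "name": "{{input.name}}" }
// 2. Merge locally: keep the existing keys, add the new one.
// 3. Send the full dict back, skipping the one-level merge:
update_step({
  workflowId: "<workflow-id>",
  stepId: "enrich",
  updates: {
    stepInputData: {
      fieldUpdates: {
        "email": "{{input.email}}",
        "name": "{{input.name}}",
        "company": "{{input.company_url}}"  // the new key
      }
    }
  },
  replace: ["stepInputData.fieldUpdates"]   // assign wholesale — no silent sibling wipe
})
```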
What to use where:
| Path / field | API | How to edit | Notes |
|---|---|---|---|
| name, goal, description, pipelineStepPrompt.template, creditCost | update_step | updates | Plain scalar; safe to send alone. |
| next, loopConfig, entryConditions (full block) | update_step | updates | Direct nested config; sending the new value wholesale is fine. |
| tools, integrations | update_step | updates | Arrays replace wholesale by design. To append, fetch with get_step, splice locally, send the full new array. |
| stepInputData.fieldUpdates | update_step | get_step → updates (full dict) + replace: ["stepInputData.fieldUpdates"] | Keys are user data; default one-level merge replaces this dict and can drop sibling mappings. |
| pipelineStepPrompt.responseStructure | update_step | get_step → updates + replace: ["pipelineStepPrompt.responseStructure"] | Output-shape dictionary; treat as user data. |
| knowledgeSync.fieldMapping | update_step | get_step → updates + replace: ["knowledgeSync.fieldMapping"] | Source→target dict; same trap as fieldUpdates. |
| renderer.config (when preserving sibling keys matters) | update_step | updates (full renderer.config) + replace: ["renderer.config"] | ⚠ replace: ["renderer.config.layout"] does NOT protect renderer.config's siblings — one-level deep-merge runs first on updates.renderer. Replace at the parent level. |
| entryConditions.criteria (when preserving the rest of entryConditions) | update_step | updates: { entryConditions: {...full block...} } | Send the full entryConditions block; one-level merge already does the right thing for direct children. |
| Removing a step input or stale field | update_step | unset: ["stepInputData.oldKey"] | Cleanest way to remove. Path must exist on the original. |
| context.inputPages, context.outputPages, context.executionInputConfig | update_workflow_context | Three explicit verbs (updates / replace / unset) on workflow-relative paths. Compatibility: { contextKey, value } still accepted for wholesale per-key replacement. | Workflow-level, not step-level. update_step cannot reach context.* and vice versa. |
| metadata | update_workflow_context | Same three verbs on metadata.* paths | Workflow-level. Metadata bypasses the draft snapshot — even on live workflows it writes directly to the Pipeline row, immediately. |
Type changes. step.type is technically mutable but stale type-specific fields (pipelineStepPrompt, app, tools, orchestratorConfig) persist unless you unset them. For clean conversions, prefer remove_step + add_step.
Live workflows. Edits are routed to a draft snapshot. Response includes editingDraft: true. Inspect via get_draft, ship via promote_draft, throw away via discard_draft. For high-stakes edits, create_snapshot first as a manual checkpoint.
Draft staleness. When a draft exists, every update_step and get_step response includes a draft summary with exists, draftCreatedAt, liveUpdatedAt, stale, modifiedStepIds, and modifiedFields. If draft.stale === true, the live workflow advanced after the draft was created — promoting will land the draft's older values for fields you didn't touch. update_step also emits a staleness warning. Recovery: discard_draft and re-apply.
⚠ discard_draft only reverts pending context (and step) changes — NOT metadata. Metadata writes via update_workflow_context bypass the draft and apply immediately to the live Pipeline row. If you need a single rollback point covering metadata too, create_snapshot before the edit. See docs/MCP_STEP_EDITING.md for the full atomicity contract.
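A checkpointed metadata edit under the contract above, as a sketch — the snapshot parameter names and the updates shape are assumptions:

```
// 1. Manual rollback point — the only mechanism that also covers metadata.
create_snapshot({ workflowId: "<workflow-id>", name: "before-roi-update" })
// 2. Metadata bypasses the draft and lands on the live Pipeline row immediately.
update_workflow_context({
  workflowId: "<workflow-id>",
  updates: { "metadata": { "roi": { "timeSavedMinutesPerRun": 45 } } }
})
// 3. If the edit was wrong, restore the snapshot — discard_draft will NOT revert metadata.
```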
Never send a full steps[] array via update_workflow. Use update_step, add_step, remove_step instead.
For the deep reference (StepMergeError codes, dot-path validation rules, full diff semantics) see docs/MCP_STEP_EDITING.md.
Drafts & Snapshots
| Tool | Description |
|------|-------------|
| get_draft | Get the current draft version of a workflow |
| promote_draft | Promote a draft to the live version |
| discard_draft | Discard the current draft |
| create_snapshot | Create a manual config snapshot |
| delete_snapshot | Delete a specific config snapshot |
| list_snapshots | List version snapshots for a workflow |
| get_snapshot_content | Read a snapshot's full config (steps, context, etc.) without restoring it |
| restore_snapshot | Restore a workflow to a previous snapshot |
Executions
| Tool | Description |
|------|-------------|
| start_workflow | Start a workflow execution with input. Pass useMocks: false to force a real (credit-consuming) run that ignores per-step mock data; defaults to honoring the workflow's configured mocks. |
| list_executions | List executions for a workflow (paginated via nextToken) |
| get_execution | Get execution details with step results |
| list_timelines | List step execution records (timelines) for an execution (paginated via nextToken) |
| get_timeline | Get a single timeline by ID with full step output |
| stop_execution | Stop a running execution |
| retry_execution | Retry a failed step — auto-detects the most recent failure if no timeline ID provided |
| rerun | Rerun or retry any step by timelineId — works for failed AND succeeded steps, disambiguates loop iterations |
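For instance, forcing a real run that ignores configured mocks — the input shape is illustrative and depends on the workflow's executionInputConfig fields:

```
start_workflow({
  workflowId: "<workflow-id>",
  input: { "company_url": "https://example.com" },  // must match the workflow's input fields
  useMocks: false  // force a real, credit-consuming run
})
```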
Apps & Testing
| Tool | Description |
|------|-------------|
| list_apps | List available apps and integrations |
| get_app_actions | Get action schemas for an app |
| test_app_action | Test an app action without creating a workflow |
| test_ai_action | Test an AI prompt without creating a workflow |
| test_code_action | Test JavaScript code in the same sandboxed VM as production |
| get_step_schema | Get allowed PipelineStep fields grouped by category |
AI step types: aiAction vs aiActionWithTools
Pick the right type — validate_workflow will reject the wrong one:
| You need… | Use |
|-----------|-----|
| Reason over inputs already present in the prompt variables | aiAction (single LLM call, no tool loop) |
| Live web search, workspace memory recall/write, knowledge-graph lookup | aiActionWithTools with the matching builtinType |
| The AI to decide at runtime what inputs to pass to an app action | aiActionWithTools with an appActionConfig tool |
aiActionWithTools requires at least one tool — placed under step.tools or step.agent.tools (both are merged at runtime). If you omit tools from both locations, validate_workflow returns a blocker AI_STEP_TOOLS_REQUIRED. If the prompt says "search the web" / "recall memory" / "knowledge graph" without the matching tool attached, you get a warning AI_STEP_TOOL_PROMPT_MISMATCH.
Valid builtinType values: web_search, file_search, code_interpreter, fetch_website_content, kg_search, kg_traverse, kg_nodes, kg_write, workspace_memory.
// aiActionWithTools example
{
"id": "research",
"type": "aiActionWithTools",
"name": "Research Company",
"tools": [
{ "type": "builtin", "builtinType": "web_search", "name": "Web Search" }
],
"pipelineStepPrompt": {
"template": "Search the web for the founder of {{input.company}} and return their name.",
"responseStructure": { "firstName": "string", "lastName": "string" }
},
"creditCost": 10,
"next": { "stepId": "find-email" }
}

Knowledge & Data
| Tool | Description |
|------|-------------|
| get_workspace | Get workspace info and settings |
| get_workspace_company_profile | Get the editable workspace company profile and company knowledge text |
| update_workspace_company_profile | Update top-level company profile fields like name, URLs, logo, industry, size, and additional information |
| list_knowledge_lists | List knowledge lists in the workspace |
| get_knowledge_rows | Get rows from a knowledge list (paginated via nextToken, max 200) |
| get_knowledge_rows_by_ids | Fetch specific rows by ID (max 200) — use after query_kg_edges |
| get_knowledge_text | Get text content from a knowledge entry |
| create_knowledge_list | Create a new knowledge list with a typed schema (idempotent on key collision) |
| update_knowledge_list_schema | Add or remove fields on an existing list schema |
| delete_knowledge_list | Permanently delete a list and all its rows |
| upsert_knowledge_rows | Insert or update rows in a list (max 500/call, per-row error reporting) |
| delete_knowledge_rows | Delete rows by ID |
| upsert_knowledge_text | Create or update a text knowledge entry |
| delete_knowledge_text | Delete a text knowledge entry by key |
| query_kg_edges | Query knowledge graph edges |
| get_scoring_history | Get scoring history for an entity |
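A typical write path, sketched from the tool descriptions above — the schema and row parameter names are assumptions, not the documented signatures:

```
// 1. Create a typed list — idempotent if the key already exists.
create_knowledge_list({
  key: "incoming-leads",
  name: "Incoming Leads",
  fields: [
    { "name": "email", "type": "text" },
    { "name": "score", "type": "number" }
  ]
})
// 2. Upsert rows — max 500 per call, with per-row error reporting.
upsert_knowledge_rows({
  listKey: "incoming-leads",
  rows: [{ "email": "[email protected]", "score": 87 }]
})
```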
Branding (Whitelabel)
| Tool | Description |
|------|-------------|
| get_branding | Get the workspace's whitelabel branding config (displayName, logo, colors, favicon, badge) |
| update_branding | Update branding — set displayName, logoUrl, tagline, primaryColor, primaryColorDark, faviconUrl, hideBadge |
Agents
First-class workspace agents with identity, instructions, tools, config files, and assigned workflows. All agents are conversational (chat-only). For scheduled/autonomous work, attach routines via create_routine. An agent created entirely via MCP renders identically to one built in the Agent Wizard.
| Tool | Description |
|------|-------------|
| list_agents | List agents in the workspace (filter by status: active, paused, draft) |
| get_agent | Get full agent config — instructions, files, workflows, attached routines |
| create_agent | Create an agent. Accepts agentType presets (personal-assistant, competitive-researcher, social-media-marketer, customer-support, content-marketer, lead-qualifier, deal-sourcer, custom), enabledApps, assignedWorkflowIds, linkedFileIds, configFiles (SOUL.md/TOOLS.md), avatar_icon_name, avatar_color, chatModel, activate: true |
| update_agent | Partial update — same fields as create_agent; updates.slug renames the agent email slug, moves the AgentEntity id to {slug}@{workspace}, and rebinds routines/file links/channel sessions/chat sessions where available |
| activate_agent | Activate an agent (draft/paused → active). Attached routines begin running on schedule |
| pause_agent | Pause an active agent. Attached routines stop until resumed |
| manage_agent_workflows | Add/remove/set the workflows assigned to an agent without rewriting the full config |
| delete_agent | Permanently delete an agent and all its files |
| chat_with_agent | Send a message to a specific agent. Multi-turn via session_id |
Agent Files
| Tool | Description |
|------|-------------|
| list_agent_files | List files attached to an agent (knowledge, context, reference docs) |
| get_agent_file | Get the content of a specific agent file |
| upload_agent_file | Upload a file (max 400KB text/markdown) to an agent |
| delete_agent_file | Delete a file from an agent |
Routines
Routines are scheduled prompts attached to an agent — the agent evaluates the prompt on a set interval and can trigger workflows or send notifications.
Example — add a daily deal-sourcer routine to an existing agent:
# Step 1: create the agent
create_agent({
name: "Daily Deal Sourcer",
agentType: "deal-sourcer",
enabledApps: ["agentled", "kg", "web-scraping"],
assignedWorkflowIds: ["<opportunity-scoring-workflow-id>"],
activate: true
})
# Step 2: attach a routine
create_routine({
agent_id: "<agent-id>",
name: "Daily Sourcing Run",
prompt: "Find 5 new SaaS startups that match our deal criteria and trigger the scoring workflow for each.",
interval: "daily"
})

| Tool | Description |
|------|-------------|
| list_routines | List all routines for an agent |
| create_routine | Create a routine (name, prompt, interval) |
| update_routine | Update routine fields; recalculates nextRunAt if interval changes |
| pause_routine | Pause a routine |
| resume_routine | Resume a paused routine |
| delete_routine | Permanently delete a routine |
Interval values: weekday-morning, weekday-evening, weekly-monday, weekly-friday-evening, daily, 2h, 6h, 48h.
Proactive Agents (low-level runtime)
Prefer the high-level create_agent / update_agent tools. These direct-CRUD tools are for advanced runtime inspection.
| Tool | Description |
|------|-------------|
| list_proactive_agents | List ProactiveAgent records (the runtime primitive behind agents) |
| get_proactive_agent | Get full proactive agent config |
| create_proactive_agent / update_proactive_agent / delete_proactive_agent | Direct CRUD |
| trigger_proactive_agent | Force an immediate evaluation |
Channels (Email, Slack, WhatsApp, Signal)
Channels route inbound messages into the agent chat runtime. Each channel has a defaultAgentId that decides which agent handles the conversation. Replies are sent back through the originating channel.
| Tool | Description |
|------|-------------|
| list_channels | List configured channels with their defaultAgentId, enabled state, and non-secret config (secrets redacted) |
| set_channel_default_agent | Assign the agent that handles a channel's inbound conversations |
| configure_channel | Update non-secret channel config — enabled, defaultAgentId, allowedSenders (email), defaultChannelId (slack) |
| set_channel_defaults | Update workspace-wide defaults: maxSessionsPerDay, sessionTimeoutMinutes, toolMode |
Secret credentials (Slack bot tokens, signing secrets, WhatsApp access tokens, Signal webhook secrets) are NEVER readable or writable via the external API. Connect those via Settings → Channels in the UI — OAuth flows store them encrypted at rest.
Conversational Agent
| Tool | Description |
|------|-------------|
| chat | Send a message to the AgentLed AI agent. Build workflows through natural language — no JSON required. Supports multi-turn conversations via session_id. |
Chat Tool — Usage & Examples
The chat tool is a conversational AI agent that can reason, plan, and build workflows through dialogue. Think of it as the difference between gh api (raw) and gh copilot (intelligent).
Parameters:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| message | string | Yes | The message to send to the AI agent |
| session_id | string | No | Session ID from a previous response, for multi-turn conversations |
Response format:
{
"response": "The agent's reply — may include workflow suggestions, explanations, or confirmation of actions taken",
"sessionId": "mcp-chat-ws123-1711929600000"
}

Multi-turn conversation:
# Turn 1: Describe what you want
chat("Build me a workflow that enriches LinkedIn companies and scores them by ICP fit")
# → Agent responds with a plan and creates a draft workflow
# → Response includes sessionId: "mcp-chat-ws123-1711..."
# Turn 2: Iterate on the design
chat("Add an email finding step using the company domain", session_id: "mcp-chat-ws123-1711...")
# → Agent modifies the workflow, maintaining conversation context
# Turn 3: Finalize
chat("Looks good, publish it as live", session_id: "mcp-chat-ws123-1711...")
# → Agent publishes the workflow

When to use chat vs structured tools:
| Use chat when... | Use structured tools when... |
|---------------------|------------------------------|
| You have a high-level goal | You know the exact pipeline JSON |
| You want AI recommendations | You need precise control |
| You're exploring capabilities | You're automating a known pattern |
| You want to iterate through dialogue | You're doing bulk operations |
Notes:
- 5-minute timeout — break complex requests into smaller steps if needed
- Responses are non-streaming (collected then returned) for MCP client compatibility
- The agent has full access to planning tools, workflow builder, and workspace context
- Error responses include actionable messages (e.g., timeout → "try breaking it into smaller steps")
Intent Router
| Tool | Description |
|------|-------------|
| do | Natural language intent router — describe what you want and it auto-selects and executes the right tool |
Help & Feedback — Talk to the Agentled Team
For AI agents — pick the right channel:
| What you hit | Where it goes |
|-------------|---------------|
| Workflow not behaving as expected, app/integration question, platform/billing question, missing feature, anything about how Agentled works | Call submit_feedback_to_agentled (below). Routes straight to the team. |
| Bug in the CLI itself (agentled command crashes, writes wrong data to .agentled/, flag doesn't work) — a reproducible CLI code defect | File a GitHub issue at https://github.com/agentled/mcp-server/issues. Include CLI version + reproduction steps. |
| Anything else, or you're not sure | submit_feedback_to_agentled — the team will reroute if needed. |
Do not search npm metadata, scrape GitHub, or shell out to gh to "find" a bug tracker — the routes above are the only ones. The bugs field in the package points at the human contact page (https://www.agentled.ai/en/contact-us) on purpose.
For humans: email [email protected] or visit https://www.agentled.ai/en/contact-us.
| Tool | Description |
|------|-------------|
| submit_feedback_to_agentled | Ask a question, file a bug, request a feature, or escalate an issue. Types: ask, bug, feature_request, escalation. Provide userEmail if you want a reply. |
Coming from n8n?
Import existing n8n workflows and make them AI-native:
| Tool | Description |
|------|-------------|
| preview_n8n_import | Preview an n8n workflow import (dry run) |
| import_n8n_workflow | Import an n8n workflow into Agentled |
Looking Up Entity-Scoped Data
When you need all records related to a specific entity, use the two-tool chain instead of paginating get_knowledge_rows:
Example 1 — all deals scored by an investor:
1. query_kg_edges({ entityName: "Investor Name", relationshipType: "SCORED" })
→ returns edges with targetNodeIds
2. get_knowledge_rows_by_ids({ rowIds: <targetNodeIds from step 1> })
→ returns full row data for each matched deal

Example 2 — all leads sourced from a campaign:
1. query_kg_edges({ entityName: "Campaign Name", relationshipType: "SOURCED" })
→ returns edges with targetNodeIds
2. get_knowledge_rows_by_ids({ rowIds: <targetNodeIds from step 1> })
→ returns full contact/lead rows

Why this matters: get_knowledge_rows is limited to 200 rows per call. At 3k rows that means 15 round trips; at 10k it means 50. The KG-edge path is O(edges for that entity) — independent of total list size — so it stays fast regardless of how large the list grows.
Node ID convention: source_node_id and target_node_id values from query_kg_edges are knowledge row IDs. Rows outside the authenticated workspace are silently excluded.
For Agencies: White-Label Ready
Build workflows once, deploy to multiple clients under your own brand. Configure branding directly from the MCP server:
"Set my workspace branding: displayName 'Acme AI', primaryColor '#6366f1', tagline 'Powered by Acme'"

Use get_branding and update_branding to manage displayName, logo, colors, favicon, tagline, and badge visibility. Client portal appearance updates instantly.
Persistent Memory — Examples
Memories let workflows learn across executions. Store what worked, recall it next time.
Store a fact after enrichment
"Store a memory: key 'icp_criteria', value { industry: 'fintech', minEmployees: 50, region: 'EU' },
category 'preference', scope 'workspace'"

Recall before scoring
"Recall memory 'icp_criteria' at workspace scope — use it to score this batch of leads"

Search for past outcomes
"Search memories for 'conversion rate' in the 'outcome' category"

Track a running metric
"Store memory: key 'total_leads_processed', value 43, merge 'increment', scope 'workspace'"

Each subsequent call with merge: 'increment' adds to the existing value — no read-modify-write needed.
Proactive Agents — Examples
Proactive agents are background monitors that autonomously trigger workflows when conditions are met.
Create an agent that watches for new leads
"Create a proactive agent named 'New Lead Watcher' that checks the 'incoming-leads' knowledge list
every 5 minutes. When new rows appear, start the 'lead-enrichment' workflow with the new rows as input.
Limit to 10 actions per day."

Config structure:
{
"monitorInterval": "5m",
"evaluation": { "mode": "rules" },
"monitors": [{
"type": "kg_list",
"listKey": "incoming-leads",
"condition": "new_rows"
}],
"actions": [{
"type": "start_workflow",
"workflowId": "wf_abc123",
"inputMapping": { "leads": "{{monitor.newRows}}" }
}],
"maxActionsPerDay": 10,
"cooldownMs": 300000
}

Create an AI-evaluated agent
"Create a proactive agent that checks execution history every hour.
Use AI evaluation to decide if the failure rate is abnormal, then notify me via email."

{
"monitorInterval": "1h",
"evaluation": { "mode": "ai", "modelTier": "mini", "maxCreditsPerDay": 50 },
"monitors": [{
"type": "execution_history",
"condition": "consecutive_failures",
"threshold": 3
}],
"actions": [{
"type": "notify",
"channel": "email",
"message": "{{monitor.summary}}"
}],
"maxActionsPerDay": 5
}

Pause and resume
"Pause proactive agent pa_xyz789"
"Resume proactive agent pa_xyz789"

Works With
- Claude Code (Anthropic)
- Codex (OpenAI)
- Cursor
- Windsurf
- Any MCP-compatible client
Building from Source
git clone https://github.com/Agentled/mcp-server.git
cd mcp-server
npm install
npm run build

License
MIT
