n8n-nodes-miniagent
Lightweight AI Agent node for n8n - zero dependencies, built-in memory, RAG support, and multi-LLM support.
Features
- Zero Dependencies: No LangChain or external SDKs - just pure TypeScript with native fetch
- Multi-LLM Support: Works with Gemini, Claude (Anthropic), and any OpenAI-compatible API
- Built-in Memory: Conversation history that persists across executions
- Tool Calls Saved: Unlike n8n's AI Agent, this saves tool calls in memory (fixes issue #14361)
- ReAct Pattern: Implements Reasoning + Acting for intelligent task completion
- Fully Serverless: No external servers or databases required
- n8n Cloud Ready: Designed to pass n8n Cloud approval
Installation
In n8n Cloud
Search for "Mini Agent" in the community nodes section.
Self-hosted
```
npm install n8n-nodes-miniagent
```
Or install via n8n Settings > Community Nodes.
Supported LLM Providers
| Provider | Models | Notes |
|----------|--------|-------|
| Gemini | gemini-pro, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash | Google AI Studio API |
| Anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3.5-sonnet | Claude API |
| OpenAI Compatible | gpt-4, gpt-4o, gpt-3.5-turbo, llama, mistral, etc. | Works with OpenAI, OpenRouter, Groq, Ollama, LM Studio |
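Since the node talks to these providers with nothing but native fetch, a request to any OpenAI-compatible endpoint is just a shaped JSON POST. A minimal sketch of building such a request (the base URL, model name, and key below are placeholders, not values the node itself uses):

```typescript
// Sketch: shaping a zero-dependency request to an OpenAI-compatible
// chat endpoint. Base URL, API key, and model are illustrative.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[],
  temperature = 0.7,
  maxTokens = 4096,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      // max_tokens is the snake_case field the OpenAI-style API expects
      body: JSON.stringify({ model, messages, temperature, max_tokens: maxTokens }),
    },
  };
}

// Usage: const { url, init } = buildChatRequest("https://api.openai.com", key, "gpt-4o", msgs);
// const res = await fetch(url, init);
```

The same shape works for OpenRouter, Groq, Ollama, and LM Studio by swapping the base URL.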
Operations
Chat
Send a message and get a response. No memory - each call is independent.
Chat with Memory
Chat with conversation history preserved. Great for multi-turn conversations.
Clear Memory
Clear the conversation history for a specific session.
Get Memory
Retrieve the current conversation history for debugging.
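The four operations boil down to appending to, reading, and deleting a session-keyed history. A simplified sketch of that behavior (illustrative only, not the node's internal code):

```typescript
// Sketch of session-keyed memory behind the operations above.
type Message = { role: "user" | "assistant" | "tool"; content: string };

class SessionMemory {
  private sessions = new Map<string, Message[]>();

  // Chat with Memory: append a message to the session's history
  append(sessionId: string, msg: Message): void {
    const history = this.sessions.get(sessionId) ?? [];
    history.push(msg);
    this.sessions.set(sessionId, history);
  }

  // Get Memory: retrieve the session's history for debugging
  get(sessionId: string): Message[] {
    return this.sessions.get(sessionId) ?? [];
  }

  // Clear Memory: drop the session's history entirely
  clear(sessionId: string): void {
    this.sessions.delete(sessionId);
  }
}
```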
Tools
Tools allow the agent to perform actions. Define them as a JSON array:
Code Tool Example
```json
[
  {
    "name": "calculate",
    "description": "Evaluate a mathematical expression",
    "parameters": {
      "type": "object",
      "properties": {
        "expression": {
          "type": "string",
          "description": "The math expression to evaluate"
        }
      },
      "required": ["expression"]
    },
    "code": "return eval(expression)"
  }
]
```
HTTP Tool Example
```json
[
  {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "City name"
        }
      },
      "required": ["city"]
    },
    "http": {
      "url": "https://api.weather.example/current",
      "method": "GET",
      "queryParams": {
        "q": "{{city}}"
      }
    }
  }
]
```
Memory Types
Buffer (Volatile)
- Stored in memory
- Fast access
- Lost when n8n restarts
- Good for: Testing, short-lived sessions
Workflow Static Data (Persistent)
- Stored in n8n's workflow data
- Survives n8n restarts
- Good for: Production use, important conversations
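Persistence in an n8n node is typically built on n8n's `getWorkflowStaticData` helper, which survives restarts. A simplified sketch of how session histories might be kept there (the `miniAgentMemory` key and the stripped-down context interface are assumptions for illustration, not the node's actual internals):

```typescript
// Sketch: persistent session memory on top of n8n's workflow static data.
// The "miniAgentMemory" key and simplified Context are illustrative.
type Message = { role: string; content: string };
type StaticData = { miniAgentMemory?: Record<string, Message[]> };

// Stand-in for the relevant slice of n8n's execution context.
interface Context {
  getWorkflowStaticData(scope: "global"): StaticData;
}

function saveMessage(ctx: Context, sessionId: string, msg: Message): void {
  const data = ctx.getWorkflowStaticData("global");
  data.miniAgentMemory ??= {};
  // Append to this session's history, creating it on first use
  (data.miniAgentMemory[sessionId] ??= []).push(msg);
}

function loadHistory(ctx: Context, sessionId: string): Message[] {
  return ctx.getWorkflowStaticData("global").miniAgentMemory?.[sessionId] ?? [];
}
```

Because n8n persists the static data object with the workflow, the history written in one execution is visible to the next.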
Options
| Option | Default | Description |
|--------|---------|-------------|
| Temperature | 0.7 | Controls randomness (0-2) |
| Max Tokens | 4096 | Maximum response length |
| Max Iterations | 10 | Maximum tool-use loops |
| Max Memory Messages | 50 | Messages to keep in history |
| Include Tool Calls | true | Save tool calls in memory |
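Max Memory Messages caps history by keeping only the most recent messages, which keeps prompts bounded as conversations grow. A minimal sketch of that trimming rule:

```typescript
// Sketch: enforce a "Max Memory Messages" cap by keeping only
// the newest maxMessages entries of the history.
type Message = { role: string; content: string };

function trimHistory(history: Message[], maxMessages: number): Message[] {
  // slice(-n) keeps the last n elements; shorter histories pass through
  return history.length <= maxMessages ? history : history.slice(-maxMessages);
}
```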
Why Mini Agent?
Problems with n8n's AI Agent (LangChain-based):
- Tool calls not saved in memory - Agent stops using tools after a few turns
- Heavy dependencies - LangChain adds complexity and version conflicts
- Memory requires external nodes - No built-in persistent storage
- Difficult to customize - Tied to LangChain's abstractions
Mini Agent solves these:
- All messages saved - Including tool calls and results
- Zero dependencies - Just TypeScript and fetch
- Built-in memory - Buffer and persistent storage included
- Simple architecture - Easy to understand and extend
Example Workflow
```
[Webhook] → [Mini Agent: Chat with Memory] → [Respond to Webhook]
```
The agent will:
- Load conversation history for the session
- Process the user's message
- Use tools if needed (with proper memory of tool usage)
- Save the updated conversation
- Return the response
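The steps above can be sketched as a ReAct-style loop with a pluggable LLM callback. Everything here (names, message shapes) is illustrative, not the node's actual internals; the point is that tool calls and tool results are written back into the history, so later turns remember them:

```typescript
// Sketch of the agent loop described above: call the LLM, run any
// requested tool, save both the call and its result, repeat until
// the LLM answers directly or Max Iterations is reached.
type Message = { role: "user" | "assistant" | "tool"; content: string };
type ToolCall = { tool: string; args: Record<string, unknown> } | null;
type Llm = (history: Message[]) => { text: string; toolCall: ToolCall };
type Tools = Record<string, (args: Record<string, unknown>) => string>;

function runAgent(
  history: Message[],
  userMessage: string,
  llm: Llm,
  tools: Tools,
  maxIterations = 10,
): { reply: string; history: Message[] } {
  history.push({ role: "user", content: userMessage });
  for (let i = 0; i < maxIterations; i++) {
    const step = llm(history);
    if (!step.toolCall) {
      // No tool requested: this is the final answer. Save and return it.
      history.push({ role: "assistant", content: step.text });
      return { reply: step.text, history };
    }
    // Tool requested: execute it and record both the call and the result,
    // so the agent keeps using tools in later turns.
    const result = tools[step.toolCall.tool](step.toolCall.args);
    history.push({ role: "assistant", content: `tool_call: ${step.toolCall.tool}` });
    history.push({ role: "tool", content: result });
  }
  return { reply: "Max iterations reached", history };
}
```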
License
MIT
Author
Mauricio Perera ([email protected])
