n8n-nodes-better-ayla-agent
v1.7.8
Better Ayla Agent for n8n
An improved AI Agent node for n8n that provides better memory management, modern AI SDK integration and a webhook option to push intermediate messages as they happen.
Features
- Conversation Memory that includes Tools – every user message, tool call and tool result is stored.
- Modern AI SDK Providers – wraps OpenAI (including GPT-5), Gemini and Anthropic through stable Vercel AI SDK v4.
- Live Streaming Updates – Intermediate Webhook URL lets you push each agent step in real time
Installation
```shell
npm install n8n-nodes-better-ayla-agent
```

Compatibility
This node is designed to be a drop-in replacement for the existing AI Agent node while providing enhanced functionality:
- ✅ Works with existing Language Model nodes
- ✅ Works with existing Memory nodes
- ✅ Works with existing Tool nodes
- ✅ Works with existing Output Parser nodes
- ✅ Maintains same input/output interface
Key Improvements Over Standard Agent
1. Memory Management
- Problem: Original agent doesn't save tool calls to memory
- Solution: Every interaction (human messages, AI responses, tool calls, tool results) is properly saved
2. Modern AI SDK
- Problem: The standard agent relies on deprecated LangChain patterns
- Solution: Built on Vercel AI SDK for better performance and reliability
3. Simplified Configuration
- Problem: Complex agent type selection with lots of conditional logic
- Solution: Single, powerful agent that adapts to your needs
4. Two-Stage Generation with Utility Model (Cost Optimization)
- Problem: Powerful AI models can be expensive for every intermediate step, especially with frequent tool calls.
- Solution: This node supports an optional "Utility Model" for cost-effective operations. When connected, the cheaper, faster utility model handles all intermediate reasoning steps, tool calls, and tool-result processing, while the more powerful "Main Model" is used exclusively for generating the final, high-quality, user-facing response. This significantly reduces overall token costs while maintaining response quality.
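The two-stage routing described above can be sketched as plain selection logic (the function and model names here are illustrative, not the node's actual internals): any step with pending tool work goes to the utility model when one is connected, and only the final user-facing generation goes to the main model.

```typescript
type Phase = "tool_step" | "final_response";

interface ModelConfig {
  mainModel: string;     // the expensive, high-quality model
  utilityModel?: string; // optional cheaper model for intermediate steps
}

// Decide which model handles a given phase of the agent loop.
// If no utility model is connected, the main model does everything,
// which matches the single-model behaviour.
function pickModel(phase: Phase, config: ModelConfig): string {
  if (phase === "tool_step" && config.utilityModel) {
    return config.utilityModel;
  }
  return config.mainModel;
}

const config: ModelConfig = { mainModel: "gpt-5", utilityModel: "gpt-5-mini" };
console.log(pickModel("tool_step", config));      // "gpt-5-mini"
console.log(pickModel("final_response", config)); // "gpt-5"
```

The key point is that the final response never degrades: the main model always sees the full conversation, including the tool results the utility model produced.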
Usage
Basic Setup
- Add the node to your workflow
- Connect a Language Model (OpenAI, Anthropic, etc.) as the Main Model.
- Optionally connect a second Language Model as the Utility Model for cost-optimized two-stage generation.
- Optionally connect:
- Memory node for conversation persistence
- Tool nodes for enhanced capabilities
- Output Parser for structured responses
Input Sources
Choose how to provide the user prompt:
- Connected Chat Trigger Node: Automatically uses `chatInput` from chat triggers
- Define Below: Use expressions or static text
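For the Define Below option, a typical n8n expression might look like the following (the field name depends on what your trigger node actually outputs):

```
{{ $json.chatInput }}
```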
Configuration Options
- System Message: Define the agent's behavior and personality
- Max Tool Calls: Limit the number of tool interaction rounds
- Intermediate Webhook URL: Send each partial reply/tool-call to an external workflow in real-time
- Verbose Logs: Enable/disable detailed console logging
- Temperature: Control response creativity (0.0 = deterministic, 1.0 = creative)
- Max Tokens: Set response length limits
Example Workflow
```
Chat Trigger → Better Ayla Agent → Response
                ↗ OpenAI Model
                ↗ Buffer Memory
                ↗ Calculator Tool
                ↗ Web Search Tool
```

Technical Details
Tool Call Memory
Unlike the original agent, this node ensures that all tool interactions are preserved in memory:
```
User: "What's 25 * 47 and then search for that number"
Assistant: [calls calculator tool]
Tool: "1175"
Assistant: [calls web search tool with "1175"]
Tool: [search results]
Assistant: "The result is 1175. Here's what I found about it..."
```

All of these interactions are saved to memory for future reference.
AI SDK Integration
Uses modern patterns from Vercel AI SDK:
- Built-in tool calling support
- Automatic conversation management
- Better error handling
- Real-time step streaming via `onStepFinish`
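As a sketch of how the webhook push could be wired into an `onStepFinish`-style callback (the payload shape and field names below are assumptions for illustration, not the node's documented format), each finished step can be serialized and POSTed to the configured Intermediate Webhook URL:

```typescript
// Minimal shape of a step as exposed by AI SDK-style callbacks
// (fields shown are illustrative; consult the actual SDK types).
interface AgentStep {
  text?: string;
  toolCalls?: { toolName: string; args: unknown }[];
  finishReason?: string;
}

// Serialize a step into the JSON body sent to the webhook.
function buildStepPayload(step: AgentStep): string {
  return JSON.stringify({
    type: step.toolCalls?.length ? "tool_call" : "message",
    text: step.text ?? "",
    toolCalls: step.toolCalls ?? [],
  });
}

// Forward one step to the Intermediate Webhook URL. The sender is
// injectable so the logic can be exercised without a network; in
// production it defaults to a plain fetch() POST.
async function pushStep(
  url: string,
  step: AgentStep,
  send: (url: string, body: string) => Promise<void> = async (u, b) => {
    await fetch(u, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: b,
    });
  },
): Promise<void> {
  await send(url, buildStepPayload(step));
}
```

Because the callback fires per step rather than per token, the receiving workflow sees one message per tool call or intermediate reply, which matches the step-level streaming noted in the limitations below.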
Known Limitations
| Limitation | Work-around |
|------------|------------|
| n8n UI does not highlight the attached model or tool nodes because only the Agent executes code | Rely on the Agent output or streamed webhook messages for visibility |
| Tool nodes without an explicit Zod/JSON schema (e.g. raw HTTP Request) may receive incorrect argument keys | Wrap such tools in a Custom Code Tool and define a schema, or add few-shot examples |
| Streaming is step-level, not token-level; the n8n node outputs only when the Agent finishes | Use the Intermediate Webhook to push interim messages to a Chat, Slack, etc. |
| The node's dependencies must be available next to `~/.n8n/custom/BetterAylaAgent.node.js` | Run `npm run deploy-local` (copies `package.json` and installs runtime deps) |
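The schema work-around from the table can be sketched without any extra dependency: describe the tool's arguments as a plain JSON Schema object and reject calls whose keys don't match. The `city` parameter and the weather tool itself are hypothetical, purely for illustration.

```typescript
// A JSON Schema fragment describing the tool's expected arguments.
const weatherToolSchema = {
  type: "object",
  properties: {
    city: { type: "string", description: "City name to look up" },
  },
  required: ["city"],
  additionalProperties: false,
} as const;

// Minimal check that an argument object has all required keys and
// no extras — enough to catch the "incorrect argument keys" failure
// mode described in the table above.
function validateArgs(
  args: Record<string, unknown>,
  schema: { required: readonly string[]; properties: Record<string, unknown> },
): boolean {
  const allowed = new Set(Object.keys(schema.properties));
  return (
    schema.required.every((k) => k in args) &&
    Object.keys(args).every((k) => allowed.has(k))
  );
}

console.log(validateArgs({ city: "Berlin" }, weatherToolSchema));     // true
console.log(validateArgs({ location: "Berlin" }, weatherToolSchema)); // false
```

Wrapping the raw HTTP Request in a Custom Code Tool that performs a check like this gives the model an explicit contract to follow instead of guessing argument names.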
Development
Building from Source
```shell
git clone <repository>
cd n8n-nodes-better-ayla-agent
npm install
npm run build
```

Testing

```shell
npm test
```

Publishing

```shell
npm run package
npm publish
```

Troubleshooting
Common Issues
- "No language model connected": Ensure you've connected a language model node
- Tool calls not working: Verify your tools are properly configured and connected
- Memory not persisting: Check that your memory node is correctly connected
Debug Information
The node outputs additional debug information:
- `usage`: Token usage statistics
- `finishReason`: Why the generation stopped
- `toolCalls`: List of tools that were called
- `toolResults`: Results from tool executions
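A debug block on the node's output might look like the following (field values are illustrative, not actual output):

```json
{
  "usage": { "promptTokens": 512, "completionTokens": 128, "totalTokens": 640 },
  "finishReason": "stop",
  "toolCalls": [{ "toolName": "calculator", "args": { "expression": "25 * 47" } }],
  "toolResults": [{ "toolName": "calculator", "result": "1175" }]
}
```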
Contributing
We welcome contributions! Please see our contributing guidelines for more information.
License
MIT License - see LICENSE file for details.
Support
- Create an issue for bugs or feature requests
- Join the n8n community for general support
- Check the documentation for detailed usage examples
