@neurarank/node-my-agent (v0.1.0)
# My Agent Node
Advanced AI assistant with knowledge management, MCP tool integration, and multi-provider support.
## Features

### 🚀 Streaming Architecture
- Real-time updates during execution
- Progressive output streaming
- Live progress tracking
### 🧠 Knowledge Management
- Short-term Memory: Stores recent interactions (last 20 entries)
- Long-term Memory: Stores important facts and knowledge (last 100 entries)
- Automatic context injection into system prompts
- Manual long-term memory saving
### 🔧 MCP Protocol Integration
- Tool calling via Model Context Protocol
- Support for Pieces and Flows as tools
- Built-in tools (e.g. "mark as complete")
### 🤖 Multi-Provider Support
- OpenAI: GPT-3.5, GPT-4, and more
- Anthropic: Claude models
- Google: Gemini models
- Hugging Face: Popular open-source models (requires a Hugging Face token)
  - Uses the @huggingface/inference library
  - Supports streaming chat completion
  - Models: Llama 2, DialoGPT, Flan-T5, GPT-2, and more
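The streaming chat completion mentioned above can be sketched with a small helper that accumulates streamed deltas while reporting each one for live updates. The helper name `collectStream` and the callback `onDelta` are illustrative assumptions, not this package's API; the chunk shape mirrors the one yielded by `@huggingface/inference`'s `chatCompletionStream`.

```typescript
// Accumulate streamed chat-completion deltas while forwarding each one
// to a callback for progressive output (illustrative sketch).
type StreamChunk = { choices: { delta: { content?: string } }[] };

async function collectStream(
  chunks: AsyncIterable<StreamChunk>,
  onDelta: (text: string) => void,
): Promise<string> {
  let full = '';
  for await (const chunk of chunks) {
    const delta = chunk.choices[0]?.delta?.content ?? '';
    if (delta) {
      onDelta(delta); // progressive output streaming
      full += delta;
    }
  }
  return full;
}

// Usage with the real library would look roughly like (requires an HF token):
//   import { HfInference } from '@huggingface/inference';
//   const hf = new HfInference(hfToken);
//   const stream = hf.chatCompletionStream({ model, messages });
//   const reply = await collectStream(stream, (d) => process.stdout.write(d));
```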
### 🔄 Multi-Step Iteration
- Configurable max steps (default: 20)
- Automatic step counting
- Stop condition handling
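The iteration behavior above can be sketched as a capped loop that stops early when a step signals completion. `AgentStep` and `runStep` are hypothetical names for illustration, not this package's actual API.

```typescript
// Run steps until the model signals completion or maxSteps is reached.
type AgentStep = { output: string; done: boolean };

async function runAgent(
  runStep: (step: number) => Promise<AgentStep>,
  maxSteps = 20, // documented default
): Promise<string[]> {
  const outputs: string[] = [];
  for (let step = 1; step <= maxSteps; step++) {
    const result = await runStep(step);
    outputs.push(result.output);
    if (result.done) break; // stop condition, e.g. "mark as complete" fired
  }
  return outputs;
}
```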
## Usage

### Basic Usage
1. Select AI Model: Choose from the available providers
2. Enter Prompt: Describe your task
3. Configure Tools: Add MCP tools if needed
4. Set Max Steps: Limit iterations (default: 20)
5. Enable Memory: Toggle short-term/long-term memory
## Memory Management

### Short-term Memory
- Automatically stores recent user prompts and assistant responses
- Used for conversation context
- Limited to last 20 interactions
### Long-term Memory
- Manually save important information
- Persists across conversations
- Limited to last 100 entries
- Enable the "Save to Long-term Memory" checkbox and provide the content to store
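The trimming behavior described above (20 short-term entries, 100 long-term entries) can be sketched as a simple append-and-cap helper. This is a minimal illustration of the documented limits, not the package source.

```typescript
// Each memory list is capped by dropping its oldest entries.
type MemoryEntry = { role: 'user' | 'assistant'; content: string };

const SHORT_TERM_LIMIT = 20;  // last 20 interactions
const LONG_TERM_LIMIT = 100;  // last 100 entries

function appendTrimmed(
  memory: MemoryEntry[],
  entry: MemoryEntry,
  limit: number,
): MemoryEntry[] {
  // slice(-limit) keeps only the most recent `limit` entries
  return [...memory, entry].slice(-limit);
}
```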
### Example Workflow
1. User: "What is the capital of France?"
→ Saved to short-term memory
2. Assistant: "The capital of France is Paris."
→ Saved to short-term memory
3. User: "Remember that I prefer Python over JavaScript"
→ Save to long-term memory (manual)
4. User: "What programming language should I use?"
→ Agent uses long-term memory context
→ Responds: "Based on your preferences, you should use Python."

## Configuration
### Props
- Prompt: Task description
- AI Model: Provider and model selection
- MCP Tools: Array of available tools
- Structured Output: Define output schema
- Max Steps: Maximum iterations (default: 20)
- Use Memory: Enable/disable memory (default: true)
- Save to Long-term Memory: Save important info
- Long-term Memory Content: Content to save
- Hugging Face Token: Required for HF models
### Memory Storage
Memory is stored in the project scope using the built-in store:
- Short-term: `my_agent_short_term_memory`
- Long-term: `my_agent_long_term_memory`
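A read-modify-write against the project-scoped store could look like the sketch below. The `Store` interface here is an assumption made for illustration; only the two key names come from this README.

```typescript
// Hypothetical key-value store interface (an assumption, not the package's API).
interface Store {
  get<T>(key: string): Promise<T | null>;
  put<T>(key: string, value: T): Promise<void>;
}

// Keys documented above.
const SHORT_TERM_KEY = 'my_agent_short_term_memory';
const LONG_TERM_KEY = 'my_agent_long_term_memory';

async function appendShortTerm(store: Store, entry: string): Promise<void> {
  const existing = (await store.get<string[]>(SHORT_TERM_KEY)) ?? [];
  await store.put(SHORT_TERM_KEY, [...existing, entry].slice(-20)); // trim to 20
}
```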
## Limitations
- Short-term memory: 20 entries max
- Long-term memory: 100 entries max
- Hugging Face models require a Hugging Face API token (from your Hugging Face account settings)
- Memory is project-scoped (shared across flows in same project)
- Hugging Face models may have different capabilities than OpenAI/Anthropic/Google models
## Advanced Features

### Custom System Prompts
The system prompt automatically includes:
- Current date/time
- Long-term knowledge base
- Recent conversation context (short-term)
- Core directives for tool usage
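Assembling the four sections above into one system prompt could be sketched as follows. The function and argument names are illustrative assumptions, not the package's actual implementation.

```typescript
// Join the documented sections (timestamp, long-term knowledge, recent
// conversation, core directives) into a single system prompt.
function buildSystemPrompt(longTerm: string[], shortTerm: string[]): string {
  return [
    `Current date/time: ${new Date().toISOString()}`,
    `Long-term knowledge base:\n${longTerm.join('\n')}`,
    `Recent conversation:\n${shortTerm.join('\n')}`,
    'Core directives: use the available tools; call "mark as complete" when the task is done.',
  ].join('\n\n');
}
```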
### Tool Integration
- Pieces: Use any piece action as a tool
- Flows: Call other flows as tools
- Built-in: Mark as complete tool
## Notes
- Memory persists across flow runs within the same project
- Memory is automatically trimmed to size limits
- Hugging Face support is experimental (may require custom implementation)
- Streaming updates provide real-time feedback
