konitza
v0.1.5
A Laravel-inspired Node.js AI agent framework with scaffolding and tool management
🤖 AI Agent Framework
A Laravel-inspired Node.js framework for building AI agents with automatic tool loading, beautiful chat UI, and easy scaffolding. Build production-ready AI agents in minutes!
✨ Features
- 🚀 Quick Scaffolding - Generate a complete AI agent project with one command
- 🔧 Automatic Tool Loading - Drop tools into the tools/ folder and they're instantly available
- 💬 Beautiful Chat UI - Modern, responsive chat interface out of the box
- 🔌 Dual API Support - HTTP REST API and WebSocket support
- ⚙️ Easy Configuration - Simple .env file configuration
- 🛠️ Tool Generator - CLI command to generate new tool templates
- 📦 OpenAI Integration - Built-in support for GPT models with function calling
🎯 Quick Start
Installation
npm install -g konitza
Create Your First Agent
# Create a new agent project
konitza new my-awesome-agent
# Navigate to your project
cd my-awesome-agent
# Install dependencies
npm install
# Configure your OpenAI API key
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY
# Start your agent!
npm start
Your agent is now running at http://localhost:3000 with a beautiful chat interface! 🎉
🔧 Creating Tools
Tools are automatically loaded from the tools/ directory. Create a new tool with:
konitza tool add weather-checker
This generates a tool template at tools/weather-checker.js. Edit it to implement your logic:
module.exports = {
name: 'weather_checker',
description: 'Checks the weather for a given location',
parameters: {
location: {
type: 'string',
description: 'The city or location to check'
}
},
async execute({ location }) {
// Your implementation here
return {
success: true,
data: {
location,
temperature: 72,
condition: 'Sunny'
}
};
}
};
Restart your server, and the tool is available to your AI agent! No configuration needed.
📁 Project Structure
my-agent/
├── index.js # Server entry point
├── config.js # Configuration management
├── .env # Environment variables
├── package.json
├── lib/
│ ├── agent.js # AI agent logic
│ └── tool-loader.js # Automatic tool discovery
├── tools/ # Your tools directory
│ ├── calculator.js
│ ├── get-current-time.js
│ ├── text-analyzer.js
│ └── random-number.js
└── public/
└── index.html # Chat UI
⚙️ Configuration
Edit your .env file to customize your agent:
# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo-preview
# Server Configuration
PORT=3000
HOST=localhost
# Agent Configuration
AGENT_NAME=My AI Agent
SYSTEM_PROMPT=You are a helpful AI assistant with access to various tools.
🛠️ Built-in Example Tools
Your scaffolded project includes several example tools:
Calculator
Performs basic arithmetic operations (add, subtract, multiply, divide)
// Usage in chat: "What is 25 times 4?"
Get Current Time
Returns the current date and time in various formats and timezones
// Usage in chat: "What time is it in Tokyo?"
Text Analyzer
Analyzes text and provides statistics (word count, reading time, etc.)
// Usage in chat: "Analyze this text: [your text here]"
Random Number Generator
Generates random numbers within a specified range
// Usage in chat: "Generate 5 random numbers between 1 and 100"
🌐 API Endpoints
REST API
POST /api/chat
{
"message": "Hello, what can you do?",
"conversationId": "optional-conversation-id"
}
Response:
{
"message": "I can help you with...",
"toolCalls": [...],
"conversationId": "conversation-id"
}
GET /api/health
{
"status": "ok",
"agent": "My AI Agent",
"tools": [...]
}
WebSocket
Connect to ws://localhost:3000 and send messages:
{
"message": "Your message here",
"conversationId": "optional-id"
}
Receive streaming responses with tool execution updates in real time.
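The outgoing payload can be built with a small helper. This is an illustrative sketch, not part of konitza itself; it only assumes the message shape shown above:

```javascript
// Illustrative helper for building the chat payload shown above
// (buildChatPayload is a hypothetical name, not part of konitza).
function buildChatPayload(message, conversationId) {
  const payload = { message };
  if (conversationId) payload.conversationId = conversationId; // optional field
  return JSON.stringify(payload);
}

// Browser usage (Node.js can use the 'ws' package the same way):
// const ws = new WebSocket('ws://localhost:3000');
// ws.onopen = () => ws.send(buildChatPayload('Hello, what can you do?'));
// ws.onmessage = (event) => console.log(JSON.parse(event.data));

module.exports = { buildChatPayload };
```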
📝 Creating Custom Tools
Tool Structure
Every tool must export an object with:
- name (string): Unique identifier for the tool
- description (string): What the tool does (used by the AI)
- parameters (object): OpenAI function parameters schema
- execute (function): Async function that implements the tool logic
Tool Template
module.exports = {
name: 'tool_name',
description: 'What your tool does',
parameters: {
param1: {
type: 'string',
description: 'Description of param1'
},
param2: {
type: 'number',
description: 'Description of param2'
}
},
async execute({ param1, param2 }) {
try {
// Your tool logic here
const result = doSomething(param1, param2);
return {
success: true,
data: result
};
} catch (error) {
return {
success: false,
error: error.message
};
}
}
};
Best Practices
Clear Descriptions: Write clear descriptions for tools and parameters. The AI uses these to decide when to use your tool.
Error Handling: Always wrap your tool logic in try-catch blocks.
Return Format: Return objects with a success boolean and either data or error.
Parameter Types: Use proper parameter types (string, number, boolean, array, object).
Async Operations: Make your execute function async if you need to perform I/O operations.
🎨 Customizing the Chat UI
Edit public/index.html to customize the chat interface. The UI is built with vanilla JavaScript and CSS for easy customization.
🚀 Deployment
Your agent is a standard Node.js application. Deploy it anywhere:
Using PM2
npm install -g pm2
pm2 start index.js --name my-agent
Using Docker
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
Environment Variables
Make sure to set your environment variables on your deployment platform:
- OPENAI_API_KEY
- OPENAI_MODEL
- PORT
- SYSTEM_PROMPT
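For reference, reading these settings with sensible defaults might look like the following. This is an illustrative sketch; the scaffolded config.js may differ (it likely also loads .env via a package such as dotenv):

```javascript
// Illustrative sketch of reading the agent's settings with defaults.
// The defaults shown mirror the .env example earlier in this README.
function loadConfig(env = process.env) {
  return {
    apiKey: env.OPENAI_API_KEY, // required; the agent cannot start without it
    model: env.OPENAI_MODEL || 'gpt-4-turbo-preview',
    port: parseInt(env.PORT || '3000', 10),
    host: env.HOST || 'localhost',
    systemPrompt: env.SYSTEM_PROMPT || 'You are a helpful AI assistant.',
  };
}

module.exports = { loadConfig };
```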
🧪 Testing Your Tools
Test your tools in isolation before integrating them:
const tool = require('./tools/my-tool');
(async () => {
const result = await tool.execute({ param1: 'value' });
console.log(result);
})();
📚 Advanced Usage
Custom AI Models
You can use different OpenAI models by setting the OPENAI_MODEL environment variable:
OPENAI_MODEL=gpt-4-turbo-preview # Most capable
OPENAI_MODEL=gpt-4 # High capability
OPENAI_MODEL=gpt-3.5-turbo # Fast and cost-effective
Managing Conversations
Each conversation is identified by a conversationId, and the agent maintains conversation history automatically. To start fresh, send a request with a new conversationId.
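Conceptually, per-conversation history is a map keyed by conversationId. A simplified sketch (konitza's actual implementation in lib/agent.js may differ):

```javascript
// Simplified sketch of per-conversation history keyed by conversationId.
const conversations = new Map();

function appendMessage(conversationId, role, content) {
  // An unseen conversationId implicitly starts a fresh history
  if (!conversations.has(conversationId)) {
    conversations.set(conversationId, []);
  }
  const history = conversations.get(conversationId);
  history.push({ role, content });
  return history;
}

module.exports = { appendMessage };
```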
Tool Execution Flow
- User sends a message
- Agent processes message with OpenAI
- If tools are needed, agent executes them
- Results are sent back to OpenAI for a natural response
- Final response is sent to user
The agent handles multiple tool calls automatically and can execute tools in sequence to accomplish complex tasks.
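The flow above can be sketched as a loop, with the model call stubbed out. This is a conceptual sketch using hypothetical names (runAgentTurn, callModel), not konitza's actual API; the real agent uses OpenAI function calling:

```javascript
// Conceptual sketch of the tool-execution loop (hypothetical names,
// not konitza's actual API). callModel stands in for the OpenAI call.
async function runAgentTurn(callModel, tools, messages) {
  // Keep calling the model until it answers without requesting a tool
  while (true) {
    const reply = await callModel(messages);
    if (!reply.toolCall) return reply.content; // natural-language answer
    const tool = tools[reply.toolCall.name];
    const result = await tool.execute(reply.toolCall.args);
    // Feed the tool result back so the model can phrase a final response
    messages.push({
      role: 'tool',
      name: reply.toolCall.name,
      content: JSON.stringify(result),
    });
  }
}

module.exports = { runAgentTurn };
```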
🤝 Contributing
This framework is designed to be extended! Feel free to:
- Add new built-in tools
- Improve the UI
- Add new features
- Submit pull requests
📄 License
MIT License - feel free to use this in your projects!
🆘 Support
Having issues? Common solutions:
"OPENAI_API_KEY not set"
Make sure you've created a .env file from .env.example and added your API key.
"Tool not loading"
Ensure your tool file:
- Is in the tools/ directory
- Has a .js extension
- Exports a valid tool object
- Has no syntax errors
"Port already in use"
Change the PORT in your .env file to use a different port.
🎓 Examples
Weather Agent
konitza new weather-agent
cd weather-agent
konitza tool add weather-api
# Implement weather API integration in tools/weather-api.js
npm start
Data Analysis Agent
konitza new data-agent
cd data-agent
konitza tool add csv-reader
konitza tool add data-visualizer
# Implement your data tools
npm start
Customer Support Agent
konitza new support-agent
cd support-agent
konitza tool add ticket-creator
konitza tool add knowledge-search
# Implement your support tools
npm start
Built with ❤️ for developers who want to build AI agents quickly and efficiently.
Happy Building! 🚀
