RiLiGar Agents SDK
Stop wrestling with complex AI integrations. RiLiGar Agents SDK gives you enterprise-grade AI agents in just a few lines of code.
✨ What used to take weeks now takes minutes. Built on top of industry giants like OpenAI, Anthropic, Google, and Meta through OpenRouter's unified API, this framework packs thousands of hours of engineering into an elegantly simple package.
🎯 One SDK. All AI Providers. Infinite Possibilities.
- 🤖 Smart Agents with custom personalities and instructions
- 🔧 Custom Tools that agents can execute seamlessly
- 🛡️ Built-in Guardrails for safe, controlled AI interactions
- 🔐 HMAC Security for enterprise-grade API authentication
- 📡 Real-time Streaming for responsive user experiences
- 🔗 MCP Protocol for enterprise-grade integrations
- 📝 Intelligent Logging with contextual insights
- 🌐 HTTP Endpoints to expose agents as REST APIs
From startup MVPs to enterprise solutions - RiLiGar Agents SDK scales with you.
🚀 Installation
```bash
npm install @riligar/agents-sdk
```
⚡ Quick Example
```js
import { Agent, Endpoint, Logger, model, tool } from '@riligar/agents-sdk';

// 1. Configure your API Key
const OPENROUTER_API_KEY = process.env.OPENROUTER_API_KEY;

// 2. Create logger and model
const logger = new Logger({ level: 'info' });
const llm = model.create({
  apiKey: OPENROUTER_API_KEY,
  logger: logger,
});

// 3. Define tools (optional)
const tools = await tool.build(
  [
    {
      type: 'local',
      id: 'greeting',
      description: 'Returns a personalized greeting',
      parameters: {
        type: 'object',
        properties: {
          name: { type: 'string', description: 'Person name' },
        },
        required: ['name'],
      },
      executeFn: async ({ name }) => {
        return { message: `Hello, ${name}! 👋` };
      },
    },
  ],
  { logger }
);

// 4. Create your agent
const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a friendly assistant who can greet people.',
  model: llm,
  tools: tools,
  logger: logger,
});

// 5. Execute tasks
const result = agent.run('Greet John');
const text = await result.text;
console.log(text); // Agent response

// 6. Expose as HTTP API (optional)
const endpoint = new Endpoint(agent, { port: 3001 });
await endpoint.start(); // Agent available at http://localhost:3001

// 7. Use from any JavaScript client
const response = await fetch('http://localhost:3001/process', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ task: 'Greet Sarah' }),
});
const data = await response.json();
console.log(data.result.text); // "Hello, Sarah! 👋"
```
🎯 Key Features
| Feature | Description | Example |
| --- | --- | --- |
| 🤖 Agents | Intelligent agents with custom instructions | `new Agent({ name, instructions, model })` |
| 🔧 Tools | Local tools and MCP integration | `tool.build([{ type: 'local', executeFn }])` |
| 📝 Logging | Contextual and configurable logging system | `new Logger({ level: 'info' })` |
| 🛡️ Guardrails | Input and output validation | `guardrails: { input: [...], output: [...] }` |
| 🔐 Security | HMAC authentication for secure APIs | `security.generateHMACAuth(id, key, ...)` |
| 🔄 Handoffs | Transfer between agents | `{ type: 'agent', agent: otherAgent }` |
| 📡 Streaming | Real-time responses | `result.textStream` |
| 🌐 HTTP API | Expose agents as REST endpoints | `new Endpoint(agent, { port: 3001 })` |
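Guardrails and streaming appear above only as table snippets. The sketch below combines those two snippets into one agent; it is an illustration based on the table, not the SDK's documented option format, and the `rejectEmptyInput` validator and its signature are assumptions.
```js
// Hedged sketch based on the table snippets above; exact option shapes may differ.
import { Agent, Logger, model } from '@riligar/agents-sdk';

const logger = new Logger({ level: 'info' });
const llm = model.create({ apiKey: process.env.OPENROUTER_API_KEY, logger });

// Hypothetical input guardrail: reject empty tasks before they reach the model.
const rejectEmptyInput = async (input) => {
  if (!input || input.trim() === '') {
    throw new Error('Guardrail: empty task rejected');
  }
  return input;
};

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a concise assistant.',
  model: llm,
  logger,
  // Shape taken from the Guardrails row above; the validator signature is an assumption.
  guardrails: { input: [rejectEmptyInput], output: [] },
});

// Streaming, per the result.textStream entry above; assumes the stream is async-iterable.
const result = agent.run('Summarize our refund policy in one sentence');
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
For handoffs, the table only shows the `{ type: 'agent', agent: otherAgent }` entry; see `examples/handoffs.js` for a complete, working setup.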
🌐 Supported Models
Via OpenRouter (configure OPENROUTER_API_KEY):
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Google (Gemini)
- Meta (Llama)
- Perplexity, Cohere, etc.
```js
const llm = model.create({
  modelName: 'openai/gpt-4', // or 'anthropic/claude-3', 'google/gemini-pro'
  apiKey: process.env.OPENROUTER_API_KEY,
  temperature: 0.7,
});
```
🔐 HMAC Security
Secure your agent endpoints with enterprise-grade HMAC authentication. Perfect for production APIs that need to verify client identity and request integrity.
🛡️ Secure Server Implementation
```js
import { Agent, Endpoint, Logger, model, security } from '@riligar/agents-sdk';

// Create your agent
const agent = new Agent({
  name: 'SecureAgent',
  instructions: 'You are a secure AI assistant.',
  model: model.create({ apiKey: process.env.OPENROUTER_API_KEY }),
  logger: new Logger({ level: 'info' })
});

// Create endpoint with HMAC security
const endpoint = new Endpoint(agent, {
  port: 3001,
  // Option 1: Client-specific keys (recommended)
  hmacClients: {
    'frontend-app': process.env.FRONTEND_SECRET,
    'mobile-app': process.env.MOBILE_SECRET,
    'admin-panel': process.env.ADMIN_SECRET
  },
  // Option 2: Master secret (simpler)
  hmacSecret: process.env.HMAC_MASTER_SECRET,
  hmacTolerance: 300 // 5 minutes tolerance
});

await endpoint.start();
console.log('🔒 Secure agent running on http://localhost:3001');
```
👤 Authenticated Client Implementation
```js
import { security } from '@riligar/agents-sdk';

class SecureAgentClient {
  constructor(clientId, secretKey, baseUrl) {
    this.clientId = clientId;
    this.secretKey = secretKey;
    this.baseUrl = baseUrl;
  }

  async processTask(task) {
    const body = { task };

    // Generate HMAC authentication
    const authHeader = security.generateHMACAuth(
      this.clientId,
      this.secretKey,
      'POST',
      '/process',
      body
    );

    const response = await fetch(`${this.baseUrl}/process`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': authHeader
      },
      body: JSON.stringify(body)
    });

    if (!response.ok) {
      throw new Error(`Authentication failed: ${response.status}`);
    }

    return response.json();
  }

  async checkHealth() {
    const authHeader = security.generateHMACAuth(
      this.clientId,
      this.secretKey,
      'GET',
      '/health'
    );

    const response = await fetch(`${this.baseUrl}/health`, {
      headers: { 'Authorization': authHeader }
    });

    return response.json();
  }
}

// Usage
const client = new SecureAgentClient(
  'frontend-app',
  process.env.FRONTEND_SECRET,
  'https://your-agent-api.com'
);

const result = await client.processTask('Analyze customer data');
console.log(result.result.text);
```
🔑 Security Features
- 🛡️ HMAC-SHA256: Industry-standard message authentication
- ⏱️ Timestamp Protection: Prevents replay attacks
- 👥 Multi-Client Support: Individual keys per client
- 🔐 Timing-Safe Validation: Resistant to timing attacks
- 🚫 Request Integrity: Detects message tampering
- ⚙️ Configurable Tolerance: Flexible timestamp validation
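As a rough illustration of what these protections involve (not the SDK's actual wire format, which `security.generateHMACAuth` produces for you), an HMAC-SHA256 request signature can be computed with Node's built-in crypto module along these lines:
```js
// Conceptual sketch only: the header layout and canonical string below are assumptions,
// not the format emitted by security.generateHMACAuth.
import { createHmac } from 'node:crypto';

function sketchHmacAuth(clientId, secretKey, method, path, body) {
  const timestamp = Math.floor(Date.now() / 1000); // lets the server reject stale requests
  const payload = body ? JSON.stringify(body) : '';
  // Binding identity, time, route, and body means any tampering breaks the signature.
  const message = `${clientId}:${timestamp}:${method}:${path}:${payload}`;
  const signature = createHmac('sha256', secretKey).update(message).digest('hex');
  return `HMAC ${clientId}:${timestamp}:${signature}`;
}

// The server recomputes the HMAC with the stored secret for that client, checks the
// timestamp against its tolerance window, and compares signatures with a timing-safe check.
```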
📋 Environment Variables
```bash
# Server configuration
OPENROUTER_API_KEY=your-openrouter-api-key
HMAC_MASTER_SECRET=your-master-secret-key
FRONTEND_SECRET=frontend-client-secret-key
MOBILE_SECRET=mobile-client-secret-key
ADMIN_SECRET=admin-client-secret-key
# Client configuration
AGENT_CLIENT_ID=frontend-app
AGENT_SECRET_KEY=frontend-client-secret-key
AGENT_BASE_URL=https://your-agent-api.com
```
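The client-side variables map straight onto the SecureAgentClient class from the previous section. A minimal sketch, assuming you keep that class in a local module (the `./secure-agent-client.js` path is hypothetical):
```js
// Sketch: wires the client-side variables above into the SecureAgentClient class
// shown earlier; './secure-agent-client.js' is a hypothetical local module.
import { SecureAgentClient } from './secure-agent-client.js';

const client = new SecureAgentClient(
  process.env.AGENT_CLIENT_ID,  // e.g. 'frontend-app'
  process.env.AGENT_SECRET_KEY,
  process.env.AGENT_BASE_URL
);

console.log(await client.checkHealth());
```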
📚 Practical Examples
| Example | File | Description |
| --- | --- | --- |
| Basic | `examples/hello.js` | Simple agent with tools |
| Security | `examples/security.js` | HMAC authentication demo |
| Guardrails | `examples/guardrail.js` | Input/output validation |
| Handoffs | `examples/handoffs.js` | Transfer between agents |
| MCP | `examples/mcp/` | MCP server integration |
Run any example:
```bash
export OPENROUTER_API_KEY="your-key-here"
node examples/hello.js
```
🔗 MCP Integration
Connect automatically to MCP servers:
```js
const tools = await tool.build(
  [
    {
      type: 'remote',
      serverUrl: 'http://localhost:3000', // MCP Server
    },
  ],
  { logger }
);
```
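Once built, the remote MCP tools plug into an agent exactly like the local tools in the Quick Example. A minimal sketch (the agent name and task text are illustrative):
```js
// Sketch: the remote MCP tools built above are passed to an Agent like any other tools.
// `tools` refers to the array returned by tool.build in the snippet above; the model and
// logger are created here only to keep the sketch self-contained.
import { Agent, Logger, model } from '@riligar/agents-sdk';

const log = new Logger({ level: 'info' });
const llm = model.create({ apiKey: process.env.OPENROUTER_API_KEY, logger: log });

const mcpAgent = new Agent({
  name: 'McpAssistant',
  instructions: 'Use the connected MCP tools to answer questions.',
  model: llm,
  tools: tools,
  logger: log,
});

const result = mcpAgent.run('List the capabilities exposed by the MCP server');
console.log(await result.text);
```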
🔗 Resources
- OpenRouter - Unified API for multiple LLM providers
- OpenAI | SDK
- Anthropic | SDK
- Google AI | SDK
- Meta Llama
- Model Context Protocol
📄 License
Apache-2.0
