# @rodger/core (v2.0.0)
Build production-ready AI agents in minutes, not days.
## Features

- **Simple API** - Create agents in under 20 lines of code
- **Multi-Provider** - OpenAI, Anthropic, and more
- **Tool System** - Type-safe tools with automatic validation
- **Knowledge & RAG** - Integrate Zep, Ragie, LlamaParse, Firecrawl
- **Guardrails** - Input/output validation for safe agent behavior
- **Lifecycle Hooks** - Observe and control agent execution
- **Streaming Support** - Real-time response streaming
- **TypeScript-First** - Fully typed with excellent IDE support
## Installation

```bash
npm install @rodger/core
# or
pnpm add @rodger/core
```

## Quick Start

```ts
import { createAgent } from '@rodger/core';

const agent = createAgent({
  name: 'Support Agent',
  llm: { provider: 'openai', model: 'gpt-4o' }
});

const response = await agent.chat('Hello!');
console.log(response.text);
```

## Core Concepts
### Agent Configuration

Create agents with flexible configuration:

```ts
import { createAgent } from '@rodger/core';
import { z } from 'zod';

const agent = createAgent({
  name: 'My Agent',

  // LLM configuration
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    temperature: 0.7
  },

  // System instructions
  systemPrompt: 'You are a helpful assistant.',

  // Custom tools (optional)
  tools: {
    calculator: {
      name: 'calculator',
      description: 'Performs arithmetic',
      parameters: z.object({
        a: z.number(),
        b: z.number()
      }),
      execute: async ({ a, b }) => a + b
    }
  },

  // Guardrails (optional)
  guardrails: {
    input: [(input) => input.length < 1000 || 'Input too long'],
    output: [(output) => !output.includes('unsafe') || 'Unsafe content']
  }
});
```

### Streaming Responses
Stream responses in real time:

```ts
// Stream text chunks
for await (const chunk of agent.stream('Tell me a story')) {
  process.stdout.write(chunk);
}

// Or with full control over events
const stream = agent.streamWithEvents('Tell me a story');
for await (const event of stream) {
  if (event.type === 'text-delta') {
    console.log(event.textDelta);
  } else if (event.type === 'tool-call') {
    console.log('Tool called:', event.toolName);
  }
}
```

### Tool Integration
Define custom tools with type safety:

```ts
import { createAgent } from '@rodger/core';
import { z } from 'zod';

const weatherTool = {
  name: 'getWeather',
  description: 'Get weather for a location',
  parameters: z.object({
    location: z.string(),
    unit: z.enum(['celsius', 'fahrenheit']).optional()
  }),
  execute: async ({ location, unit = 'celsius' }) => {
    // Fetch weather data
    const res = await fetch(`/api/weather?location=${encodeURIComponent(location)}`);
    return res.json();
  }
};

const agent = createAgent({
  name: 'Weather Agent',
  llm: { provider: 'openai', model: 'gpt-4o' },
  tools: { getWeather: weatherTool }
});

// The agent calls the tool automatically when needed
const response = await agent.chat('What is the weather in Paris?');
```

## Constants and Defaults
All default values and limits can be overridden via configuration:

```ts
import { createAgent, KNOWLEDGE_DEFAULTS } from '@rodger/core';

// Use exported constants
console.log(KNOWLEDGE_DEFAULTS.TOP_K); // 3

// Override defaults
const agent = createAgent({
  defaults: {
    knowledge: { topK: 5 }
  }
});
```

See CONSTANTS.md for full documentation.

## Lifecycle Hooks
Monitor and control agent execution:

```ts
import { createAgent, AgentLifecycleHooks } from '@rodger/core';

const hooks: AgentLifecycleHooks = {
  onBeforeRun: async (input, context) => {
    console.log('Starting run:', input);
  },
  onAfterRun: async (output, context) => {
    console.log('Completed run:', output);
  },
  onBeforeToolCall: async (toolName, args, context) => {
    console.log('Calling tool:', toolName);
    return true; // Return false to block execution
  },
  onAfterToolCall: async (toolName, result, context) => {
    console.log('Tool completed:', toolName);
  },
  onToolApproval: async (toolName, args, context) => {
    // Custom approval logic (showConfirmationDialog is your own function)
    return await showConfirmationDialog(toolName, args);
  },
  onStreamEvent: async (event, context) => {
    // Handle streaming events
    if (event.type === 'chunk') {
      console.log(event.delta);
    }
  },
  onError: async (error, context) => {
    console.error('Agent error:', error);
  }
};

const agent = createAgent({
  name: 'Monitored Agent',
  llm: { provider: 'openai', model: 'gpt-4o' },
  hooks
});
```

See docs/LIFECYCLE-HOOKS.md for complete documentation.

## Knowledge Integration
Enhance agents with memory and RAG:

```ts
import { createAgent } from '@rodger/core';
import { ZepKnowledge } from '@rodger/core/knowledge';

// Conversation memory with Zep
const agent = createAgent({
  name: 'Memory Agent',
  llm: { provider: 'openai', model: 'gpt-4o' },
  knowledge: new ZepKnowledge({
    apiKey: process.env.ZEP_API_KEY,
    collectionName: 'conversations'
  })
});

// The agent remembers previous messages in the same session
await agent.chat('My name is Alice', { sessionId: 'user-123' });
await agent.chat('What is my name?', { sessionId: 'user-123' });
// Response: "Your name is Alice"
```

## Guardrails
Add validation for safe agent behavior:

```ts
const agent = createAgent({
  name: 'Safe Agent',
  llm: { provider: 'openai', model: 'gpt-4o' },
  guardrails: {
    // Input validation: each check returns true or an error message
    input: [
      (input) => input.length < 5000 || 'Input too long',
      (input) => !input.includes('hack') || 'Unsafe content detected'
    ],
    // Output validation (containsPII is your own predicate)
    output: [
      (output) => output.length < 10000 || 'Response too long',
      (output) => !containsPII(output) || 'PII detected in output'
    ]
  }
});
```

## Supported Providers
### LLM Providers

- **OpenAI** - GPT-4, GPT-4o, GPT-3.5
- **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus/Haiku

### Knowledge Providers

- **Zep** - Conversation memory and context
- **Ragie** - Document RAG and semantic search
- **Firecrawl** - Web scraping and crawling
- **LlamaParse** - Document parsing and extraction
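The `containsPII` predicate used in the Guardrails example is not part of the SDK; you supply your own. A minimal, illustrative sketch (regex-based, catching only obvious email and US-SSN patterns, not production-grade) might look like:

```typescript
// Illustrative only: a naive PII detector usable as an output guardrail.
// Real deployments should use a dedicated PII-detection library or service.
const PII_PATTERNS: RegExp[] = [
  /[\w.+-]+@[\w-]+\.[\w.]+/, // email addresses
  /\b\d{3}-\d{2}-\d{4}\b/    // US Social Security numbers
];

function containsPII(text: string): boolean {
  return PII_PATTERNS.some((re) => re.test(text));
}

console.log(containsPII('Contact me at alice@example.com')); // true
console.log(containsPII('The weather in Paris is sunny'));   // false
```

Because guardrail checks use the `predicate || 'error message'` convention shown above, a predicate like this plugs straight into the `output` array.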
## API Reference

### createAgent(config)

Creates a new agent instance.

**Parameters:**

- `name` (string) - Agent identifier
- `llm` (LLMConfig) - LLM provider configuration
- `systemPrompt` (string, optional) - System instructions
- `tools` (Record<string, Tool>, optional) - Custom tools
- `knowledge` (Knowledge, optional) - Knowledge provider
- `guardrails` (Guardrails, optional) - Input/output validation
- `hooks` (AgentLifecycleHooks, optional) - Lifecycle hooks

**Returns:** `Agent` instance
### agent.chat(message, options?)

Send a message and get a response.

**Parameters:**

- `message` (string) - User message
- `options.sessionId` (string, optional) - Session identifier for memory
- `options.userId` (string, optional) - User identifier
- `options.metadata` (Record<string, unknown>, optional) - Custom metadata

**Returns:** `Promise<AgentResponse>`
### agent.stream(message, options?)

Stream response chunks.

**Parameters:** Same as `chat()`

**Returns:** `AsyncIterable<string>` - Text chunks
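Because `agent.stream()` returns a plain `AsyncIterable<string>`, it can be bridged to other streaming APIs without SDK support. A sketch of one common need, adapting it to a web `ReadableStream` for an HTTP streaming response (the `toReadableStream` helper is not part of @rodger/core; any async iterable of strings works as the source):

```typescript
// Bridge any AsyncIterable<string> (e.g. agent.stream(...)) to a web
// ReadableStream of bytes, suitable for a fetch-style streaming response.
function toReadableStream(source: AsyncIterable<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  const iterator = source[Symbol.asyncIterator]();
  return new ReadableStream({
    async pull(controller) {
      const result = await iterator.next();
      if (result.done) {
        controller.close();            // no more chunks: end the stream
      } else {
        controller.enqueue(encoder.encode(result.value)); // forward one chunk
      }
    },
    cancel() {
      iterator.return?.();             // stop the source if the consumer aborts
    }
  });
}
```

For example, a handler could return `new Response(toReadableStream(agent.stream(msg)))` in any runtime with web-standard streams (Node 18+, Deno, edge runtimes).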
### agent.streamWithEvents(message, options?)

Stream with full event control.

**Returns:** `AsyncIterable<StreamEvent>` - Stream events
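The authoritative `StreamEvent` type is exported from `@rodger/core`; the sketch below reconstructs a minimal union from only the two event names this README demonstrates (`text-delta`, `tool-call`), purely to illustrate exhaustive handling via TypeScript's discriminated-union narrowing:

```typescript
// Minimal reconstruction for illustration; the real StreamEvent exported from
// '@rodger/core' may have more variants and fields.
type StreamEventSketch =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'tool-call'; toolName: string };

function describe(event: StreamEventSketch): string {
  switch (event.type) {
    case 'text-delta':
      return `text: ${event.textDelta}`; // narrowed to the text variant
    case 'tool-call':
      return `tool: ${event.toolName}`;  // narrowed to the tool variant
  }
}

console.log(describe({ type: 'text-delta', textDelta: 'Hi' }));      // "text: Hi"
console.log(describe({ type: 'tool-call', toolName: 'getWeather' })); // "tool: getWeather"
```

Switching on `event.type` this way gives compile-time errors if a new event variant goes unhandled.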
## TypeScript Support

Full TypeScript support with type inference:

```ts
import type {
  Agent,
  AgentConfig,
  AgentResponse,
  Tool,
  AgentLifecycleHooks,
  HookContext,
  StreamEvent
} from '@rodger/core';
```

## Related Packages
- `@rodger/ui` - React components for agent UIs
- `@rodger/tools` - Pre-built tool library
- `@rodger/widgets` - UI widgets for tools
- `@rodger/cli` - CLI for testing agents
## Examples

See the examples directory for complete examples:

- **Basic Chat** - Simple chat agent
- **Tool Agent** - Agent with custom tools
- **Knowledge Agent** - Agent with RAG
- **Loan Assistant** - Production example

## Documentation

Full documentation: docs.rodger.ai

## License

MIT
