@framers/agentos
v0.1.3
Modular AgentOS orchestration library
AgentOS
Modular orchestration runtime for adaptive AI agents
Website · Documentation · npm · GitHub
Overview
AgentOS is a TypeScript-first orchestration runtime for building adaptive, emergent AI agents. Unlike traditional agent frameworks that treat agents as stateless functions, AgentOS introduces Generalized Mind Instances (GMIs) — context-aware entities that learn, evolve, and maintain coherent personalities across interactions.
npm install @framers/agentos
Features
◆ Adaptive Intelligence
- GMI Architecture — Persistent agent identities with working memory
- Dynamic Personas — Contextual personality adaptation
- Multi-model Support — OpenAI, Anthropic, local models
◆ Streaming-First Runtime
- Token-level streaming — Real-time response delivery
- Async generators — Native TypeScript patterns
- WebSocket & SSE — Multiple transport protocols
◆ Tool Orchestration
- Permission management — Fine-grained access control
- Dynamic registration — Runtime tool discovery
- Guardrails — Safety constraints and validation
◆ RAG & Memory
- Vector storage — Semantic memory retrieval
- SQL adapters — SQLite, PostgreSQL support
- Context optimization — Automatic window management
◆ Multi-Agent Coordination
- Agency system — Agent hierarchies and teams
- Message bus — Inter-agent communication
- Handoffs — Context transfer between agents
◆ Human-in-the-Loop
- Approval workflows — High-risk action gates
- Clarification requests — Ambiguity resolution
- Escalation handling — Human takeover paths
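The "Context optimization" bullet above refers to automatic context-window management. As a library-independent flavor of what that involves, here is a minimal sketch of trimming a chat history to a token budget. The types, helper names, and the ~4-characters-per-token heuristic are illustrative assumptions, not AgentOS APIs:

```typescript
// Illustrative only: a naive context-window trimmer. AgentOS handles this
// internally; nothing below is the library's actual API.
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string }

// Rough token estimate: ~4 characters per token (a common heuristic).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the system prompt, then retain the most recent messages that fit.
function trimToBudget(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const [system, ...rest] = messages;
  let budget = maxTokens - estimateTokens(system.content);
  const kept: ChatMessage[] = [];
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break; // oldest messages are dropped first
    budget -= cost;
    kept.unshift(rest[i]);
  }
  return [system, ...kept];
}
```

Real implementations also account for tool schemas, retrieved RAG context, and provider-specific tokenizers; this sketch only shows the recency-biased trimming idea.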
Installation
# npm
npm install @framers/agentos
# pnpm
pnpm add @framers/agentos
# yarn
yarn add @framers/agentos
Requirements: Node.js 18+ · TypeScript 5.0+
Quick Start
import { AgentOS } from '@framers/agentos';
// Initialize
const agent = new AgentOS();
await agent.initialize({
llmProvider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o'
}
});
// Process requests with streaming
for await (const chunk of agent.processRequest({
message: 'Help me analyze this data',
context: { userId: 'user-123' }
})) {
if (chunk.type === 'content') {
process.stdout.write(chunk.content);
}
}
With Tools
import { AgentOS } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o'
},
tools: [{
name: 'get_weather',
description: 'Get current weather for a city',
parameters: {
type: 'object',
properties: {
city: { type: 'string' }
},
required: ['city']
},
execute: async ({ city }) => {
const res = await fetch(`https://api.weather.com/${city}`);
return res.json();
}
}]
});
// Tools are called automatically when the model decides to use them
for await (const chunk of agent.processRequest({ message: 'Weather in Tokyo?' })) {
if (chunk.type === 'tool_call') console.log('Calling:', chunk.tool);
if (chunk.type === 'content') process.stdout.write(chunk.content);
}
Multiple Providers
// OpenRouter for multi-model access
await agent.initialize({
llmProvider: {
provider: 'openrouter',
apiKey: process.env.OPENROUTER_API_KEY,
model: 'anthropic/claude-3.5-sonnet'
}
});
// Local Ollama
await agent.initialize({
llmProvider: {
provider: 'ollama',
baseUrl: 'http://localhost:11434',
model: 'llama3'
}
});
With RAG Memory
import { AgentOS } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o'
},
memory: {
vectorStore: 'memory', // or 'sqlite', 'postgres'
embeddingModel: 'text-embedding-3-small'
}
});
// Ingest documents
await agent.memory.ingest([
{ content: 'AgentOS supports streaming responses...', metadata: { source: 'docs' } },
{ content: 'GMIs maintain context across sessions...', metadata: { source: 'docs' } }
]);
// Queries automatically retrieve relevant context
for await (const chunk of agent.processRequest({
message: 'How does streaming work?'
})) {
if (chunk.type === 'content') process.stdout.write(chunk.content);
}
Architecture
┌─────────────────────────────────────────────────────────────────┐
│ AgentOS Runtime │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Request │ │ Prompt │ │ Streaming │ │
│ │ Router │→ │ Engine │→ │ Manager │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ ↓ ↓ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ GMI Manager │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Working │ │ Context │ │ Persona │ │Learning │ │ │
│ │ │ Memory │ │ Manager │ │ Overlay │ │ Module │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ ↓ ↓ ↓ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Tool │ │ RAG │ │ Planning │ │
│ │Orchestrator │ │ Memory │ │ Engine │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ ↓ ↓ ↓ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ LLM Provider Manager │ │
│ │ OpenAI │ Anthropic │ Azure │ Local Models │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
Documentation
Core Concepts
| Guide | Description |
|-------|-------------|
| Architecture | System design and component overview |
| Guardrails | Safety controls and mid-stream intervention |
| Extensions | Extension system and standards |
| Ecosystem | Related repos and packages |
Agent Features
| Guide | Description |
|-------|-------------|
| Planning Engine | Multi-step task planning and execution |
| Human-in-the-Loop | Approval workflows and oversight |
| Agent Communication | Inter-agent messaging patterns |
| Self-Building Agents | Recursive agent construction |
| Structured Output | JSON schema validation |
| Evaluation Framework | Testing and quality assurance |
Storage & Memory
| Guide | Description |
|-------|-------------|
| RAG Configuration | Memory and retrieval setup |
| SQL Storage | SQLite/PostgreSQL setup |
| Client-Side Storage | Browser storage options |
Operations
| Guide | Description |
|-------|-------------|
| Cost Optimization | Token usage and cost management |
| Platform Support | Supported platforms and environments |
| Releasing | How to publish new versions |
| API Reference | TypeDoc-generated API docs |
Examples
Structured Data Extraction
import { AgentOS, StructuredOutputManager } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' }
});
// Extract typed data from unstructured text
const structured = new StructuredOutputManager({ llmProviderManager: agent.llmProviderManager });
const contact = await structured.generate({
prompt: 'Extract: "Meeting with Sarah Chen ([email protected]) on Jan 15 re: Series A"',
schema: {
type: 'object',
properties: {
name: { type: 'string' },
email: { type: 'string', format: 'email' },
date: { type: 'string' },
topic: { type: 'string' }
},
required: ['name', 'email']
},
schemaName: 'ContactInfo'
});
// → { name: 'Sarah Chen', email: '[email protected]', date: 'Jan 15', topic: 'Series A' }
Human-in-the-Loop Approvals
import { HumanInteractionManager } from '@framers/agentos';
const hitl = new HumanInteractionManager({ defaultTimeoutMs: 300000 });
// Gate high-risk operations with human approval
const decision = await hitl.requestApproval({
action: {
type: 'database_mutation',
description: 'Archive 50K inactive accounts older than 2 years',
severity: 'high',
metadata: { affectedRows: 50000, table: 'users' }
},
alternatives: [
{ action: 'soft_delete', description: 'Mark as inactive instead of archiving' },
{ action: 'export_first', description: 'Export to CSV before archiving' }
]
});
if (decision.approved) {
await executeArchive();
} else if (decision.selectedAlternative) {
await executeAlternative(decision.selectedAlternative);
}
Autonomous Task Planning
import { AgentOS, PlanningEngine } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' }
});
const planner = new PlanningEngine({ llmProvider: agent.llmProviderManager, strategy: 'react' });
// Decompose complex goals into executable steps with ReAct reasoning
const plan = await planner.generatePlan({
goal: 'Migrate authentication from sessions to JWT',
constraints: ['Zero downtime', 'Backwards compatible for 30 days', 'Audit logging required'],
context: { currentStack: 'Express + Redis sessions', userCount: '50K' }
});
for await (const step of planner.executePlan(plan.id)) {
console.log(`[${step.status}] ${step.action}`);
if (step.requiresHumanApproval) {
const approved = await promptUser(step.description);
if (!approved) break;
}
}
Multi-Agent Collaboration
import { AgentOS, AgencyRegistry, AgentCommunicationBus } from '@framers/agentos';
// Create specialized agents
const researcher = new AgentOS();
await researcher.initialize({ llmProvider: llmConfig, persona: 'Research analyst' });
const writer = new AgentOS();
await writer.initialize({ llmProvider: llmConfig, persona: 'Technical writer' });
// Register in agency with shared communication
const agency = new AgencyRegistry();
const bus = new AgentCommunicationBus();
agency.register('researcher', researcher, { bus });
agency.register('writer', writer, { bus });
// Agents coordinate via message passing
bus.on('research:complete', async ({ findings }) => {
await writer.processRequest({
message: `Write documentation based on: ${JSON.stringify(findings)}`
});
});
await researcher.processRequest({ message: 'Analyze the authentication module' });
Guardrails: Mid-Stream Decision Override
import { AgentOS } from '@framers/agentos';
import { CostCeilingGuardrail } from './guardrails/CostCeilingGuardrail';
const costGuard = new CostCeilingGuardrail({
maxCostUsd: 0.05, // 5 cents per request
inputTokenPricePer1k: 0.0001,
outputTokenPricePer1k: 0.0002,
budgetExceededText: 'Response exceeded cost ceiling. Please refine your request.'
});
const agent = new AgentOS();
await agent.initialize({
llmProvider: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' },
guardrailService: costGuard
});
// Agent generates expensive response → guardrail intercepts → substitutes budget message
// Agents can "change their mind" before delivery based on cost, content policy, or quality checks
See Guardrails Usage Guide for complete documentation.
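Note that CostCeilingGuardrail in the example above is imported from a local path, i.e. it is user code. As a rough sketch of how such a guardrail might be written, here is a standalone cost tracker; the interface, method names, and fields below are assumptions for illustration, not the AgentOS guardrail contract:

```typescript
// Hypothetical sketch of a cost-ceiling guardrail. The verdict shape and the
// checkChunk hook are invented for this example; see the Guardrails guide
// for the real AgentOS interface.
interface GuardrailVerdict {
  allow: boolean;
  replacementText?: string; // substituted for the response when blocked
}

class SimpleCostCeiling {
  private outputTokens = 0;
  constructor(
    private maxCostUsd: number,
    private outputTokenPricePer1k: number,
    private budgetExceededText: string
  ) {}

  // Called once per streamed chunk with its token count; accumulates cost
  // and blocks (with a replacement message) once the ceiling is crossed.
  checkChunk(tokenCount: number): GuardrailVerdict {
    this.outputTokens += tokenCount;
    const cost = (this.outputTokens / 1000) * this.outputTokenPricePer1k;
    if (cost > this.maxCostUsd) {
      return { allow: false, replacementText: this.budgetExceededText };
    }
    return { allow: true };
  }
}

const guard = new SimpleCostCeiling(0.05, 0.0002, 'Response exceeded cost ceiling.');
console.log(guard.checkChunk(100).allow); // → true (well under budget)
```

The key idea is that the check runs mid-stream, so the runtime can stop or replace a response before it is delivered rather than after.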
Non-Streaming Response
import { AgentOS } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' }
});
// Collect full response without streaming
const chunks = [];
for await (const chunk of agent.processRequest({ message: 'Explain OAuth 2.0 briefly' })) {
if (chunk.type === 'content') {
chunks.push(chunk.content);
}
}
const fullResponse = chunks.join('');
Mood-Adaptive Responses
import { AgentOS, GMIMood } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' },
persona: {
name: 'Support Agent',
moodAdaptation: {
enabled: true,
defaultMood: GMIMood.EMPATHETIC,
allowedMoods: [GMIMood.EMPATHETIC, GMIMood.FOCUSED, GMIMood.ANALYTICAL],
sensitivityFactor: 0.7,
// Mood-specific prompt modifiers
moodPrompts: {
[GMIMood.EMPATHETIC]: 'Prioritize understanding and emotional support.',
[GMIMood.FRUSTRATED]: 'Acknowledge difficulty, offer step-by-step guidance.',
[GMIMood.ANALYTICAL]: 'Provide detailed technical explanations with examples.'
}
}
}
});
// Agent automatically adapts tone based on conversation context
for await (const chunk of agent.processRequest({
message: 'This is so frustrating, nothing works!'
})) {
// Response adapts with empathetic tone, mood shifts to EMPATHETIC
}
Contextual Prompt Adaptation
import { AgentOS } from '@framers/agentos';
const agent = new AgentOS();
await agent.initialize({
llmProvider: llmConfig,
persona: {
name: 'Adaptive Tutor',
// Dynamic prompt elements injected based on runtime context
contextualPromptElements: [
{
id: 'beginner-guidance',
type: 'SYSTEM_INSTRUCTION_ADDON',
content: 'Explain concepts simply, avoid jargon, use analogies.',
criteria: { userSkillLevel: ['novice', 'beginner'] },
priority: 10
},
{
id: 'expert-mode',
type: 'SYSTEM_INSTRUCTION_ADDON',
content: 'Assume deep technical knowledge, be concise, skip basics.',
criteria: { userSkillLevel: ['expert', 'advanced'] },
priority: 10
},
{
id: 'debugging-context',
type: 'FEW_SHOT_EXAMPLE',
content: { role: 'assistant', content: 'Let\'s trace through step by step...' },
criteria: { taskHint: ['debugging', 'troubleshooting'] }
}
],
// Meta-prompts for self-reflection and planning
metaPrompts: [
{
id: 'mid-conversation-check',
trigger: 'every_n_turns',
triggerConfig: { n: 5 },
prompt: 'Assess: Is the user making progress? Should I adjust my approach?'
}
]
}
});
// Prompts automatically adapt based on user context and task
await agent.updateUserContext({ skillLevel: 'expert' });
for await (const chunk of agent.processRequest({ message: 'Explain monads' })) {
// Uses expert-mode prompt element, skips beginner explanations
}
Roadmap
| Version | Status | Features |
|---------|--------|----------|
| 0.1 | ✓ | Core runtime, GMI, streaming, tools, RAG |
| 0.2 | → | Knowledge graphs, marketplace, visual planning |
| 0.3 | ○ | Distributed agents, edge deployment |
| 1.0 | ○ | Production hardening, enterprise features |
See CHANGELOG.md for release history.
Contributing
We welcome contributions. See our Contributing Guide for details.
# Clone and setup
git clone https://github.com/framersai/agentos.git
cd agentos
pnpm install
# Development
pnpm run build # Build the package
pnpm run test # Run tests
pnpm run docs # Generate documentation
Commit Convention
We use Conventional Commits:
feat: New features → minor version bump
fix: Bug fixes → patch version bump
docs: Documentation only
BREAKING CHANGE: → major version bump
License
Built by Frame.dev · @framersai
