# agentops-core

v1.1.0

AgentOps Core - AI agent framework for JavaScript
agentops-core is a powerful TypeScript framework for building AI agents with memory, tools, and multi-step workflows. Connect to any AI provider and create sophisticated multi-agent systems where specialized agents work together under supervisor coordination.
## Features
- 🤖 Agent Runtime: Define agents with typed roles, tools, memory, and model providers
- 🔄 Workflow Engine: Build multi-step automations declaratively with full type safety
- 👥 Supervisors & Sub-Agents: Create teams of specialized agents under supervisor coordination
- 🛠️ Tool Registry & MCP: Ship Zod-typed tools with lifecycle hooks and Model Context Protocol support
- 🔌 LLM Compatibility: Support for OpenAI, Anthropic, Google, Azure, Groq, and 15+ providers
- 💾 Memory System: Durable memory adapters for conversation history and context persistence
- 🔍 RAG & Retrieval: Built-in support for retrieval-augmented generation
- 🎤 Voice Capabilities: Text-to-speech and speech-to-text integration
- 🛡️ Guardrails: Input/output validation and content policy enforcement
- 📊 Evaluation Framework: Built-in eval suites for testing agent behavior
- 🔐 Type Safety: Full TypeScript support with comprehensive type definitions
## Installation

```bash
npm install agentops-core ai zod
```

## Quick Start
### Basic Agent

Create a simple AI agent with tools and memory:
```ts
import { Agent } from "agentops-core";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const agent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant that can check weather and answer questions",
  model: openai("gpt-4o-mini"),
  tools: [
    {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: z.object({
        location: z.string().describe("City name"),
      }),
      execute: async ({ location }) => {
        // Your weather API logic here
        return { temperature: 72, condition: "sunny", location };
      },
    },
  ],
});

// Generate a response
const result = await agent.generateText("What's the weather in San Francisco?");
console.log(result.text);
```

### With Memory
Add persistent conversation memory:

```ts
import { Agent, Memory } from "agentops-core";
import { InMemoryStorageAdapter } from "agentops-core/memory";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new InMemoryStorageAdapter(),
});

const agent = new Agent({
  name: "assistant",
  instructions: "You remember our conversation history",
  model: openai("gpt-4o-mini"),
  memory,
});

// Conversations persist across calls
await agent.generateText("My name is John", {
  userId: "user-123",
  conversationId: "conv-1",
});

await agent.generateText("What's my name?", {
  userId: "user-123",
  conversationId: "conv-1",
}); // Will remember: "Your name is John"
```

## Workflow Engine
Create multi-step workflows with conditional logic:

```ts
import { createWorkflowChain } from "agentops-core";
import { z } from "zod";

const workflow = createWorkflowChain({
  id: "data-pipeline",
  name: "Data Processing Pipeline",
  input: z.object({ data: z.string() }),
  result: z.object({ processed: z.boolean(), result: z.string() }),
})
  .andThen({
    id: "validate",
    execute: async ({ data }) => {
      return { ...data, valid: data.data.length > 0 };
    },
  })
  .andWhen({
    id: "process",
    condition: async ({ data }) => data.valid,
    execute: async ({ data }) => {
      return { processed: true, result: data.data.toUpperCase() };
    },
  });

const result = await workflow.run({ data: "hello world" });
console.log(result.data); // { processed: true, result: "HELLO WORLD" }
```

## Sub-Agents
Create specialized agents that work together:

```ts
import { Agent } from "agentops-core";
import { openai } from "@ai-sdk/openai";

const researchAgent = new Agent({
  name: "researcher",
  instructions: "You research topics and gather information",
  model: openai("gpt-4o"),
});

const writerAgent = new Agent({
  name: "writer",
  instructions: "You write articles based on research",
  model: openai("gpt-4o"),
  subAgents: [
    {
      agent: researchAgent,
      name: "research",
      description: "Research a topic",
    },
  ],
});

// Writer can call researcher automatically
await writerAgent.generateText("Write an article about AI agents");
```

## Supported AI Providers
Connect to any LLM provider through the Vercel AI SDK:
- OpenAI: GPT-4, GPT-3.5, o1, o3-mini
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
- Google: Gemini 2.0, Gemini 1.5 Pro
- Azure OpenAI: Enterprise deployments
- Amazon Bedrock: Claude, Llama, Mistral
- Groq: Fast inference
- Ollama: Local models
- Mistral AI, Cohere, DeepInfra, Together AI, and more
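Because every AI SDK provider package exposes the same kind of model factory, switching providers is typically a one-line change to the `model` option. The sketch below uses local stand-ins for the real `@ai-sdk/openai` and `@ai-sdk/anthropic` factories (the `LanguageModel` shape and stand-in implementations here are illustrative assumptions, kept self-contained so the idea is clear without network access):

```typescript
// Minimal stand-in for the model object an AI SDK provider factory returns.
interface LanguageModel {
  provider: string;
  modelId: string;
}

// Stand-in provider factories: each takes a model id and returns an object
// with the same shape, which is what lets an Agent accept any provider.
const openai = (modelId: string): LanguageModel => ({ provider: "openai", modelId });
const anthropic = (modelId: string): LanguageModel => ({ provider: "anthropic", modelId });

// Swapping providers is then a one-line change to the `model` option:
const useClaude = false;
const model: LanguageModel = useClaude
  ? anthropic("claude-3-5-sonnet-latest")
  : openai("gpt-4o-mini");

console.log(`${model.provider}:${model.modelId}`); // → openai:gpt-4o-mini
```

In the real packages you would simply pass `openai("gpt-4o-mini")` or `anthropic(...)` directly to `new Agent({ model })`; nothing else in the agent definition changes.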
## Advanced Features

### Guardrails

Add input/output validation:
```ts
import { Agent } from "agentops-core";
import { openai } from "@ai-sdk/openai";

// containsPII and isInappropriate stand in for your own validation helpers
const agent = new Agent({
  name: "safe-agent",
  model: openai("gpt-4o"),
  inputGuardrails: [
    {
      id: "no-pii",
      check: async (input) => {
        if (containsPII(input)) {
          throw new Error("Input contains PII");
        }
      },
    },
  ],
  outputGuardrails: [
    {
      id: "content-filter",
      check: async (output) => {
        if (isInappropriate(output)) {
          throw new Error("Inappropriate content detected");
        }
      },
    },
  ],
});
```

### Tool Routing
Dynamically select tools based on user intent:
```ts
import { createToolRouter, createEmbeddingToolRouterStrategy } from "agentops-core";

const router = createToolRouter({
  name: "tool_router",
  description: "Selects the best tools for the task",
  embedding: "text-embedding-3-small",
  topK: 3,
});

const agent = new Agent({
  name: "smart-agent",
  model: openai("gpt-4o"),
  tools: [router],
  toolRouting: {
    pool: ["search_web", "analyze_data", "send_email", "create_document"],
  },
});
```

### Evaluation Framework
Test and score agent behavior:
```ts
const agent = new Agent({
  name: "customer-support",
  model: openai("gpt-4o"),
  eval: {
    scorers: [
      {
        id: "helpfulness",
        type: "llm",
        criteria: "Is the response helpful and accurate?",
        rubric: "Score 1-5",
      },
    ],
    sampling: {
      rate: 0.1, // Evaluate 10% of requests
      mode: "random",
    },
  },
});
```

## API Reference
### Agent

```ts
class Agent {
  constructor(options: AgentOptions);

  // Generate text response
  async generateText(
    input: string | Message[],
    options?: GenerateTextOptions
  ): Promise<GenerateTextResult>;

  // Stream text response
  async streamText(
    input: string | Message[],
    options?: StreamTextOptions
  ): Promise<StreamTextResult>;

  // Generate structured object
  async generateObject<T>(
    input: string | Message[],
    options?: GenerateObjectOptions
  ): Promise<GenerateObjectResult<T>>;

  // Stream structured object
  async streamObject<T>(
    input: string | Message[],
    options?: StreamObjectOptions
  ): Promise<StreamObjectResult<T>>;
}
```

### Memory
```ts
class Memory {
  constructor(options: MemoryConfig);

  // Message operations
  async getMessages(userId: string, conversationId: string): Promise<Message[]>;
  async addMessage(message: Message, userId: string, conversationId: string): Promise<void>;

  // Conversation operations
  async createConversation(input: CreateConversationInput): Promise<Conversation>;
  async listConversations(userId: string): Promise<Conversation[]>;

  // Vector search (if configured)
  async searchVectors(query: string, options?: SearchOptions): Promise<SearchResult[]>;
}
```

### Workflow
```ts
function createWorkflowChain<INPUT, RESULT>(config: WorkflowConfig): WorkflowChain;

interface WorkflowChain {
  andThen(step: WorkflowStep): WorkflowChain;
  andWhen(condition: Condition, step: WorkflowStep): WorkflowChain;
  andAgent(agent: Agent, options?: AgentStepOptions): WorkflowChain;
  andAll(steps: WorkflowStep[]): WorkflowChain;
  andBranch(branches: Branch[]): WorkflowChain;
  run(input: INPUT): Promise<WorkflowResult<RESULT>>;
}
```

## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
Licensed under the MIT License. Copyright © 2026-present prashant0707.
## Support
- GitHub Issues: Report bugs or request features
- GitHub Discussions: Ask questions and share ideas
