@peebles-group/agentlib-js
v2.2.0
A minimal JavaScript library implementing concurrent async agents, illustrating multi-agent systems and other agentic design patterns (including recursive ones) purely through function-calling loops.
AgentLib
A lightweight Node.js library for building AI agents with LLM providers and MCP (Model Context Protocol) server integration.
Installation
npm install @peebles-group/agentlib-js

Testing
Run npm test to execute the test script at tests/test.js.
Quick Start
Set up API keys
# Create a .env file
OPENAI_API_KEY=your_openai_key
GEMINI_API_KEY=your_gemini_key

Create a new project
mkdir my-agent-project
cd my-agent-project
npm init -y
npm install @peebles-group/agentlib-js dotenv
Features
- Multi-Provider LLM Support: OpenAI, Gemini
- MCP Integration: Browser automation, filesystem, web search, memory
- Tool Calling: Native function execution with type safety
- Structured Output: Zod schema validation
- Agent Orchestration: Multi-step reasoning with tool use
Basic Usage
import { Agent, LLMService } from '@peebles-group/agentlib-js';
import dotenv from 'dotenv';
dotenv.config();
// Initialize LLM service
const llm = new LLMService('openai', process.env.OPENAI_API_KEY);
// Simple agent
const agent = new Agent(llm, {
model: 'gpt-4o-mini'
});
agent.addInput({ role: 'user', content: 'Hello!' });
const response = await agent.run();
console.log(response.output_text);
// Agent with MCP servers (auto-installs packages)
const mcpAgent = new Agent(llm, {
model: 'gpt-4o-mini',
enableMCP: true
});
await mcpAgent.addMCPServer('browser', {
type: 'stdio',
command: 'npx',
args: ['@playwright/mcp@latest']
});

Prompt Management
Manage prompts efficiently using the PromptLoader. It supports yml, db, md, json, and txt files.
import { PromptLoader } from '@peebles-group/agentlib-js';
// Load prompts from a file
const loader = await PromptLoader.create('./prompts.yml');
/*
prompts.yml
system_instruction: |
Write an essay on {{topic}}.
Make sure to make it {{depth}}.
*/
// Get and format a prompt
const prompt = loader.getPrompt('system_instruction').format({
topic: 'AI Agents',
depth: 'detailed'
});
agent.addInput({ role: 'user', content: prompt });
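The {{placeholder}} substitution that format() performs can be pictured as a simple template replace. The sketch below is only an illustration of the behavior, not the library's actual implementation; the helper name formatPrompt is hypothetical.

```javascript
// Illustrative sketch of {{placeholder}} substitution, similar in spirit
// to PromptLoader's format() -- NOT AgentLib's actual implementation.
function formatPrompt(template, values) {
  // Replace each {{key}} with its value; leave unknown placeholders intact
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? String(values[key]) : match
  );
}

const template = 'Write an essay on {{topic}}.\nMake sure to make it {{depth}}.';
const prompt = formatPrompt(template, { topic: 'AI Agents', depth: 'detailed' });
console.log(prompt);
// Write an essay on AI Agents.
// Make sure to make it detailed.
```

Keeping unmatched placeholders intact (rather than dropping them) makes missing template values easy to spot in the rendered prompt.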
Structured Outputs
AgentLib supports type-safe structured outputs using Zod schemas for reliable JSON responses.
import { Agent } from '@peebles-group/agentlib-js';
import { z } from 'zod';
import dotenv from 'dotenv';
dotenv.config();
// Define schema with Zod
const ResponseSchema = z.object({
answer: z.string(),
confidence: z.number(),
sources: z.array(z.string())
});
const agent = new Agent('openai', process.env.OPENAI_API_KEY, {
model: 'gpt-4o-mini',
outputSchema: ResponseSchema // Pass Zod object directly
});
agent.addInput({ role: 'user', content: 'What is the capital of France?' });
const result = await agent.run();
// Access structured data from the result
const parsedData = result.output_parsed; // Structured data when schema is used
const text = result.output_text; // Raw text response

Key Points:
- Input/Output Schemas: Pass Zod objects directly to inputSchema/outputSchema
- Raw Text: Access via result.output_text (when no schema)
- Type Safety: Automatic validation and TypeScript support
- Model Support: Works with gpt-4o-mini and gpt-4o models
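Whether you read output_parsed or fall back to parsing output_text yourself depends on whether a schema was supplied. A minimal sketch of that branching (the helper name extractStructured is illustrative, not an AgentLib API):

```javascript
// Illustrative helper (not part of AgentLib): prefer the schema-validated
// output_parsed when present, otherwise try to JSON-parse the raw text.
function extractStructured(result) {
  if (result.output_parsed !== undefined) {
    return result.output_parsed; // populated only when outputSchema was set
  }
  try {
    return JSON.parse(result.output_text);
  } catch {
    return null; // raw text was not valid JSON
  }
}

// With a schema, output_parsed is already validated:
extractStructured({ output_parsed: { answer: 'Paris', confidence: 0.9, sources: [] } });
// Without one, the raw text is parsed if possible:
extractStructured({ output_text: '{"answer":"Paris"}' });
```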
Examples
The repository includes several development examples that demonstrate different features:
- examples/simpleAgent/ - Basic agent usage with tools
- examples/mcp-example/ - Full MCP integration demo
- examples/translatorExample/ - Multi-agent orchestration
- examples/sqlAgent/ - Database operations
- examples/schema-example/ - Structured input/output with Zod schemas
- examples/rag-example/ - Agentic RAG example with MongoDB hybrid search
Note: These examples use relative imports for development. In your projects, use the npm package:
// In your project
import { Agent } from '@peebles-group/agentlib-js';
// Instead of (development only)
import { Agent } from './src/Agent.js';

API Reference
Agent Constructor
const agent = new Agent(provider, apiKey, options);

Parameters:
- provider (string): LLM provider name ('openai', 'gemini')
- apiKey (string): API key for the provider
- options (object): Configuration options
  - model (string): LLM model name (default: 'gpt-4o-mini')
  - tools (array): Native function tools
  - enableMCP (boolean): Enable MCP servers
  - inputSchema (Zod object): Input validation schema
  - outputSchema (Zod object): Output validation schema
Example:
import { Agent } from '@peebles-group/agentlib-js';
const agent = new Agent('openai', process.env.OPENAI_API_KEY, {
model: 'gpt-4o-mini',
tools: [],
enableMCP: true,
inputSchema: zodSchema,
outputSchema: zodSchema
});

LLM Providers
- OpenAI: gpt-4o-mini, gpt-4o, gpt-3.5-turbo
- Gemini: gemini-2.5-flash-lite
Input format follows OpenAI's message structure:
[{ role: 'user', content: 'Hello' }]

LLM Result Format
When calling an LLM, the result object has the following structure:
{
"id": "resp_67ccd2bed1ec8190b14f964abc0542670bb6a6b452d3795b",
"object": "response",
"created_at": 1741476542,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-2025-04-14",
"output": [
{
"type": "message",
"id": "msg_67ccd2bf17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "In a peaceful grove beneath a silver moon...",
"annotations": []
}
]
},
    {
      "id": "fc_0c7a9f052c2a6aec0068fa6e20bca0819abbc24ec38aad74dc",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"element\":\"Our Menu\",\"ref\":\"e222\",\"doubleClick\":false,\"button\":\"left\",\"modifiers\":[]}",
      "call_id": "call_iBNFPVHDsSH1UUGUIUM5uvCE",
      "name": "browser_click"
    }
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 36,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 87,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 123
},
"user": null,
"metadata": {}
}

Key Fields:
- output_text - The actual response text
- output_parsed - Structured response, ONLY when an output schema is present
- usage - Token consumption details
- model - The model used for the response
- status - Response status ("completed", "failed", etc.)
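Since the output array can mix message items and function_call items (as in the sample response above), it can be useful to separate the two. The helper below is an illustration only, not an AgentLib API; the name splitOutput is hypothetical.

```javascript
// Illustrative (not an AgentLib API): split a result's output array into
// assistant text and pending tool calls, following the sample result shape.
function splitOutput(result) {
  const texts = [];
  const toolCalls = [];
  for (const item of result.output ?? []) {
    if (item.type === 'message') {
      // A message item carries content parts; keep only the text parts
      for (const part of item.content) {
        if (part.type === 'output_text') texts.push(part.text);
      }
    } else if (item.type === 'function_call') {
      toolCalls.push({
        name: item.name,
        callId: item.call_id,
        args: JSON.parse(item.arguments), // arguments arrive as a JSON string
      });
    }
  }
  return { text: texts.join('\n'), toolCalls };
}
```

Applied to the sample response above, this would yield the assistant's text plus one browser_click call with its arguments parsed into an object.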
