
@ahzan-agentforge/core

v0.1.3

Production-grade AI agent framework — the Next.js for AI agents

@ahzan-agentforge/core

Production-grade AI agent framework. Define business logic. The framework handles execution, memory, observability, and reliability.

AgentForge owns the agent loop — LLM calls, tool execution, state checkpointing, crash recovery, cost governance, rollback, and full run tracing. You write tools and prompts. Everything else is handled.

Quick Start

npm install @ahzan-agentforge/core zod

import { defineAgent, defineTool, createLLM } from '@ahzan-agentforge/core'
import { z } from 'zod'

const getOrder = defineTool({
  name: 'get_order',
  description: 'Fetch order details by ID',
  input: z.object({ orderId: z.string() }),
  output: z.object({ id: z.string(), status: z.string(), total: z.number() }),
  execute: async ({ orderId }) => {
    return { id: orderId, status: 'shipped', total: 49.99 }
  },
})

const agent = defineAgent({
  name: 'support-agent',
  description: 'Customer support agent',
  tools: [getOrder],
  llm: createLLM({ provider: 'openai', model: 'gpt-4o', maxTokens: 4096 }),
  maxSteps: 15,
  systemPrompt: 'You are a customer support agent. Look up orders and help customers.',
})

const result = await agent.run({ task: 'Order #4521 arrived damaged' })
console.log(result.output) // "I've looked up order #4521..."

Why AgentForge

  • Own the loop, not the LLM. The LLM is a swappable dependency. The framework controls execution, retries, checkpointing, and tracing around every decision.
  • State is sacred. Every step is checkpointed. Kill the process at any point — resume from the last checkpoint with zero data loss.
  • Observability is not optional. Every LLM call, tool execution, token count, and timing is recorded via OpenTelemetry.
  • Framework errors != model errors. When something breaks, you know whether it's your code, the LLM, or a tool.
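The last bullet can be sketched as a small error-classification routine. The class names below are illustrative assumptions for the sketch, not the package's exported types:

```typescript
// Illustrative sketch only: distinguishing where a failure originated.
// ToolError and LLMError are hypothetical names, not AgentForge exports.
class ToolError extends Error {
  constructor(public toolName: string, message: string) { super(message) }
}
class LLMError extends Error {}

function classify(err: unknown): 'tool' | 'llm' | 'framework' {
  if (err instanceof ToolError) return 'tool'   // a tool's execute() threw
  if (err instanceof LLMError) return 'llm'     // the provider call failed
  return 'framework'                            // anything else is our bug
}

console.log(classify(new ToolError('get_order', 'timeout')))        // 'tool'
console.log(classify(new LLMError('rate limited')))                 // 'llm'
console.log(classify(new TypeError('undefined is not a function'))) // 'framework'
```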

Features

LLM Providers

createLLM({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' })
createLLM({ provider: 'openai', model: 'gpt-4o' })
createLLM({ provider: 'gemini', model: 'gemini-2.0-flash' })
createLLM({ provider: 'ollama', model: 'llama3.1' })  // local, zero cost

All providers support token-level streaming via agent.stream().

Tool System

  • Zod schema validation on input AND output
  • Configurable retry with exponential/linear/fixed backoff
  • Per-tool timeout enforcement
  • Result caching for idempotent tools
  • Compensating actions for rollback support
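The three backoff strategies above can be illustrated with a self-contained delay calculator. This is a sketch of the concept, not the package's actual retry implementation:

```typescript
// Illustrative only: how exponential/linear/fixed backoff delays grow,
// capped at a maximum per-attempt delay.
type BackoffStrategy = 'exponential' | 'linear' | 'fixed'

function backoffDelay(
  strategy: BackoffStrategy,
  attempt: number,  // 1-based retry attempt
  baseMs: number,   // initial delay
  maxMs: number     // cap on any single delay
): number {
  const raw =
    strategy === 'exponential' ? baseMs * 2 ** (attempt - 1) : // 100, 200, 400, ...
    strategy === 'linear'      ? baseMs * attempt :            // 100, 200, 300, ...
    baseMs                                                     // 100, 100, 100, ...
  return Math.min(raw, maxMs)
}

console.log(backoffDelay('exponential', 3, 100, 5000))  // 400
console.log(backoffDelay('linear', 3, 100, 5000))       // 300
console.log(backoffDelay('exponential', 10, 100, 5000)) // capped at 5000
```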

Streaming

for await (const event of agent.stream({ task: 'Help customer' })) {
  if (event.type === 'llm_token') process.stdout.write(event.content)
  if (event.type === 'tool_start') console.log(`Calling ${event.toolName}...`)
  if (event.type === 'done') console.log(`\nResult: ${event.result.output}`)
}

State & Recovery

  • InMemoryStateStore — for development and testing
  • RedisStateStore — production checkpointing with ioredis
  • Resume any run from its last checkpoint via runId
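The checkpoint-and-resume idea can be sketched with a minimal in-memory store. The shape below is an assumption for illustration; the real InMemoryStateStore and RedisStateStore APIs may differ:

```typescript
// Minimal sketch of checkpointing: each step's state is saved under its
// runId, and a crashed run resumes from the most recent checkpoint
// instead of replaying from scratch.
interface Checkpoint {
  runId: string
  step: number
  state: Record<string, unknown>
}

class SketchStateStore {
  private checkpoints = new Map<string, Checkpoint[]>()

  save(cp: Checkpoint): void {
    const list = this.checkpoints.get(cp.runId) ?? []
    list.push(cp)
    this.checkpoints.set(cp.runId, list)
  }

  // Resume point: the most recent checkpoint for a run.
  latest(runId: string): Checkpoint | undefined {
    const list = this.checkpoints.get(runId)
    return list?.[list.length - 1]
  }
}

const store = new SketchStateStore()
store.save({ runId: 'run-1', step: 1, state: { toolCalls: 1 } })
store.save({ runId: 'run-1', step: 2, state: { toolCalls: 2 } })
// After a crash, pick up at step 2:
console.log(store.latest('run-1')?.step) // 2
```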

Memory

  • InMemoryMemoryStore — development and testing
  • PgVectorMemoryStore — production long-term memory with embedding-based retrieval
  • Auto-capture of run outcomes, errors, and tool results
  • Memory consolidation to manage growth
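Embedding-based retrieval, as a vector memory store performs it, amounts to ranking stored memories by similarity to a query embedding. A toy sketch with 3-dimensional vectors (real embeddings have hundreds of dimensions, and pgvector does this ranking in SQL):

```typescript
// Sketch of embedding-based retrieval: rank memories by cosine
// similarity to the query embedding and return the best match.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

const memories = [
  { text: 'order #4521 was refunded', embedding: [0.9, 0.1, 0.0] },
  { text: 'customer prefers email',   embedding: [0.0, 0.2, 0.9] },
]

// Toy embedding of "what happened to order 4521?"
const query = [0.8, 0.2, 0.1]

const best = memories
  .map(m => ({ ...m, score: cosine(m.embedding, query) }))
  .sort((x, y) => y.score - x.score)[0]

console.log(best.text) // 'order #4521 was refunded'
```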

Cost Governor & Autonomy Policy

  • Per-run token and cost budget limits with built-in model pricing
  • Per-tool allow/deny/escalate rules
  • Escalation thresholds on cost and step count
  • Rate limiting per tool
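The budget check reduces to simple arithmetic over token counts and per-million-token pricing. The rates below are made-up placeholders, not the package's built-in pricing table:

```typescript
// Illustrative budget check: estimate a run's cost from token counts,
// then compare against a per-run limit. Prices are placeholder values.
interface Pricing { inputPerM: number; outputPerM: number } // USD per 1M tokens

function estimateCost(inputTokens: number, outputTokens: number, p: Pricing): number {
  return (inputTokens / 1_000_000) * p.inputPerM +
         (outputTokens / 1_000_000) * p.outputPerM
}

const pricing: Pricing = { inputPerM: 3, outputPerM: 15 } // placeholder rates
const cost = estimateCost(50_000, 10_000, pricing)
console.log(cost.toFixed(2)) // "0.30"

const budgetUsd = 0.25
if (cost > budgetUsd) {
  // A cost governor would halt the run or escalate to a human here.
  console.log('budget exceeded: escalate')
}
```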

Multi-Agent Coordination

import { pipeline, parallel, supervisor, debate } from '@ahzan-agentforge/core'

const result = await pipeline([researcher, writer, editor], { task: 'Write a report' })
const result = await supervisor(manager, { researcher, writer }, { task: 'Complete project' })
const result = await parallel([agent1, agent2], { task: 'Analyze data' }, mergeResults)
const result = await debate([optimist, pessimist], { task: 'Evaluate proposal' }, judge)

Observability

  • OpenTelemetry spans for runs, LLM calls, and tool executions
  • Metrics: run duration, token usage, tool call counts, error rates, estimated cost
  • Export to any OTLP-compatible backend (Jaeger, Grafana, Datadog)

Testing

import { createMockLLM, createTestHarness } from '@ahzan-agentforge/core'

const harness = createTestHarness({
  agent: myAgentConfig,
  llm: createMockLLM({
    responses: [
      { toolCalls: [{ name: 'get_order', input: { orderId: '4521' } }] },
      { text: 'Order found and ticket created.' },
    ],
  }),
})

const result = await harness.run({ task: 'Handle order issue' })
expect(result.status).toBe('completed')
expect(result.toolCalls('get_order')).toHaveLength(1)

Pre-built Integrations

Ready-to-use tool factories for common services:

  • HTTP: createHttpGetTool(), createHttpPostTool()
  • Slack: createSlackTools()
  • GitHub: createGitHubTools()
  • Gmail: createGmailTools()
  • Stripe: createStripeTools()
  • Notion: createNotionTools()
  • Supabase: createSupabaseTools()
  • WooCommerce: createWooCommerceTools()
  • Database: createDatabaseTool() (SQL queries)

Agent Templates

Pre-configured agents you can customize:

import { createCustomerSupportAgent, createResearchAssistant } from '@ahzan-agentforge/core'

const support = createCustomerSupportAgent({ llm, tools: [orderLookup] })
const researcher = createResearchAssistant({ llm, tools: [webSearch] })

Also available: createDataProcessor, createCodeReviewer, createWorkflowAutomator

MCP (Model Context Protocol)

import { MCPClient, MCPServer, toMCPTool, fromMCPTool } from '@ahzan-agentforge/core'

// Expose AgentForge tools as MCP tools
const server = new MCPServer({ tools: [getOrder], name: 'my-agent' })

// Use external MCP tools in your agent
const client = new MCPClient({ serverUrl: 'http://localhost:3001' })

CLI

npx agentforge init my-agent               # scaffold a new project
npx agentforge run src/agent.ts --task "…"  # execute an agent
npx agentforge run src/agent.ts --mock      # run with mock LLM (no API cost)
npx agentforge trace <runId>                # inspect a past run
npx agentforge replay <runId> --diff        # re-execute and compare
npx agentforge dev --agent src/agent.ts     # dev server with hot reload

Configuration

import { defineConfig } from '@ahzan-agentforge/core'

export default defineConfig({
  defaultLLM: {
    provider: 'anthropic',
    model: 'claude-sonnet-4-20250514',
    maxTokens: 4096,
  },
  state: {
    provider: 'redis',
    redis: { url: 'redis://localhost:6379', ttl: 604800 },
  },
  trace: {
    outputDir: '.agentforge/traces',
    retainDays: 30,
  },
})

Requirements

  • Node.js >= 22
  • Redis (optional, for persistent state)
  • PostgreSQL + pgvector (optional, for long-term memory)

License

MIT