@agentrun-ai/core

Core runtime for AgentRun: orchestrator, RBAC, agent runners, platform abstraction, and catalog system.

Zero cloud dependencies — all infrastructure concerns are pluggable interfaces. @agentrun-ai/aws and @agentrun-ai/gcp provide production-ready implementations.

Installation

npm install @agentrun-ai/core

Quick Start

import { setProviderRegistrar, bootstrapPlatform, processRequest } from "@agentrun-ai/core";
import { registerGcpProviders } from "@agentrun-ai/gcp";
import { SlackChannelAdapter } from "@agentrun-ai/channel-slack";

setProviderRegistrar(registerGcpProviders);
await bootstrapPlatform();

const adapter = new SlackChannelAdapter();
await processRequest(adapter, {
    userId: "U12345",
    channelId: "C12345",
    text: "show cluster status",
});

Core Concepts

Platform Registry

AgentRun's dependency injection system. Register provider implementations at startup:

import { setProviderRegistrar, bootstrapPlatform } from "@agentrun-ai/core";

// AWS
import { registerAwsProviders } from "@agentrun-ai/aws";
setProviderRegistrar(registerAwsProviders);

// or GCP
import { registerGcpProviders } from "@agentrun-ai/gcp";
setProviderRegistrar(registerGcpProviders);

await bootstrapPlatform();

Model Router (v0.4.0)

Automatically select optimal LLM models based on query complexity and role permissions:

import { selectModel, classifyComplexity } from "@agentrun-ai/core";

// Complexity classification (zero-cost heuristics)
const complexity = classifyComplexity("show cluster status");
// → "simple"

// RBAC-gated model selection
const models = {
    fast: { provider: "vertex", modelId: "gemini-1.5-flash", capability: "fast" /* ...other fields */ },
    pro: { provider: "vertex", modelId: "gemini-2.0-pro", capability: "advanced" /* ...other fields */ },
};

const selection = selectModel("analyze performance bottlenecks", models, ["fast", "pro"]);
// → { name: "pro", model: {...}, reason: "complex query → advanced model (pro)" }

Complexity Tiers:

  • simple → "list status", "show prs", fact lookups → fast models
  • moderate → multi-step synthesis
  • complex → architecture design, impact analysis → advanced models
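
For instance, the classifier can be probed directly (a sketch; the "moderate" result below is an assumption based on the tier descriptions above):

import { classifyComplexity } from "@agentrun-ai/core";

classifyComplexity("list status");                       // → "simple"
classifyComplexity("summarize deploys across services"); // → "moderate" (assumed)
classifyComplexity("analyze performance bottlenecks");   // → "complex"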

Generic Agent Runner (v0.4.0)

Model-agnostic function calling with any LLM provider (Gemini, GPT, Ollama, etc.):

import { processGenericQuery } from "@agentrun-ai/core";
import { createOpenAICaller } from "@agentrun-ai/core";

const openaiCaller = createOpenAICaller({
    baseUrl: "https://api.openai.com",
    defaultModel: "gpt-4o",
    resolveToken: async (userId) => {
        // Return per-user token from your token store
        return await tokenStore.getToken(userId, "openai");
    },
});

const result = await processGenericQuery(
    "show cluster status",
    "U12345",
    "google",
    {
        callLlm: openaiCaller,
        executeTool: myToolExecutor,
        evaluatorConfig: {
            enabled: true,
            criteria: [
                { name: "factual_accuracy", weight: 0.4 },
                { name: "completeness", weight: 0.3 },
            ],
        },
        resourcesOverride: [
            {
                type: "project",
                name: "my-team-project",
                description: "Team's main project",
                defaultParameter: "project_id"
            }
        ],
        systemPromptAppend: "Additional context: this user is part of the platform team."
    }
);

OpenAI-Compatible Caller (v0.4.0)

Generic LLM caller for any OpenAI-compatible API:

import { createOpenAICaller } from "@agentrun-ai/core";

const caller = createOpenAICaller({
    baseUrl: "https://your-gateway.example.com",
    defaultModel: "gemini-2.0-flash",
    resolveToken: async (userId) => tokenStore.getToken(userId, "gateway"),
    timeoutMs: 60000,
});

const response = await caller({
    systemPrompt: "You are a helpful assistant.",
    contents: [{ role: "user", parts: [{ text: "hello" }] }],
    tools: toolDeclarations,
    userId: "U12345",
});

Works with:

  • OpenAI API
  • Self-hosted gateways
  • Local LLM servers (Ollama, vLLM)
  • Any OpenAI-compatible endpoint
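
For example, a local Ollama server can stand in for the hosted API (a sketch; it assumes the caller appends the standard /v1/chat/completions path, as the OpenAI example above implies, and that the local server ignores the token):

import { createOpenAICaller } from "@agentrun-ai/core";

const localCaller = createOpenAICaller({
    baseUrl: "http://localhost:11434",   // Ollama's default port; path handling assumed to match the hosted example
    defaultModel: "llama3.1",            // any model pulled into your local Ollama
    resolveToken: async () => "unused",  // local servers typically ignore the bearer token
});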

Per-User/Team Context Override (v0.4.0)

Override resource context per user or team without modifying platform config:

const config: GenericAgentConfig = {
    // ... other config
    resourcesOverride: [
        {
            type: "gitlab",
            name: "team-alpha/service",
            description: "Team Alpha's microservice",
            defaultParameter: "project_id"
        },
        {
            type: "jira",
            name: "TEAM",
            description: "Team's Jira project",
            defaultParameter: "project_key"
        }
    ],
    systemPromptAppend: "Additional context: user is contractor, read-only access."
};

Use cases:

  • Multi-tenant SaaS: each customer sees their own resources
  • Team-specific tooling: different defaults per squad
  • Role-based context: append instructions based on permissions
  • Dynamic resource routing: resolve resources from user metadata
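
A sketch of the last case, resolving overrides from user metadata (userDirectory is a hypothetical lookup, not part of @agentrun-ai/core):

import type { GenericAgentConfig } from "@agentrun-ai/core";

// Hypothetical metadata store: swap in your own directory service.
declare const userDirectory: {
    lookup(userId: string): Promise<{ team: string; project: string }>;
};

async function overridesForUser(userId: string): Promise<Partial<GenericAgentConfig>> {
    const { team, project } = await userDirectory.lookup(userId);
    return {
        resourcesOverride: [
            {
                type: "gitlab",
                name: `${team}/${project}`,
                description: `${team}'s primary repository`,
                defaultParameter: "project_id",
            },
        ],
        systemPromptAppend: `User belongs to team ${team}.`,
    };
}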

The LLM receives instructions like:

## Tool Parameter Defaults
- When calling gitlab-* tools, ALWAYS use project_id="team-alpha/service"
- When calling jira-* tools, ALWAYS use project_key="TEAM"

Architecture

| Layer | Responsibility |
|-------|----------------|
| Catalog | Tool/workflow/skill/KB registry and routing |
| Identity | User/role resolution from channel sources |
| RBAC | Role-based access filtering to tools and use-cases |
| Execution | direct (deterministic), agent (Claude SDK), generic (model-agnostic) |
| Evaluation | Optional response quality scoring pre-delivery |
| Platform | Pluggable provider interfaces (LLM, session, secrets, storage) |

Provider Interfaces

Every infrastructure concern is a TypeScript interface:

| Interface | Purpose |
|-----------|---------|
| LlmProvider | LLM completions and summarization |
| SessionStore | Conversation history persistence |
| UsageStore | Token and invocation tracking |
| ManifestStore | Pack manifest storage |
| QueueProvider | Async message dispatch |
| BootstrapSecretProvider | Secret retrieval at startup |
| EmbeddingProvider | Text embeddings for RAG |
| VectorStore | Vector similarity search |
| KnowledgeBaseProvider | Managed RAG retrieval |

Implement these to support a new cloud provider.
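
As an illustration, an in-memory SessionStore could look like the sketch below. The method names here are assumptions for illustration, not the actual interface; consult the exported types for the real contract.

// Illustrative only: method names/signatures are assumed, not the
// actual SessionStore contract.
class InMemorySessionStore {
    private sessions = new Map<string, unknown[]>();

    async getHistory(sessionId: string): Promise<unknown[]> {
        return this.sessions.get(sessionId) ?? [];
    }

    async appendMessage(sessionId: string, message: unknown): Promise<void> {
        const history = this.sessions.get(sessionId) ?? [];
        history.push(message);
        this.sessions.set(sessionId, history);
    }
}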

Configuration

Define tools, workflows, roles, and models in a manifest:

spec:
  models:
    fast:
      provider: vertex-ai
      modelId: gemini-1.5-flash
      capability: fast
      inputCostPer1kTokens: 0.00075
      outputCostPer1kTokens: 0.003

    advanced:
      provider: vertex-ai
      modelId: gemini-2.0-pro
      capability: advanced
      inputCostPer1kTokens: 0.01
      outputCostPer1kTokens: 0.03

  roles:
    engineer:
      models: [fast, advanced]
    analyst:
      models: [fast]
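
That roles section maps directly onto the Model Router exports. A sketch (reusing ModelDef-style records like the Model Router example above; the constrained-selection behavior for the analyst is an assumption):

import { selectModel } from "@agentrun-ai/core";

const models = {
    fast: { provider: "vertex-ai", modelId: "gemini-1.5-flash", capability: "fast" /* ...costs */ },
    advanced: { provider: "vertex-ai", modelId: "gemini-2.0-pro", capability: "advanced" /* ...costs */ },
};

// engineer: both models allowed, so a complex query routes to "advanced"
selectModel("design a migration plan", models, ["fast", "advanced"]);

// analyst: only "fast" is permitted; selection stays within the allowed set
// (assumed behavior for queries above the role's model tier)
selectModel("design a migration plan", models, ["fast"]);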

Exports

Model Router:

  • selectModel(query, models, allowedNames) → ModelSelection
  • classifyComplexity(query) → QueryComplexity
  • getModelsForRole(models, allowedNames) → model list
  • Types: ModelSelection, QueryComplexity, ModelDef, ModelCapability

Generic Runner:

  • processGenericQuery(query, userId, source, config) → AgentResult
  • createOpenAICaller(config) → callLlm function
  • Types: GenericAgentConfig, OpenAICallerConfig

Core:

  • bootstrapPlatform() — Initialize platform from config
  • setProviderRegistrar(fn) — Register provider implementations
  • processRequest(adapter, event) — Process channel event
  • Types: PlatformConfig, PlatformRegistry
