
@mhingston5/atlas

v1.1.0


Atlas

Local-first personal AI gateway with built-in quality verification

Atlas transforms AI from a black box into a systematic quality system. It runs AI-powered workflows that produce verified artifacts—not just outputs, but durable, traceable work products you can trust, search, and build upon.


Why Atlas

  • Quality-first: Every artifact is verified against explicit criteria (ISC)
  • Traceable: Full provenance from sources → workflows → artifacts
  • Self-improving: Systematic reflection and learning from every execution
  • Local & Private: Your data stays on your machine
  • Composable: Workflows chain together (source → entity → workflow → artifact → workflow)

Quick Start

# Install dependencies (skip if you've already installed them)
bun install

# Start the gateway
bun run dev

The server starts on http://localhost:3000 using the mock LLM provider by default (no API keys required).

Core Concepts

  • Artifacts: Durable outputs with automatic quality verification
  • Entities/Events: Source-ingested records with full change history
  • Workflows: AI or deterministic jobs that generate verified artifacts
  • Routing: Profile-based LLM selection with fallbacks and effort-based budgets
  • ISC (Ideal State Criteria): Explicit, testable quality criteria for every artifact
  • Reflection: Auto-generated learning from every workflow execution
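
As a rough mental model, the two records you interact with most might look like the TypeScript sketch below; the field names are inferred from the API examples later in this README and are assumptions, not Atlas's actual schema:

// Hypothetical shapes, inferred from emitArtifact() and the /jobs example below.
interface Artifact {
  type: string;        // artifact type id, e.g. "summary.note.v1"
  content_md: string;  // the durable markdown output
}

interface Job {
  workflow_id: string;             // e.g. "brainstorm.v1"
  input: Record<string, unknown>;  // workflow-specific input
}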

Common Workflows

  • Brainstorming: brainstorm.v1
  • Scratchpad synthesis: scratchpad.v1
  • Weekly digest: digest.weekly.v1
  • Curation (promote/merge/tag/dedupe/reconcile): curate.artifacts.v1
  • Heartbeat (periodic check-in): heartbeat.v1
  • Skills inventory: skills.inventory.v1

Full workflow docs and examples: docs/

Configuration

Environment Variables (common)

| Variable | Description | Default |
|----------|-------------|---------|
| PORT | HTTP server port | 3000 |
| ATLAS_DB_PATH | SQLite database path | data/atlas.db |
| ATLAS_LLM_PROVIDER | LLM provider preset or custom | mock |
| ATLAS_LLM_PROVIDER_FALLBACK | Fallback when provider unavailable | error |
| ATLAS_HARNESS_ENABLED | Enable harness runtime | false |
| ATLAS_REQUIRE_APPROVAL_BY_DEFAULT | Require approval for all workflows unless they explicitly succeed/fail | false |
| ATLAS_MEMORY_PATHS | Comma-separated memory file paths | MEMORY.md,memory |
| ATLAS_OPENAI_API_ENABLED | Enable OpenAI-compatible chat API | true |

Full reference: docs/configuration.md

LLM Providers

Atlas supports any Vercel AI SDK provider. Install only the provider you need, then set ATLAS_LLM_PROVIDER.

Examples:

# Mock (default)
ATLAS_LLM_PROVIDER=mock bun run dev

# OpenAI
bun add @ai-sdk/openai
export ATLAS_LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-...
bun run dev

# Custom (any AI SDK provider)
bun add @ai-sdk/google
export ATLAS_LLM_PROVIDER=custom
export ATLAS_LLM_PACKAGE=@ai-sdk/google
export ATLAS_LLM_FACTORY=createGoogleGenerativeAI
export ATLAS_LLM_MODEL=gemini-pro
bun run dev
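
Under the hood, custom mode presumably amounts to something like the following TypeScript sketch. This is an assumption about the mechanism, not Atlas's actual code; the factory → provider → model call chain is the standard Vercel AI SDK pattern:

// Assumed mechanism only; Atlas's real loader may differ.
// Dynamically import the configured package, call the named factory to get
// a provider, then call the provider with the configured model id.
const pkg = await import(process.env.ATLAS_LLM_PACKAGE!);  // e.g. "@ai-sdk/google"
const factory = pkg[process.env.ATLAS_LLM_FACTORY!];       // e.g. createGoogleGenerativeAI
const provider = factory();                                // API key read from env by the provider
const model = provider(process.env.ATLAS_LLM_MODEL!);      // e.g. "gemini-pro"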

Routing profiles and custom provider selection: docs/provider-routing.md

API Quick Examples

# Health
curl http://localhost:3000/health

# Sync sources
curl -X POST http://localhost:3000/sync

# Create a job
curl -X POST http://localhost:3000/jobs \
  -H "Content-Type: application/json" \
  -d '{"workflow_id":"brainstorm.v1","input":{"topic":"productivity"}}'

OpenAI-Compatible API

Atlas provides an OpenAI-compatible chat completions API for interacting with workflows:

# Enable the API (default: true)
export ATLAS_OPENAI_API_ENABLED=true

# List available models
curl http://localhost:3000/v1/models

# Chat with Atlas
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "atlas-scratchpad",
    "messages": [{"role": "user", "content": "What should I focus on?"}]
  }'

# Or use with OpenAI SDK
# base_url: http://localhost:3000/v1
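
For example, here is a minimal TypeScript sketch using the official openai npm client. Whether Atlas validates the API key locally is an assumption; the placeholder below presumes it does not:

import OpenAI from "openai";

// Point the official client at the local Atlas gateway instead of api.openai.com.
const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "unused", // assumption: a local Atlas gateway ignores the key
});

const completion = await client.chat.completions.create({
  model: "atlas-scratchpad",
  messages: [{ role: "user", content: "What should I focus on?" }],
});

console.log(completion.choices[0].message.content);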

Full API docs: docs/openai-api.md

Full API examples: docs/api-examples.md and docs/curation.md

Architecture Snapshot

Sources → Entities/Events → Workflows → Artifacts → Other Workflows → Sinks

Versioning overview: docs/VERSIONING.md

Quality Features (PAI Integration)

Atlas includes systematic quality management through Ideal State Criteria (ISC)—a methodology for defining and verifying what "good" means for every artifact.

How It Works

  1. Define Criteria: Each artifact type has explicit quality criteria (e.g., "Summaries must have 3-5 key points")
  2. Automatic Verification: Artifacts are verified before emission using CLI, Grep, or Custom verifiers
  3. Fail-Closed: CRITICAL failures block artifact emission until fixed (see the sketch after this list)
  4. Learn: Systematic Q1/Q2/Q3 reflection improves future executions
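
To make the fail-closed rule concrete, here is a minimal TypeScript sketch; the names (VerificationResult, canEmit) are illustrative stand-ins, not Atlas's actual internals:

// Illustrative only: block emission when any CRITICAL criterion fails.
type Priority = "CRITICAL" | "IMPORTANT";

interface VerificationResult {
  criterionId: string; // e.g. "ISC-SUM-001"
  priority: Priority;
  passed: boolean;
}

function canEmit(results: VerificationResult[]): boolean {
  // Fail-closed: every CRITICAL criterion must pass; IMPORTANT failures warn only.
  return results.every((r) => r.passed || r.priority !== "CRITICAL");
}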

Effort-Based Quality

Control quality by choosing effort level:

| Level | Budget | Quality | Use Case |
|-------|--------|---------|----------|
| INSTANT | <10s | Basic | Quick answers |
| FAST | <1min | Standard | Simple tasks |
| STANDARD | <2min | High (default) | Daily work |
| EXTENDED | <8min | Very High | Important docs |
| COMPREHENSIVE | <120min | Maximum | Deep investigations |

// gateway.routing.json
{
  "profiles": {
    "balanced": {
      "effortLevel": "STANDARD",
      "providers": ["openai"],
      "models": ["gpt-4o"]
    }
  }
}

Example: Verified Summaries

// Summary criteria ensure quality
export const summaryISC: ISCDefinition = {
  artifactType: "summary.note.v1",
  idealCriteria: [
    { id: "ISC-SUM-001", criterion: "Captures 3-5 key points",
      priority: "CRITICAL", verify: { type: "CUSTOM", description: "Count key points" } },
    { id: "ISC-SUM-002", criterion: "Has source attributions",
      priority: "CRITICAL", verify: { type: "GREP", pattern: "\\[.*?\\]" } },
    { id: "ISC-SUM-003", criterion: "100-300 words",
      priority: "IMPORTANT", verify: { type: "CLI", command: "wc -w" } }
  ],
  antiCriteria: [
    { id: "ISC-A-SUM-001", criterion: "No hallucinated facts",
      priority: "CRITICAL", verify: { type: "CUSTOM", description: "Cross-reference sources" } }
  ]
};

Results: Summaries that provably meet criteria, not just "look good."

Persistent Documentation (PRDs)

Every artifact gets a PRD—requirements stored as markdown with YAML frontmatter:

  • Intent: What problem this solves
  • Constraints: What must be true
  • Decisions: Why we chose this approach
  • Iteration log: How we got here

# View PRD for any artifact
curl http://localhost:3000/api/v1/artifacts/{id}/prd
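
As a rough illustration of the format, a PRD might look like the sketch below; the field names and values are hypothetical, inferred only from the four bullets above:

---
# Hypothetical frontmatter; Atlas's real field names may differ.
intent: "Produce a weekly summary of captured notes"
constraints:
  - "3-5 key points"
  - "100-300 words"
decisions:
  - "Verify source attributions with a GREP criterion"
iteration_log:
  - "run 1: failed ISC-SUM-001, regenerated"
---

(artifact requirements and notes in markdown)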

Systematic Learning

After each STANDARD+ execution, Atlas generates a reflection:

  • Q1: What would I do differently?
  • Q2: What would a smarter workflow do?
  • Q3: What would a smarter Atlas do?

Query reflections to identify patterns:

curl "http://localhost:3000/api/v1/reflections?workflow_id=brainstorm.v1"

Adding ISC to Your Workflows

See docs/pai-integration.md for the full guide. Quick start:

export const myWorkflow: WorkflowPlugin = {
  id: "my.workflow.v1",
  isc: myWorkflowISC,  // Attach criteria
  async run(ctx, input, jobId) {
    // Verification happens automatically on emitArtifact()
    ctx.emitArtifact({ type: "my.artifact", content_md: result });
  }
};

Documentation

Getting Started:

  • docs/README.md - Overview and concepts
  • docs/getting-started.md - Step-by-step setup
  • docs/configuration.md - Environment variables and routing

Building Workflows:

  • docs/workflow-authoring.md - Creating custom workflows
  • docs/pai-integration.md - Adding quality criteria (ISC)
  • docs/curation.md - Artifact management and deduplication

APIs:

  • docs/openai-api.md - OpenAI-compatible chat API
  • docs/api-examples.md - API usage examples

Architecture:

  • docs/ARCHITECTURE.md - System design
  • docs/VERSIONING.md - Version compatibility
  • docs/alignment-checklist.md - Safety considerations

What Makes Atlas Different

vs. ChatGPT/Claude Web UI:

  • Atlas produces durable, verifiable artifacts you can search and reuse
  • Traceable: Every output has provenance (sources → workflow → criteria → reflection)
  • Systematic: Not ad-hoc; every artifact is verified against explicit criteria

vs. Other AI Workflow Tools:

  • Quality-first: Built-in verification prevents "garbage in, garbage out"
  • Self-improving: Systematic reflection learns from every execution
  • Local-first: Your data never leaves your machine
  • Deterministic: Same inputs → same verification → reproducible outputs

vs. Traditional Pipelines:

  • AI-native: Built for LLMs, not retrofitted
  • Flexible: Criteria can use LLMs for subjective evaluation
  • Composable: Workflows chain together naturally
  • Human-in-loop: Checkpoints for approval, not autonomous agents

License

Private project