
create-web-ai-service

v1.0.7

CLI scaffolder for creating new web-ai-service projects

Downloads: 45

Readme

Web AI Service

A TypeScript-based workflow engine that creates dynamic API endpoints from YAML workflow definitions. Build powerful AI-powered APIs with LLM calls, custom code execution, and data transformations—all without writing server boilerplate.

Features

  • 🚀 YAML-Based Configuration - Define API endpoints declaratively
  • 🤖 Multi-LLM Support - Built-in support for Gemini, OpenAI, Anthropic, and Grok
  • 📝 Custom Code Nodes - Execute TypeScript functions in your workflows
  • Parallel Execution - Run multiple nodes concurrently with error strategies
  • 🔄 Data Transformation - Reduce, split, and map data with JSONPath
  • Input Validation - JSON Schema validation on request inputs
  • 🎯 Type-Safe - Full TypeScript support with strict typing
  • 🌐 Auto-Routing - Endpoint folders automatically become API routes
  • 🔌 Plugin System - Extensible with Supabase and custom plugins

Table of Contents

  1. Quick Start
  2. Project Structure
  3. Creating Endpoints
  4. Node Types
  5. Using Plugins
  6. Configuration
  7. Commands Reference
  8. Troubleshooting
  9. Documentation

Quick Start

Option 1: Create a New Project (Recommended)

The easiest way to start is with the scaffolder:

npx create-web-ai-service

You'll be prompted to:

  1. Enter your project name - e.g., my-api
  2. Select plugins - Choose from available plugins like Supabase

Or use command-line arguments for non-interactive setup:

npx create-web-ai-service my-api --plugins supabase

After scaffolding:

cd my-api
cp .env.example .env      # Configure your API keys
npm run dev               # Start the server

Your API is now running at http://localhost:3000!

Option 2: Install as a Dependency

Add to an existing project:

npm install web-ai-service

Option 3: Global Installation

npm install -g web-ai-service
web-ai-service   # Run from any directory with a src/endpoints folder

Project Structure

When you create a new project, you'll get this structure:

my-api/
├── src/
│   ├── endpoints/              # Your API endpoints
│   │   └── hello/              # Example: GET /hello
│   │       ├── GET.yaml        # Workflow definition
│   │       ├── codes/          # TypeScript code nodes
│   │       │   └── format-greeting.ts
│   │       └── prompts/        # LLM system prompts
│   │           └── greeting-system.txt
│   │
│   └── plugins/                # Shared code modules
│       └── supabase.ts         # (if selected during setup)
│
├── .env                        # Your API keys (gitignored)
├── .env.example                # Template for environment variables
├── package.json
└── tsconfig.json

Key Concepts

| Concept  | Description |
|----------|-------------|
| Endpoint | A folder in src/endpoints/ that becomes an API route |
| Workflow | A YAML file (e.g., POST.yaml, GET.yaml) defining the processing pipeline |
| Stage    | A sequential step in the workflow containing one or more nodes |
| Node     | An individual processing unit (LLM call, code execution, etc.) |

How Routing Works

| Folder Path | HTTP Method | API Route |
|-------------|-------------|-----------|
| src/endpoints/hello/GET.yaml | GET | /hello |
| src/endpoints/summarize/POST.yaml | POST | /summarize |
| src/endpoints/users/profile/GET.yaml | GET | /users/profile |
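The convention above is simple enough to sketch in code. The helper below is purely illustrative (it is not part of web-ai-service's actual implementation): the file path under src/endpoints/ yields the HTTP method from the filename and the route from the folder segments.

```typescript
// Illustrative sketch of the routing convention only; the real engine
// may resolve routes differently.
function resolveRoute(workflowPath: string): { method: string; route: string } {
    const relative = workflowPath.replace(/^src\/endpoints\//, "");
    const segments = relative.split("/");
    const file = segments.pop()!;                 // e.g. "GET.yaml"
    const method = file.replace(/\.yaml$/, "");   // e.g. "GET"
    return { method, route: "/" + segments.join("/") };
}

console.log(resolveRoute("src/endpoints/users/profile/GET.yaml"));
// { method: "GET", route: "/users/profile" }
```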


Creating Endpoints

Basic Example: Text Summarization

Create a POST endpoint at /summarize:

1. Create the folder structure:

mkdir -p src/endpoints/summarize/{codes,prompts}

2. Create the system prompt (src/endpoints/summarize/prompts/system.txt):

You are a concise summarization assistant. Summarize the provided text clearly in 2-3 paragraphs.

3. Create the workflow (src/endpoints/summarize/POST.yaml):

version: "1.0"

stages:
  - name: main
    nodes:
      summarize:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        temperature: 0.3
        maxTokens: 1024
        systemMessages:
          - file: system.txt

4. Test it:

curl -X POST http://localhost:3000/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Long text to summarize..."}'

Adding Input Validation with Code Nodes

Create a code node to validate inputs before processing:

src/endpoints/summarize/codes/validate.ts:

import type { NodeOutput } from '@workflow/types';

interface SummarizeInput {
    text?: string;
}

export default async function(input: unknown): Promise<NodeOutput> {
    const body = input as SummarizeInput;
    
    if (!body.text || typeof body.text !== 'string') {
        throw new Error('Missing required field: text');
    }
    
    if (body.text.length < 10) {
        throw new Error('Text must be at least 10 characters');
    }
    
    return { type: 'string', value: body.text };
}

Updated workflow with validation stage:

version: "1.0"

stages:
  - name: validate
    nodes:
      check_input:
        type: code
        input: $input
        file: validate.ts

  - name: summarize
    nodes:
      summary:
        type: llm
        input: validate.check_input    # Reference previous node output
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: system.txt

Multi-Stage Workflow Example

Chain multiple processing stages:

version: "1.0"

stages:
  - name: extract
    nodes:
      parse_data:
        type: code
        input: $input
        file: extract-data.ts

  - name: analyze
    nodes:
      analyze_content:
        type: llm
        input: extract.parse_data
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: analyzer-prompt.txt

  - name: format
    nodes:
      format_response:
        type: code
        input: analyze.analyze_content
        file: format-output.ts

Parallel Node Execution

Run multiple LLM calls simultaneously within a stage:

stages:
  - name: parallel_analysis
    nodes:
      sentiment:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: sentiment-prompt.txt
      
      keywords:
        type: llm
        input: $input.text
        provider: openai
        model: gpt-4o-mini
        systemMessages:
          - file: keywords-prompt.txt

  - name: combine
    nodes:
      merge:
        type: reduce
        inputs:
          - parallel_analysis.sentiment
          - parallel_analysis.keywords
        mapping:
          sentiment: $.0
          keywords: $.1

Node Types

LLM Node

Call an LLM provider:

my_llm_node:
  type: llm
  input: $input.text              # or reference: stageName.nodeName
  provider: gemini                # gemini | openai | anthropic | grok
  model: gemini-2.0-flash-lite
  temperature: 0.7                # Optional (0.0-1.0)
  maxTokens: 1024                 # Optional
  systemMessages:
    - file: prompt.txt
      cache: true                 # Cache for performance

Supported Providers & Models:

| Provider | Example Models |
|----------|----------------|
| gemini | gemini-2.0-flash-lite, gemini-2.0-flash, gemini-1.5-pro |
| openai | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| anthropic | claude-3-5-sonnet-latest, claude-3-haiku-20240307 |
| grok | grok-2, grok-2-mini |

Code Node

Execute custom TypeScript:

my_code_node:
  type: code
  input: $input
  file: my-processor.ts

The TypeScript file must export a default async function:

import type { NodeOutput } from '@workflow/types';

export default async function(input: unknown): Promise<NodeOutput> {
    // Your logic here
    return { 
        type: 'json',    // 'string' | 'json' | 'number' | 'boolean' | 'array'
        value: { processed: true } 
    };
}

Reduce Node

Combine multiple node outputs:

merge_results:
  type: reduce
  inputs:
    - stageName.node1
    - stageName.node2
  mapping:
    firstResult: $.0
    secondResult: $.1
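The mapping semantics can be pictured in plain TypeScript. This is a conceptual sketch of what a reduce node does, not the engine's actual code: it assumes `$.N` selects the N-th entry of the `inputs` list.

```typescript
// Conceptual sketch of reduce-node mapping (illustrative, not engine code).
// Each mapping value like "$.0" picks the corresponding positional input.
type Mapping = Record<string, string>;

function reduceOutputs(inputs: unknown[], mapping: Mapping): Record<string, unknown> {
    const result: Record<string, unknown> = {};
    for (const [key, path] of Object.entries(mapping)) {
        const index = Number(path.replace(/^\$\./, "")); // "$.0" -> 0
        result[key] = inputs[index];
    }
    return result;
}

// Combining two upstream node outputs into one object:
const merged = reduceOutputs(["positive", ["ai", "api"]], { sentiment: "$.0", keywords: "$.1" });
// merged: { sentiment: "positive", keywords: ["ai", "api"] }
```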

Split Node

Divide output into named parts:

split_data:
  type: split
  input: stageName.nodeName
  mapping:
    header: $.header
    body: $.content
    footer: $.footer
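Conceptually, a split node is the inverse of reduce: it picks named fields out of a single upstream output. A hedged sketch, assuming `$.field` addresses a top-level property of that output (again, not the engine's real implementation):

```typescript
// Conceptual sketch of split-node mapping (illustrative, not engine code).
// "$.header" selects the "header" property of the upstream output.
function splitOutput(
    input: Record<string, unknown>,
    mapping: Record<string, string>
): Record<string, unknown> {
    const parts: Record<string, unknown> = {};
    for (const [name, path] of Object.entries(mapping)) {
        parts[name] = input[path.replace(/^\$\./, "")];
    }
    return parts;
}
```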

Passthrough Node

Pass input directly to output:

forward:
  type: passthrough
  input: $input

Using Plugins

Supabase Plugin

If you selected Supabase during project setup, you can use it in code nodes:

import { supabase } from '@code-plugins/supabase.js';
import type { NodeOutput } from '@workflow/types';

export default async function(input: unknown): Promise<NodeOutput> {
    const { data, error } = await supabase
        .from('articles')
        .select('*')
        .limit(10);
    
    if (error) {
        throw new Error(`Database error: ${error.message}`);
    }
    
    return { type: 'json', value: data };
}

Configure in .env:

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key  # Optional

Creating Custom Plugins

Add files to src/plugins/ and import via @code-plugins/*:

// src/plugins/my-helper.ts
export function formatDate(date: Date): string {
    return date.toISOString().split('T')[0];
}

// In any code node:
import { formatDate } from '@code-plugins/my-helper.js';

Configuration

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 3000 | Server port |
| LOG_LEVEL | info | Logging level (debug, info, warn, error) |
| LLM_TIMEOUT_MS | 30000 | LLM request timeout |

LLM Provider API Keys

You need at least one provider configured:

| Variable | Provider |
|----------|----------|
| GEMINI_API_KEY | Google Gemini |
| OPENAI_API_KEY | OpenAI |
| ANTHROPIC_API_KEY | Anthropic Claude |
| GROK_API_KEY | xAI Grok |

Plugin-Specific Variables

| Variable | Plugin |
|----------|--------|
| SUPABASE_URL | Supabase |
| SUPABASE_ANON_KEY | Supabase |
| SUPABASE_SERVICE_KEY | Supabase (optional) |


Commands Reference

| Command | Description |
|---------|-------------|
| npm run dev | Start development server with hot reload |
| npm run build | Compile TypeScript to JavaScript |
| npm start | Start production server |
| npm run validate | Validate all workflows |
| npm run create-endpoint | Scaffold a new endpoint interactively |
| npm run scan-deps | Scan and install code node dependencies |
| npm run lint | Run ESLint |
| npm run format | Format code with Prettier |


Troubleshooting

| Error | Solution |
|-------|----------|
| "Provider not found" | Check the provider is valid and its API key is set in .env |
| "Code node file not found" | Verify the file exists in the codes/ folder with the correct filename |
| "Cannot find module '@workflow/types'" | Run npm run build or restart the TypeScript server |
| LLM timeout | Increase LLM_TIMEOUT_MS in .env or use a faster model |
| "SUPABASE_URL required" | Add Supabase credentials to .env |


Documentation

For more detailed guides, see the docs/ folder.

License

ISC