web-ai-service v1.0.13
TypeScript-based Web AI Service that creates configurable AI-powered endpoints from YAML definitions
Web AI Service
A TypeScript-based workflow engine that creates dynamic API endpoints from YAML workflow definitions. Build powerful AI-powered APIs with LLM calls, custom code execution, and data transformations—all without writing server boilerplate.
Features
- 🚀 YAML-Based Configuration - Define API endpoints declaratively
- 🤖 Multi-LLM Support - Built-in support for Gemini, OpenAI, Anthropic, and Grok
- 📝 Custom Code Nodes - Execute TypeScript functions in your workflows
- ⚡ Parallel Execution - Run multiple nodes concurrently with error strategies
- 🔄 Data Transformation - Reduce, split, and map data with JSONPath
- ✅ Input Validation - JSON Schema validation on request inputs
- 🎯 Type-Safe - Full TypeScript support with strict typing
- 🌐 Auto-Routing - Endpoint folders automatically become API routes
- 🔌 Plugin System - Extensible with Supabase and custom plugins
Table of Contents
- Quick Start
- Project Structure
- Creating Endpoints
- Node Types
- Using Plugins
- Configuration
- Commands Reference
- Troubleshooting
- Documentation
Quick Start
Option 1: Create a New Project (Recommended)
The easiest way to start is with the scaffolder:
```bash
npx create-web-ai-service
```

You'll be prompted to:
- Enter your project name - e.g., `my-api`
- Select plugins - Choose from available plugins like Supabase
Or use command-line arguments for non-interactive setup:
```bash
npx create-web-ai-service my-api --plugins supabase
```

After scaffolding:

```bash
cd my-api
cp .env.example .env   # Configure your API keys
npm run dev            # Start the server
```

Your API is now running at http://localhost:3000!
Option 2: Install as a Dependency
Add to an existing project:
```bash
npm install web-ai-service
```

Option 3: Global Installation
```bash
npm install -g web-ai-service
web-ai-service   # Run from any directory with a src/endpoints folder
```

Project Structure
When you create a new project, you'll get this structure:
```
my-api/
├── src/
│   ├── endpoints/              # Your API endpoints
│   │   └── hello/              # Example: GET /hello
│   │       ├── GET.yaml        # Workflow definition
│   │       ├── codes/          # TypeScript code nodes
│   │       │   └── format-greeting.ts
│   │       └── prompts/        # LLM system prompts
│   │           └── greeting-system.txt
│   │
│   └── plugins/                # Shared code modules
│       └── supabase.ts         # (if selected during setup)
│
├── .env                        # Your API keys (gitignored)
├── .env.example                # Template for environment variables
├── package.json
└── tsconfig.json
```

Key Concepts
| Concept | Description |
|---------|-------------|
| Endpoint | A folder in src/endpoints/ that becomes an API route |
| Workflow | A YAML file (e.g., POST.yaml, GET.yaml) defining the processing pipeline |
| Stage | A sequential step in the workflow containing one or more nodes |
| Node | An individual processing unit (LLM call, code execution, etc.) |
How Routing Works
| Folder Path | HTTP Method | API Route |
|-------------|-------------|-----------|
| src/endpoints/hello/GET.yaml | GET | /hello |
| src/endpoints/summarize/POST.yaml | POST | /summarize |
| src/endpoints/users/profile/GET.yaml | GET | /users/profile |
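Following this convention, adding a new route is just a matter of creating a folder and a method file (the `translate` endpoint below is hypothetical):

```shell
# Hypothetical: scaffold a new POST /translate route by hand
mkdir -p src/endpoints/translate/codes src/endpoints/translate/prompts
touch src/endpoints/translate/POST.yaml
# On the next server start, the workflow in POST.yaml is served at POST /translate
```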
Creating Endpoints
Basic Example: Text Summarization
Create a POST endpoint at /summarize:
1. Create the folder structure:

```bash
mkdir -p src/endpoints/summarize/{codes,prompts}
```

2. Create the system prompt (src/endpoints/summarize/prompts/system.txt):

```text
You are a concise summarization assistant. Summarize the provided text clearly in 2-3 paragraphs.
```

3. Create the workflow (src/endpoints/summarize/POST.yaml):
```yaml
version: "1.0"
stages:
  - name: main
    nodes:
      summarize:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        temperature: 0.3
        maxTokens: 1024
        systemMessages:
          - file: system.txt
```

4. Test it:
```bash
curl -X POST http://localhost:3000/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Long text to summarize..."}'
```

Adding Input Validation with Code Nodes
Create a code node to validate inputs before processing:
src/endpoints/summarize/codes/validate.ts:
```typescript
import type { NodeOutput } from '@workflow/types';

interface SummarizeInput {
  text?: string;
}

export default async function (input: unknown): Promise<NodeOutput> {
  const body = input as SummarizeInput;
  if (!body.text || typeof body.text !== 'string') {
    throw new Error('Missing required field: text');
  }
  if (body.text.length < 10) {
    throw new Error('Text must be at least 10 characters');
  }
  return { type: 'string', value: body.text };
}
```

Updated workflow with validation stage:
```yaml
version: "1.0"
stages:
  - name: validate
    nodes:
      check_input:
        type: code
        input: $input
        file: validate.ts
  - name: summarize
    nodes:
      summary:
        type: llm
        input: validate.check_input   # Reference previous node output
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: system.txt
```

Multi-Stage Workflow Example
Chain multiple processing stages:
```yaml
version: "1.0"
stages:
  - name: extract
    nodes:
      parse_data:
        type: code
        input: $input
        file: extract-data.ts
  - name: analyze
    nodes:
      analyze_content:
        type: llm
        input: extract.parse_data
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: analyzer-prompt.txt
  - name: format
    nodes:
      format_response:
        type: code
        input: analyze.analyze_content
        file: format-output.ts
```

Parallel Node Execution
Run multiple LLM calls simultaneously within a stage:
```yaml
stages:
  - name: parallel_analysis
    nodes:
      sentiment:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: sentiment-prompt.txt
      keywords:
        type: llm
        input: $input.text
        provider: openai
        model: gpt-4o-mini
        systemMessages:
          - file: keywords-prompt.txt
  - name: combine
    nodes:
      merge:
        type: reduce
        inputs:
          - parallel_analysis.sentiment
          - parallel_analysis.keywords
        mapping:
          sentiment: $.0
          keywords: $.1
```

Node Types
All nodes share these common properties:
- `type` (required) - The node type: `llm`, `code`, `reduce`, `split`, or `passthrough`
- `input` (required for most) - The input source: `$input`, `$input.field`, or `stageName.nodeName`
LLM Node
Calls an LLM provider with a prompt.
Required Properties:
```yaml
my_llm_node:
  type: llm
  input: $input                 # Input source
  provider: gemini              # Provider name: gemini | openai | anthropic | grok
  model: gemini-2.0-flash-lite  # Model identifier
```

Optional Properties:
```yaml
  temperature: 0.7            # Default: 1.0. Controls randomness (0.0-1.0)
  maxTokens: 1024             # Default: provider default. Max output tokens
  systemMessages:             # System prompts (optional)
    - file: prompt.txt        # Load from file
      cache: true             # Enable caching (default: false)
    - text: "Direct prompt"   # Or use inline text
  config:                     # Provider-specific config (optional)
    topP: 0.9
    topK: 40
```

Supported Providers & Models:
| Provider | Example Models | Notes |
|----------|----------------|-------|
| gemini | gemini-2.0-flash-lite, gemini-2.0-flash, gemini-1.5-pro | Fast, cost-effective |
| openai | gpt-4o, gpt-4o-mini, gpt-4-turbo | High quality |
| anthropic | claude-3-5-sonnet-latest, claude-3-haiku-20240307 | Long context |
| grok | grok-2, grok-2-mini | xAI models |
Using LLM References (Alternative):
Define reusable LLM configurations:
```yaml
llm:
  my-summarizer:
    provider: gemini
    model: gemini-2.0-flash-lite
    temperature: 0.3

nodes:
  summarize:
    type: llm
    input: $input
    llmRef: my-summarizer   # Reference the config
    systemMessages:
      - file: prompt.txt
```

Code Node
Executes a custom TypeScript function.
Required Properties:
```yaml
my_code_node:
  type: code
  input: $input
  file: my-processor.ts   # Relative to endpoint's codes/ folder
```

TypeScript Function Signature:
Your code file must export a default async function:
```typescript
import type { NodeOutput } from '@workflow/types';

export default async function (input: unknown): Promise<NodeOutput> {
  // Your logic here
  const processed = /* ... */;
  return {
    type: 'json',   // 'string' | 'json' | 'number' | 'boolean' | 'array'
    value: processed
  };
}
```

Notes:
- The `input` parameter is the unwrapped value from the previous node
- Must return a `NodeOutput` object with `type` and `value`
- Can import from `@code-plugins/*` for shared code
- Can use any npm packages (run `npm run scan-deps` to auto-install)
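As a minimal, self-contained sketch of this contract, here is a code node that counts the words in its input. The `NodeOutput` type is inlined for illustration; in a real project it would be imported from '@workflow/types'.

```typescript
// Inlined for illustration; a real code node would import this from '@workflow/types'.
type NodeOutput = {
  type: 'string' | 'json' | 'number' | 'boolean' | 'array';
  value: unknown;
};

// A code node that counts the words in its input.
export default async function wordCount(input: unknown): Promise<NodeOutput> {
  const text = typeof input === 'string' ? input : JSON.stringify(input);
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return { type: 'json', value: { words } };
}
```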
Reduce Node
Combines multiple node outputs into a single JSON object.
Required Properties:
```yaml
merge_results:
  type: reduce
  inputs:               # Array of node references
    - stageName.node1
    - stageName.node2
  mapping:              # JSONPath mappings
    firstResult: $.0
    secondResult: $.1
    nested:
      data: $.0.someField
```

How it Works:
- Takes outputs from multiple nodes specified in `inputs`
- Uses JSONPath expressions in `mapping` to extract values
- `$.0` refers to the first input, `$.1` to the second input, etc.
- Returns a single `{ type: 'json', value: {...} }` object
Example:
If node1 outputs { value: { count: 10 } } and node2 outputs { value: { total: 100 } }:
```yaml
mapping:
  count: $.0.count   # Gets 10 from first input
  total: $.1.total   # Gets 100 from second input
# Result: { count: 10, total: 100 }
```

Split Node
Divides a single output into multiple named outputs.
Required Properties:
```yaml
split_data:
  type: split
  input: stageName.nodeName
  mapping:              # JSONPath expressions for each output
    header: $.metadata.header
    body: $.content
    footer: $.metadata.footer
```

How it Works:
- Takes a single input (usually JSON)
- Extracts multiple values using JSONPath
- Creates named outputs accessible as `nodeId.outputName`
Example:
Input: { metadata: { header: 'Title' }, content: 'Body text' }
```yaml
split_data:
  type: split
  input: previous.node
  mapping:
    title: $.metadata.header   # Accessible as split_data.title
    text: $.content            # Accessible as split_data.text
```

Later nodes can reference:
```yaml
another_node:
  type: code
  input: split_data.title   # Gets 'Title'
```

Passthrough Node
Passes input directly to output unchanged (useful for routing).
Required Properties:
```yaml
forward:
  type: passthrough
  input: $input
```

Notes:
- No transformation applied
- Preserves the input type
- Useful for conditional routing or stage organization
Input References
All nodes except `reduce` use the `input` property to specify their data source:
| Reference Pattern | Description | Example |
|-------------------|-------------|---------|
| $input | Full request body | input: $input |
| $input.field | Specific field from request | input: $input.text |
| $input.nested.field | Nested field access | input: $input.user.name |
| stageName.nodeName | Output from another node | input: extract.parser |
| nodeName.outputName | Split node output | input: splitter.header |
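As an illustrative fragment (the stage, node, and field names here are hypothetical), a single stage can mix several of these reference patterns:

```yaml
- name: assemble
  nodes:
    headline:
      type: code
      input: $input.article.title   # nested field from the request body
      file: make-headline.ts
    summary:
      type: llm
      input: extract.parser         # output of node 'parser' in stage 'extract'
      provider: gemini
      model: gemini-2.0-flash-lite
```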
Workflow Structure
Every workflow must follow these rules:
Single-Stage Workflows:
```yaml
version: "1.0"
stages:
  - name: main          # Must be named 'main'
    nodes:
      my_node:          # Must have exactly 1 node
        type: llm
        input: $input   # Must use $input
        # ... node config ...
```

Multi-Stage Workflows:
```yaml
version: "1.0"
stages:
  - name: preprocess    # First stage: any name
    nodes:
      validator:        # First node must use $input
        type: code
        input: $input
        # ... config ...
  - name: process       # Middle stage(s): any name, multiple nodes OK
    nodes:
      analyze:
        type: llm
        input: preprocess.validator
        # ... config ...
      extract:
        type: code
        input: preprocess.validator
        # ... config ...
  - name: postprocess   # Last stage: any name
    nodes:
      formatter:        # Must have exactly 1 node (exit node)
        type: code
        input: process.analyze
        # ... config ...
```

Rules:
- First stage's first node must use `$input` or `$input.field` as input
- Last stage must have exactly 1 node (its output becomes the API response)
- Middle stages can have any number of nodes
- Stage names can be anything (no longer required to be "entry" and "exit")
Using Plugins
Supabase Plugin
If you selected Supabase during project setup, you can use it in code nodes:
```typescript
import { supabase } from '@code-plugins/supabase.js';
import type { NodeOutput } from '@workflow/types';

export default async function (input: unknown): Promise<NodeOutput> {
  const { data, error } = await supabase
    .from('articles')
    .select('*')
    .limit(10);

  if (error) {
    throw new Error(`Database error: ${error.message}`);
  }

  return { type: 'json', value: data };
}
```

Configure in .env:

```bash
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key   # Optional
```

Creating Custom Plugins
Add files to src/plugins/ and import via @code-plugins/*:
```typescript
// src/plugins/my-helper.ts
export function formatDate(date: Date): string {
  return date.toISOString().split('T')[0];
}
```

```typescript
// In any code node:
import { formatDate } from '@code-plugins/my-helper.js';
```

Configuration
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 3000 | Server port |
| LOG_LEVEL | info | Logging level (debug, info, warn, error) |
| LLM_TIMEOUT_MS | 30000 | LLM request timeout |
LLM Provider API Keys
You need at least one provider configured:
| Variable | Provider |
|----------|----------|
| GEMINI_API_KEY | Google Gemini |
| OPENAI_API_KEY | OpenAI |
| ANTHROPIC_API_KEY | Anthropic Claude |
| GROK_API_KEY | xAI Grok |
Plugin-Specific Variables
| Variable | Plugin |
|----------|--------|
| SUPABASE_URL | Supabase |
| SUPABASE_ANON_KEY | Supabase |
| SUPABASE_SERVICE_KEY | Supabase (optional) |
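Pulling the tables above together, a minimal `.env` might look like this (all values are placeholders; set only the providers and plugins you actually use):

```bash
PORT=3000
LOG_LEVEL=info
LLM_TIMEOUT_MS=30000

GEMINI_API_KEY=your-gemini-key

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
```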
Commands Reference
| Command | Description |
|---------|-------------|
| npm run dev | Start development server with hot reload |
| npm run build | Compile TypeScript to JavaScript |
| npm start | Start production server |
| npm run validate | Validate all workflows |
| npm run create-endpoint | Scaffold a new endpoint interactively |
| npm run scan-deps | Scan and install code node dependencies |
| npm run lint | Run ESLint |
| npm run format | Format code with Prettier |
Troubleshooting
| Error | Solution |
|-------|----------|
| "Provider not found" | Check provider is valid and API key is set in .env |
| "Code node file not found" | Verify file exists in codes/ folder with correct filename |
| "Cannot find module '@workflow/types'" | Run npm run build or restart TypeScript server |
| LLM Timeout | Increase LLM_TIMEOUT_MS in .env or use a faster model |
| "SUPABASE_URL required" | Add Supabase credentials to .env |
Documentation
For more detailed guides, see the docs/ folder:
- Getting Started - Complete setup walkthrough
- Creating Endpoints - Advanced endpoint patterns
- Using Plugins - Plugin configuration and custom plugins
- Configuration Reference - All environment options
License
ISC
