create-web-ai-service (v1.0.7): CLI scaffolder for creating new web-ai-service projects
Web AI Service
A TypeScript-based workflow engine that creates dynamic API endpoints from YAML workflow definitions. Build powerful AI-powered APIs with LLM calls, custom code execution, and data transformations—all without writing server boilerplate.
Features
- 🚀 YAML-Based Configuration - Define API endpoints declaratively
- 🤖 Multi-LLM Support - Built-in support for Gemini, OpenAI, Anthropic, and Grok
- 📝 Custom Code Nodes - Execute TypeScript functions in your workflows
- ⚡ Parallel Execution - Run multiple nodes concurrently with error strategies
- 🔄 Data Transformation - Reduce, split, and map data with JSONPath
- ✅ Input Validation - JSON Schema validation on request inputs
- 🎯 Type-Safe - Full TypeScript support with strict typing
- 🌐 Auto-Routing - Endpoint folders automatically become API routes
- 🔌 Plugin System - Extensible with Supabase and custom plugins
Table of Contents
- Quick Start
- Project Structure
- Creating Endpoints
- Node Types
- Using Plugins
- Configuration
- Commands Reference
- Troubleshooting
- Documentation
Quick Start
Option 1: Create a New Project (Recommended)
The easiest way to start is with the scaffolder:
```bash
npx create-web-ai-service
```
You'll be prompted to:
- Enter your project name (e.g., my-api)
- Select plugins (choose from available plugins like Supabase)
Or use command-line arguments for non-interactive setup:
```bash
npx create-web-ai-service my-api --plugins supabase
```
After scaffolding:
```bash
cd my-api
cp .env.example .env   # Configure your API keys
npm run dev            # Start the server
```
Your API is now running at http://localhost:3000!
Option 2: Install as a Dependency
Add to an existing project:
```bash
npm install web-ai-service
```
Option 3: Global Installation
```bash
npm install -g web-ai-service
web-ai-service   # Run from any directory with a src/endpoints folder
```
Project Structure
When you create a new project, you'll get this structure:
```
my-api/
├── src/
│   ├── endpoints/              # Your API endpoints
│   │   └── hello/              # Example: GET /hello
│   │       ├── GET.yaml        # Workflow definition
│   │       ├── codes/          # TypeScript code nodes
│   │       │   └── format-greeting.ts
│   │       └── prompts/        # LLM system prompts
│   │           └── greeting-system.txt
│   │
│   └── plugins/                # Shared code modules
│       └── supabase.ts         # (if selected during setup)
│
├── .env                        # Your API keys (gitignored)
├── .env.example                # Template for environment variables
├── package.json
└── tsconfig.json
```
Key Concepts
| Concept | Description |
|---------|-------------|
| Endpoint | A folder in src/endpoints/ that becomes an API route |
| Workflow | A YAML file (e.g., POST.yaml, GET.yaml) defining the processing pipeline |
| Stage | A sequential step in the workflow containing one or more nodes |
| Node | An individual processing unit (LLM call, code execution, etc.) |
How Routing Works
| Folder Path | HTTP Method | API Route |
|-------------|-------------|-----------|
| src/endpoints/hello/GET.yaml | GET | /hello |
| src/endpoints/summarize/POST.yaml | POST | /summarize |
| src/endpoints/users/profile/GET.yaml | GET | /users/profile |
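The path-to-route mapping in the table above can be sketched as a small function. This is illustrative only; `routeFor` is not part of the framework's API:

```typescript
// Illustrative sketch of the routing convention: the folder path under
// src/endpoints/ becomes the route, and the YAML filename becomes the method.
function routeFor(workflowPath: string): { method: string; route: string } {
  // Strip the endpoints prefix, then separate folders from the METHOD.yaml file.
  const relative = workflowPath.replace(/^src\/endpoints\//, "");
  const segments = relative.split("/");
  const file = segments.pop() ?? "";          // e.g. "GET.yaml"
  const method = file.replace(/\.yaml$/, ""); // e.g. "GET"
  return { method, route: "/" + segments.join("/") };
}
```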
Creating Endpoints
Basic Example: Text Summarization
Create a POST endpoint at /summarize:
1. Create the folder structure:
```bash
mkdir -p src/endpoints/summarize/{codes,prompts}
```
2. Create the system prompt (src/endpoints/summarize/prompts/system.txt):
```
You are a concise summarization assistant. Summarize the provided text clearly in 2-3 paragraphs.
```
3. Create the workflow (src/endpoints/summarize/POST.yaml):
```yaml
version: "1.0"
stages:
  - name: main
    nodes:
      summarize:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        temperature: 0.3
        maxTokens: 1024
        systemMessages:
          - file: system.txt
```
4. Test it:
```bash
curl -X POST http://localhost:3000/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Long text to summarize..."}'
```
Adding Input Validation with Code Nodes
Create a code node to validate inputs before processing:
src/endpoints/summarize/codes/validate.ts:
```typescript
import type { NodeOutput } from '@workflow/types';

interface SummarizeInput {
  text?: string;
}

export default async function (input: unknown): Promise<NodeOutput> {
  const body = input as SummarizeInput;
  if (!body.text || typeof body.text !== 'string') {
    throw new Error('Missing required field: text');
  }
  if (body.text.length < 10) {
    throw new Error('Text must be at least 10 characters');
  }
  return { type: 'string', value: body.text };
}
```
Updated workflow with validation stage:
```yaml
version: "1.0"
stages:
  - name: validate
    nodes:
      check_input:
        type: code
        input: $input
        file: validate.ts
  - name: summarize
    nodes:
      summary:
        type: llm
        input: validate.check_input  # Reference previous node output
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: system.txt
```
Multi-Stage Workflow Example
Chain multiple processing stages:
```yaml
version: "1.0"
stages:
  - name: extract
    nodes:
      parse_data:
        type: code
        input: $input
        file: extract-data.ts
  - name: analyze
    nodes:
      analyze_content:
        type: llm
        input: extract.parse_data
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: analyzer-prompt.txt
  - name: format
    nodes:
      format_response:
        type: code
        input: analyze.analyze_content
        file: format-output.ts
```
Parallel Node Execution
Run multiple LLM calls simultaneously within a stage:
```yaml
stages:
  - name: parallel_analysis
    nodes:
      sentiment:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: sentiment-prompt.txt
      keywords:
        type: llm
        input: $input.text
        provider: openai
        model: gpt-4o-mini
        systemMessages:
          - file: keywords-prompt.txt
  - name: combine
    nodes:
      merge:
        type: reduce
        inputs:
          - parallel_analysis.sentiment
          - parallel_analysis.keywords
        mapping:
          sentiment: $.0
          keywords: $.1
```
Node Types
LLM Node
Call an LLM provider:
```yaml
my_llm_node:
  type: llm
  input: $input.text        # or reference: stageName.nodeName
  provider: gemini          # gemini | openai | anthropic | grok
  model: gemini-2.0-flash-lite
  temperature: 0.7          # Optional (0.0-1.0)
  maxTokens: 1024           # Optional
  systemMessages:
    - file: prompt.txt
  cache: true               # Cache for performance
```
Supported Providers & Models:
| Provider | Example Models |
|----------|----------------|
| gemini | gemini-2.0-flash-lite, gemini-2.0-flash, gemini-1.5-pro |
| openai | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| anthropic | claude-3-5-sonnet-latest, claude-3-haiku-20240307 |
| grok | grok-2, grok-2-mini |
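The LLM node options shown in the YAML above can be summarized as a TypeScript shape. This interface is illustrative only; the framework's actual internal types are not published in this README:

```typescript
// Hypothetical shape of an LLM node's options, inferred from the YAML example.
interface LlmNodeConfig {
  type: "llm";
  input: string;                      // "$input.text" or "stageName.nodeName"
  provider: "gemini" | "openai" | "anthropic" | "grok";
  model: string;
  temperature?: number;               // Optional, 0.0-1.0
  maxTokens?: number;                 // Optional
  systemMessages?: { file: string }[];
  cache?: boolean;
}

// A config object matching the YAML example above.
const example: LlmNodeConfig = {
  type: "llm",
  input: "$input.text",
  provider: "gemini",
  model: "gemini-2.0-flash-lite",
  temperature: 0.7,
  maxTokens: 1024,
  systemMessages: [{ file: "prompt.txt" }],
  cache: true,
};
```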
Code Node
Execute custom TypeScript:
```yaml
my_code_node:
  type: code
  input: $input
  file: my-processor.ts
```
The TypeScript file must export a default async function:
```typescript
import type { NodeOutput } from '@workflow/types';

export default async function (input: unknown): Promise<NodeOutput> {
  // Your logic here
  return {
    type: 'json',  // 'string' | 'json' | 'number' | 'boolean' | 'array'
    value: { processed: true }
  };
}
```
Reduce Node
Combine multiple node outputs:
```yaml
merge_results:
  type: reduce
  inputs:
    - stageName.node1
    - stageName.node2
  mapping:
    firstResult: $.0
    secondResult: $.1
```
Split Node
Divide output into named parts:
```yaml
split_data:
  type: split
  input: stageName.nodeName
  mapping:
    header: $.header
    body: $.content
    footer: $.footer
```
Passthrough Node
Pass input directly to output:
```yaml
forward:
  type: passthrough
  input: $input
```
Using Plugins
Supabase Plugin
If you selected Supabase during project setup, you can use it in code nodes:
```typescript
import { supabase } from '@code-plugins/supabase.js';
import type { NodeOutput } from '@workflow/types';

export default async function (input: unknown): Promise<NodeOutput> {
  const { data, error } = await supabase
    .from('articles')
    .select('*')
    .limit(10);

  if (error) {
    throw new Error(`Database error: ${error.message}`);
  }

  return { type: 'json', value: data };
}
```
Configure in .env:
```bash
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key   # Optional
```
Creating Custom Plugins
Add files to src/plugins/ and import via @code-plugins/*:
```typescript
// src/plugins/my-helper.ts
export function formatDate(date: Date): string {
  return date.toISOString().split('T')[0];
}

// In any code node:
import { formatDate } from '@code-plugins/my-helper.js';
```
Configuration
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 3000 | Server port |
| LOG_LEVEL | info | Logging level (debug, info, warn, error) |
| LLM_TIMEOUT_MS | 30000 | LLM request timeout |
LLM Provider API Keys
You need at least one provider configured:
| Variable | Provider |
|----------|----------|
| GEMINI_API_KEY | Google Gemini |
| OPENAI_API_KEY | OpenAI |
| ANTHROPIC_API_KEY | Anthropic Claude |
| GROK_API_KEY | xAI Grok |
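For example, a minimal .env for a Gemini-only setup (all values are placeholders):

```bash
PORT=3000
LOG_LEVEL=info
LLM_TIMEOUT_MS=30000
GEMINI_API_KEY=your-gemini-key
```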
Plugin-Specific Variables
| Variable | Plugin |
|----------|--------|
| SUPABASE_URL | Supabase |
| SUPABASE_ANON_KEY | Supabase |
| SUPABASE_SERVICE_KEY | Supabase (optional) |
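Plugins typically fail fast when a required variable is missing. A hypothetical guard (not part of the framework) showing the kind of check behind the "SUPABASE_URL required" error listed under Troubleshooting:

```typescript
// Hypothetical helper: read a required environment variable or throw a
// descriptive error at startup instead of failing mid-request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} required`);
  }
  return value;
}

// Example (assumes SUPABASE_URL is set in .env):
// const url = requireEnv("SUPABASE_URL");
```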
Commands Reference
| Command | Description |
|---------|-------------|
| npm run dev | Start development server with hot reload |
| npm run build | Compile TypeScript to JavaScript |
| npm start | Start production server |
| npm run validate | Validate all workflows |
| npm run create-endpoint | Scaffold a new endpoint interactively |
| npm run scan-deps | Scan and install code node dependencies |
| npm run lint | Run ESLint |
| npm run format | Format code with Prettier |
Troubleshooting
| Error | Solution |
|-------|----------|
| "Provider not found" | Check provider is valid and API key is set in .env |
| "Code node file not found" | Verify file exists in codes/ folder with correct filename |
| "Cannot find module '@workflow/types'" | Run npm run build or restart TypeScript server |
| LLM Timeout | Increase LLM_TIMEOUT_MS in .env or use a faster model |
| "SUPABASE_URL required" | Add Supabase credentials to .env |
Documentation
For more detailed guides, see the docs/ folder:
- Getting Started - Complete setup walkthrough
- Creating Endpoints - Advanced endpoint patterns
- Using Plugins - Plugin configuration and custom plugins
- Configuration Reference - All environment options
License
ISC
