@workway/sdk
The official SDK for building WORKWAY workflows.
Installation
npm install @workway/sdk
Quick Start
Integration Workflow (No AI - Most Common)
import { defineWorkflow, webhook } from '@workway/sdk'
// Simple, profitable, high-margin integration
export default defineWorkflow({
name: 'Stripe to Notion Invoice Tracker',
type: 'integration', // ← Runs on Cloudflare, costs ~$0.001/execution
pricing: { model: 'subscription', price: 5, executions: 'unlimited' },
integrations: [
{ service: 'stripe', scopes: ['read_payments'] },
{ service: 'notion', scopes: ['write_pages'] }
],
inputs: {
notionDatabaseId: { type: 'notion_database_picker', required: true }
},
trigger: webhook({ service: 'stripe', event: 'payment_intent.succeeded' }),
async execute({ trigger, inputs, integrations }) {
const payment = trigger.data.object
// Pure business logic - no AI costs
await integrations.notion.pages.create({
parent: { database_id: inputs.notionDatabaseId },
properties: {
'Amount': { number: payment.amount / 100 },
'Customer': { email: payment.receipt_email },
'Date': { date: { start: new Date().toISOString() } }
}
})
return { success: true }
}
})
Cost: $0.001/execution | Margin: 98% | Your earnings: $3.50/mo per user
AI-Enhanced Workflow (Cloudflare Workers AI)
import { defineWorkflow, webhook } from '@workway/sdk'
import { createAIClient, AIModels } from '@workway/sdk/workers-ai'
// Integration + Cloudflare Workers AI for categorization
export default defineWorkflow({
name: 'Smart Support Ticket Router',
type: 'ai-enhanced', // ← Uses Cloudflare Workers AI
pricing: { model: 'subscription', price: 15, executions: 200 },
integrations: [
{ service: 'zendesk', scopes: ['read_tickets', 'update_tickets'] },
{ service: 'slack', scopes: ['send_messages'] }
],
inputs: {
teams: {
type: 'array',
items: { name: 'string', slackChannel: 'string' }
}
},
trigger: webhook({ service: 'zendesk', event: 'ticket.created' }),
async execute({ trigger, inputs, integrations, env }) {
const ticket = trigger.data.ticket
const ai = createAIClient(env)
// Cloudflare Workers AI - fast, cheap, no API keys
const result = await ai.generateText({
model: AIModels.LLAMA_2_7B, // Fastest model for classification
system: `Classify ticket into: ${inputs.teams.map(t => t.name).join(', ')}. Reply with only the category name.`,
prompt: `${ticket.subject}\n${ticket.description}`,
max_tokens: 20
})
// Rest is pure business logic
const category = result.data.trim()
const team = inputs.teams.find(t => t.name === category) || inputs.teams[0]
await integrations.zendesk.tickets.update(ticket.id, {
tags: [team.name.toLowerCase()]
})
await integrations.slack.chat.postMessage({
channel: team.slackChannel,
text: `New ${team.name} ticket: ${ticket.subject}`
})
return { success: true, category: team.name }
}
})
Cost: $0.003/execution | Margin: 95% | Your earnings: $10.50/mo per user
AI-Native Workflow (Multi-Step AI)
import { defineWorkflow, schedule, http } from '@workway/sdk'
import { createAIClient, AIModels } from '@workway/sdk/workers-ai'
// Multiple AI steps using Cloudflare Workers AI
export default defineWorkflow({
name: 'Daily AI Newsletter',
type: 'ai-native', // ← Multiple AI operations
pricing: { model: 'subscription', price: 25, executions: 30 },
inputs: {
topics: { type: 'array', default: ['AI', 'startups'] },
email: { type: 'email', required: true }
},
trigger: schedule('0 8 * * *'),
async execute({ inputs, cache, env }) {
const ai = createAIClient(env)
// Step 1: Research (caching helps here)
const cacheKey = `research_${inputs.topics.join('_')}_${new Date().toDateString()}`
let research = await cache.get(cacheKey)
if (!research) {
const result = await ai.generateText({
model: AIModels.LLAMA_3_8B, // Best quality/cost balance
prompt: `Research and summarize the latest developments in: ${inputs.topics.join(', ')}`,
max_tokens: 2000
})
research = result.data
await cache.set(cacheKey, research, { ttl: 3600 }) // 1 hour
}
// Step 2: Generate newsletter
const newsletterResult = await ai.generateText({
model: AIModels.MISTRAL_7B, // Strong writing quality
system: 'You are a professional newsletter writer. Create engaging, informative content.',
prompt: `Write a newsletter based on:\n\n${research}`,
max_tokens: 1500
})
// Step 3: Send email (no AI cost)
await http.post('https://api.sendgrid.com/v3/mail/send', {
headers: { 'Authorization': `Bearer ${env.SENDGRID_KEY}` }, // Workers have no process.env - store the key as a secret binding
body: {
to: inputs.email,
subject: `Your Daily ${inputs.topics[0]} Update`,
html: newsletterResult.data
}
})
return { success: true }
}
})
Cost: $0.03/execution (with caching) | Margin: 96% | Your earnings: $17.20/mo per user
Workflow Types
The SDK automatically optimizes based on workflow type:
type: 'integration' (70% of marketplace)
- Runtime: Cloudflare Workers
- Cost: $0.001-0.01/execution
- Best for: API-to-API workflows, data sync, webhooks
- Examples: Stripe→Notion, Airtable→Calendar, GitHub→Slack
type: 'ai-enhanced' (20% of marketplace)
- Runtime: Cloudflare Workers + Workers AI
- Cost: $0.003-0.02/execution
- Best for: Workflows with 1-2 AI calls for categorization/analysis
- Examples: Email categorization, sentiment analysis, smart routing
type: 'ai-native' (10% of marketplace)
- Runtime: Cloudflare Workers AI + caching
- Cost: $0.02-0.10/execution
- Best for: Content generation, research, multi-step AI pipelines
- Examples: Newsletter generation, research assistants, content creators
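The cost and earnings figures above follow from simple arithmetic. A sketch of that math (the 70% revenue share is inferred from the Quick Start earnings figures and is an assumption, not documented SDK behavior):

```typescript
// Back-of-envelope economics for a workflow listing (illustrative only).
// revShare = 0.7 is an assumption inferred from the earnings figures above.
function monthlyEconomics(
  price: number,            // subscription price per user, USD/month
  executions: number,       // executions per user per month
  costPerExecution: number, // infrastructure cost per execution, USD
  revShare = 0.7
) {
  const infraCost = executions * costPerExecution
  const earnings = price * revShare
  return { infraCost, earnings, net: earnings - infraCost }
}
```

For example, a $5/mo 'integration' workflow running 100 times a month costs roughly $0.10 in infrastructure against $3.50 in earnings.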
Note: WORKWAY is Cloudflare-native. All AI features use Cloudflare Workers AI — no external API keys required.
Core Concepts
1. Integrations
Pre-built connectors for popular services:
integrations: [
{
service: 'stripe',
scopes: ['read_payments', 'read_customers'],
description: 'Read payment and customer data'
},
{
service: 'notion',
scopes: ['write_pages', 'read_databases'],
description: 'Create and read Notion pages'
}
]
Available integrations:
- Payment: Stripe, PayPal, Square
- Productivity: Notion, Airtable, Google Workspace
- Communication: Slack, Discord, Telegram
- CRM: HubSpot, Salesforce, Pipedrive
- Email: SendGrid, Resend, Mailchimp
- Development: GitHub, GitLab, Linear
2. Triggers
How workflows are executed:
// Webhook trigger (instant)
trigger: webhook({
service: 'stripe',
event: 'payment_intent.succeeded'
})
// Schedule trigger (cron)
trigger: schedule('0 8 * * *') // Daily at 8am
// Manual trigger (user clicks button)
trigger: manual()
// Polling trigger (check for new data)
trigger: poll({
service: 'airtable',
interval: '15m', // Every 15 minutes
query: 'filterByFormula=...'
})
3. Inputs (User Configuration)
inputs: {
// Text input
projectName: {
type: 'text',
label: 'Project Name',
required: true,
placeholder: 'My Project'
},
// Number input
maxItems: {
type: 'number',
label: 'Maximum Items',
min: 1,
max: 100,
default: 10
},
// Select dropdown
priority: {
type: 'select',
label: 'Priority Level',
options: ['low', 'medium', 'high'],
default: 'medium'
},
// Boolean checkbox
sendNotifications: {
type: 'boolean',
label: 'Send Notifications',
default: true
},
// Array input
tags: {
type: 'array',
label: 'Tags',
itemType: 'text',
default: []
},
// Service-specific pickers
notionDatabase: {
type: 'notion_database_picker',
label: 'Notion Database',
required: true
},
slackChannel: {
type: 'slack_channel_picker',
label: 'Slack Channel',
required: true
}
}
4. Pricing Models
// Subscription (most common)
pricing: {
model: 'subscription',
price: 10,
executions: 100 // per month
}
// Usage-based
pricing: {
model: 'usage',
pricePerExecution: 0.10,
minPrice: 5 // Minimum monthly charge
}
// One-time
pricing: {
model: 'one-time',
price: 29
}
// Tiered pricing
pricing: {
model: 'subscription',
tiers: [
{ name: 'starter', price: 10, executions: 50 },
{ name: 'pro', price: 25, executions: 200 },
{ name: 'enterprise', price: 99, executions: 1000 }
]
}
SDK APIs
Workers AI Module (Cloudflare-Native)
For workflows running on Cloudflare Workers, use the native Workers AI integration — no API keys required.
import { createAIClient, AIModels } from '@workway/sdk/workers-ai'
export default defineWorkflow({
name: 'AI Email Processor',
type: 'ai-native',
async execute({ env }) {
// Create AI client from Cloudflare binding
const ai = createAIClient(env)
// Text generation
const result = await ai.generateText({
prompt: 'Summarize this email...',
model: AIModels.LLAMA_3_8B, // $0.01/1M tokens
max_tokens: 512,
temperature: 0.7
})
return { summary: result.data }
}
})
Available Models
| Model | Alias | Cost/1M | Best For |
|-------|-------|---------|----------|
| Llama 2 7B | AIModels.LLAMA_2_7B | $0.005 | Fast, simple tasks |
| Llama 3 8B | AIModels.LLAMA_3_8B | $0.01 | Balanced (default) |
| Mistral 7B | AIModels.MISTRAL_7B | $0.02 | Complex reasoning |
| BGE Small | AIModels.BGE_SMALL | $0.001 | Fast embeddings |
| BGE Base | AIModels.BGE_BASE | $0.002 | Quality embeddings |
| Stable Diffusion XL | AIModels.STABLE_DIFFUSION_XL | $0.02/img | Image generation |
| Whisper | AIModels.WHISPER | $0.006/min | Speech-to-text |
Workers AI Methods
const ai = createAIClient(env)
// Text generation
const text = await ai.generateText({
prompt: string,
model?: string, // Default: LLAMA_3_8B
temperature?: number, // Default: 0.7
max_tokens?: number, // Default: 1024
system?: string, // System prompt
cache?: boolean // Enable caching
})
// Embeddings (for semantic search)
const embeddings = await ai.generateEmbeddings({
text: string | string[],
model?: string // Default: BGE_BASE
})
// Image generation
const image = await ai.generateImage({
prompt: string,
model?: string, // Default: STABLE_DIFFUSION_XL
negative_prompt?: string,
width?: number, // Default: 1024
height?: number // Default: 1024
})
// Speech-to-text
const transcript = await ai.transcribeAudio({
audio: ArrayBuffer,
model?: string, // Default: WHISPER
language?: string
})
// Translation (100+ languages)
const translated = await ai.translateText({
text: string,
source: string, // e.g., 'en'
target: string // e.g., 'es'
})
// Sentiment analysis
const sentiment = await ai.analyzeSentiment({
text: string
})
// Returns: { sentiment: 'POSITIVE' | 'NEGATIVE', confidence: number }
// Streaming responses
for await (const chunk of ai.streamText({ prompt: '...' })) {
console.log(chunk)
}
// Chain multiple operations
const results = await ai.chain([
{ type: 'text', options: { prompt: 'Analyze...' } },
{ type: 'sentiment', options: { usePrevious: true } },
{ type: 'translate', options: { source: 'en', target: 'es', usePrevious: true } }
])
Workers AI vs External Providers
| Feature | Workers AI | OpenAI/Anthropic |
|---------|-----------|------------------|
| API Keys | None required | Required |
| Cost | $0.01/1M tokens | $0.15-3/1M tokens |
| Latency | Edge (fast) | Variable |
| Data | Stays in Cloudflare | Leaves network |
| Models | Open source (Llama, Mistral) | Proprietary |
Why Workers AI only:
- No API keys: Works instantly with Cloudflare binding
- Data privacy: Data stays within Cloudflare network
- Cost efficiency: 10-100x cheaper than external providers
- Edge latency: Runs close to users globally
- Open models: Uses Llama, Mistral, and other open-source models
WORKWAY is opinionated: We use Cloudflare infrastructure exclusively. This keeps costs low and simplifies deployment. For complex reasoning tasks, Llama 3 and Mistral 7B are highly capable.
Vectorize Module (Semantic Search & RAG)
Build semantic search, knowledge bases, and RAG systems with Cloudflare Vectorize.
import { createVectorClient } from '@workway/sdk/vectorize'
export default defineWorkflow({
name: 'Knowledge Base Search',
type: 'ai-native',
async execute({ env, trigger }) {
// Create Vectorize client with AI binding for auto-embeddings
const vectors = createVectorClient(env)
// Store text with automatic embedding generation
await vectors.storeText({
id: 'doc-1',
text: 'Cloudflare Workers run JavaScript at the edge...',
metadata: { source: 'docs', category: 'platform' }
})
// Semantic text search
const results = await vectors.searchText({
query: 'How do edge functions work?',
topK: 5
})
return { matches: results.data.matches }
}
})
RAG (Retrieval Augmented Generation)
const vectors = createVectorClient(env)
// Build knowledge base from documents
await vectors.buildKnowledgeBase({
documents: [
{ id: 'guide-1', content: 'Workflow SDK guide...', metadata: { type: 'guide' } },
{ id: 'api-ref', content: 'API reference...', metadata: { type: 'reference' } }
],
chunkSize: 500, // Words per chunk
overlap: 50 // Overlap between chunks
})
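The chunkSize and overlap options imply splitting documents into overlapping word windows. A rough sketch of that splitting (an assumption about the SDK's internals, not its actual code):

```typescript
// Sketch of overlapping word-window chunking as implied by chunkSize/overlap
// (both measured in words). An assumption, not the SDK's implementation.
function chunkWords(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize')
  const words = text.split(/\s+/).filter(Boolean)
  const step = chunkSize - overlap
  const chunks: string[] = []
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + chunkSize).join(' '))
    if (start + chunkSize >= words.length) break // last window reached the end
  }
  return chunks
}
```

With chunkSize 500 and overlap 50, each chunk shares its last 50 words with the start of the next, so a sentence cut at a boundary still appears whole in one chunk.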
// RAG query - searches, retrieves context, generates answer
const answer = await vectors.rag({
query: 'How do I handle OAuth tokens?',
topK: 5,
generationModel: AIModels.LLAMA_3_8B,
temperature: 0.7
})
console.log(answer.data)
// {
// answer: 'To handle OAuth tokens in WORKWAY...',
// sources: [{ id: 'guide-1_chunk_3', score: 0.89, text: '...' }],
// query: 'How do I handle OAuth tokens?'
// }
Vectorize Methods
const vectors = createVectorClient(env)
// Store raw vectors
await vectors.upsert({
id: 'item-1',
values: [0.1, 0.2, 0.3, ...], // Embedding vector
metadata: { type: 'product' }
})
// Batch upsert
await vectors.upsertBatch([
{ id: 'item-1', values: [...], metadata: {} },
{ id: 'item-2', values: [...], metadata: {} }
])
// Query with vector
const results = await vectors.query({
vector: [0.1, 0.2, ...],
topK: 10,
filter: { category: 'electronics' }
})
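topK ranking is driven by vector similarity. Cosine similarity, the most common metric, can be sketched as follows (whether Vectorize uses cosine for a given index depends on how the index was created):

```typescript
// Cosine similarity between two equal-length vectors: 1 means identical
// direction, 0 means orthogonal. Illustrative of topK ranking, not the
// SDK's internal code.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```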
// Text-based operations (auto-generates embeddings)
await vectors.storeText({ id, text, metadata })
await vectors.searchText({ query, topK, filter })
// Build knowledge base
await vectors.buildKnowledgeBase({ documents, chunkSize, overlap })
// RAG query
await vectors.rag({ query, topK, generationModel, systemPrompt })
// Recommendations (collaborative filtering)
await vectors.recommend({
userId: 'user-1',
itemIds: ['item-viewed-1', 'item-purchased-2'],
topK: 10
})
// Delete vectors
await vectors.delete(['id-1', 'id-2'])
Vectorize Configuration
Add Vectorize binding to wrangler.toml:
[[vectorize]]
binding = "VECTORDB"
index_name = "my-index"
[ai]
binding = "AI" # Required for auto-embeddings
HTTP Module
import { http } from '@workway/sdk'
// GET request
const { data } = await http.get('https://api.example.com/users', {
headers: { 'Authorization': 'Bearer token' },
query: { limit: '10' }
})
// POST request with retry
const result = await http.post('https://api.example.com/data', {
body: { name: 'test' },
retry: { attempts: 3, backoff: 'exponential' }
})
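The retry option is declarative; its presumed semantics (re-run the request with exponentially growing delays) can be sketched as a standalone helper. This is an illustration of the behavior, not the SDK's implementation:

```typescript
// Retry an async operation with exponential backoff between attempts.
// Sketch of the assumed semantics of { attempts, backoff: 'exponential' }.
async function withRetry<T>(
  fn: () => Promise<T>,
  opts: { attempts?: number; baseDelayMs?: number } = {}
): Promise<T> {
  const { attempts = 3, baseDelayMs = 100 } = opts
  let lastError: unknown
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < attempts - 1) {
        // Delays grow 100ms, 200ms, 400ms, ...
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError
}
```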
// Other methods
await http.put(url, options)
await http.patch(url, options)
await http.delete(url, options)
Cache Module
import { createCache } from '@workway/sdk'
// Create cache from KV binding
const cache = createCache(env.CACHE)
// Set with TTL (seconds)
await cache.set('user:123', userData, { ttl: 3600 })
// Get
const user = await cache.get('user:123')
// Get or set pattern
const data = await cache.getOrSet('key', async () => {
return await expensiveOperation()
}, { ttl: 300 })
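getOrSet is equivalent to a get, a computed fallback, and a set. A Map-backed stand-in makes the contract concrete (the real cache is KV-backed with TTLs, which this sketch ignores):

```typescript
// Map-backed illustration of the getOrSet contract (TTL ignored here).
const store = new Map<string, unknown>()

async function getOrSet<T>(key: string, compute: () => Promise<T>): Promise<T> {
  if (store.has(key)) return store.get(key) as T // cache hit: skip compute
  const value = await compute()                  // cache miss: run the work
  store.set(key, value)                          // remember it for next time
  return value
}
```

A second call with the same key returns the stored value without invoking the compute function again.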
// Invalidate by tags
await cache.set('post:1', post, { tags: ['user:123', 'posts'] })
await cache.invalidateByTags(['user:123']) // Clears all tagged entries
Storage Module
import { createKVStorage, createObjectStorage } from '@workway/sdk'
// Key-Value storage (Cloudflare KV)
const kv = createKVStorage(env.MY_KV)
await kv.set('config', { theme: 'dark' })
const config = await kv.get('config')
const keys = await kv.list({ prefix: 'user:' })
// Object storage (Cloudflare R2)
const storage = createObjectStorage(env.MY_BUCKET)
await storage.uploadFile('docs/report.pdf', pdfBuffer, {
contentType: 'application/pdf'
})
const file = await storage.downloadFile('docs/report.pdf')
const metadata = await storage.getMetadata('docs/report.pdf')
Transform Module
import { transform } from '@workway/sdk'
// Array operations
const grouped = transform.groupBy(users, u => u.department)
const unique = transform.unique(items, i => i.id)
const chunks = transform.chunk(items, 10)
const sorted = transform.sortBy(items, i => i.date, 'desc')
// Data parsing
const data = transform.parseJSON(text, fallback)
const rows = transform.parseCSV(csvText)
const csv = transform.toCSV(data)
// Date formatting
const formatted = transform.formatDate(date, 'YYYY-MM-DD HH:mm')
const relative = transform.relativeTime(date) // "2 hours ago"
// String operations
const slug = transform.slugify('Hello World') // "hello-world"
const short = transform.truncate(text, 100)
const filled = transform.template('Hello {{name}}!', { name: 'World' })
Execution Context
Every workflow receives a context object:
interface ExecutionContext {
// User inputs
inputs: Record<string, any>
// Trigger data
trigger: {
type: 'webhook' | 'schedule' | 'manual' | 'poll'
data?: any
}
// Connected integrations
integrations: {
[service: string]: ServiceClient
}
// User secrets (encrypted)
secrets: Record<string, string>
// Execution metadata
executionId: string
userId: string
workflowId: string
// State management
state: {
get(key: string): Promise<any>
set(key: string, value: any): Promise<void>
delete(key: string): Promise<void>
}
// Caching
cache: {
get(key: string): Promise<any>
set(key: string, value: any, options?: CacheOptions): Promise<void>
delete(key: string): Promise<void>
}
// Logging
log: {
info(message: string, data?: any): void
warn(message: string, data?: any): void
error(message: string, data?: any): void
}
}
Examples
Example 1: Google Forms → Notion Database
export default defineWorkflow({
name: 'Google Forms to Notion',
type: 'integration',
pricing: { model: 'subscription', price: 5, executions: 'unlimited' },
integrations: [
{ service: 'google_forms', scopes: ['read_responses'] },
{ service: 'notion', scopes: ['write_pages'] }
],
inputs: {
formId: { type: 'google_form_picker', required: true },
notionDb: { type: 'notion_database_picker', required: true }
},
trigger: webhook({ service: 'google_forms', event: 'form_response' }),
async execute({ trigger, inputs, integrations }) {
const response = trigger.data
await integrations.notion.pages.create({
parent: { database_id: inputs.notionDb },
properties: {
'Name': { title: [{ text: { content: response.answers.name } }] },
'Email': { email: response.answers.email },
'Submitted': { date: { start: new Date().toISOString() } }
}
})
return { success: true }
}
})
Example 2: Meeting Transcription → AI Summary → Notion
export default defineWorkflow({
name: 'Meeting Summary Generator',
type: 'ai-enhanced',
pricing: { model: 'subscription', price: 20, executions: 50 },
integrations: [
{ service: 'google_meet', scopes: ['read_recordings'] },
{ service: 'notion', scopes: ['write_pages'] }
],
inputs: {
notionDb: { type: 'notion_database_picker', required: true },
summaryLength: {
type: 'select',
options: ['brief', 'detailed'],
default: 'brief'
}
},
trigger: webhook({ service: 'google_meet', event: 'recording_ready' }),
async execute({ trigger, inputs, integrations, env }) {
const recording = trigger.data
const ai = createAIClient(env)
// Get transcript
const transcript = await integrations.google_meet.getTranscript(recording.id)
// AI summarization using Workers AI
const summary = await ai.generateText({
model: AIModels.LLAMA_3_8B, // High-quality model
system: `Summarize meeting transcript. Format: ${inputs.summaryLength}`,
prompt: transcript,
max_tokens: inputs.summaryLength === 'brief' ? 500 : 1500,
cache: false // Don't cache - each meeting is unique
})
// Extract action items with AI
const actionItems = await ai.generateText({
model: AIModels.LLAMA_2_7B, // Smaller, faster model for simple task
system: 'Extract action items from meeting summary. Return JSON array.',
prompt: summary.data,
max_tokens: 300
})
// Save to Notion
await integrations.notion.pages.create({
parent: { database_id: inputs.notionDb },
properties: {
'Title': { title: [{ text: { content: recording.title } }] },
'Date': { date: { start: recording.date } },
'Summary': { rich_text: [{ text: { content: summary.data } }] },
'Action Items': { rich_text: [{ text: { content: actionItems.data } }] }
}
})
return { success: true, summary: summary.data, actionItems: JSON.parse(actionItems.data) }
}
})
Testing
// test/workflow.test.ts
import { test, expect } from '@workway/testing'
import workflow from '../workflow'
// Minimal stub for the Notion client passed to the workflow under test
const mockNotionClient = { pages: { create: async () => ({ id: 'test-page' }) } }
test('processes payment correctly', async () => {
const result = await workflow.execute({
trigger: {
type: 'webhook',
data: {
object: {
amount: 1000,
currency: 'usd',
receipt_email: '[email protected]'
}
}
},
inputs: {
notionDatabaseId: 'test-db-id'
},
integrations: {
notion: mockNotionClient
}
})
expect(result.success).toBe(true)
})
CLI Commands
# Create new workflow
npx create-workway-workflow
# Test locally
npm run test
# Get cost estimate
npx workway estimate
# Deploy to WORKWAY
npx workway publish
# View logs
npx workway logs <workflow-id>
# Update workflow
npx workway update
Cost Optimization
WORKWAY uses Cloudflare Workers AI exclusively for maximum cost efficiency:
1. Use the Right Model
import { createAIClient, AIModels } from '@workway/sdk/workers-ai'
const ai = createAIClient(env)
// Fast & cheap for simple tasks
await ai.generateText({
model: AIModels.LLAMA_2_7B, // $0.005/1M tokens
prompt: 'Classify this text...'
})
// More capable for complex reasoning
await ai.generateText({
model: AIModels.LLAMA_3_8B, // $0.01/1M tokens
prompt: 'Analyze this document...'
})
2. Avoid AI When Possible
// Use 'integration' type for non-AI workflows
type: 'integration' // ~$0.001/execution
// Only use AI when needed
type: 'ai-enhanced' // ~$0.01-0.10/execution depending on tokens
3. Minimize Token Usage
// Be concise in prompts
await ai.generateText({
model: AIModels.LLAMA_2_7B,
system: 'Reply with only: positive, negative, or neutral',
prompt: reviewText,
max_tokens: 10 // Limit output
})
4. Cache Results
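The example below keys its cache with a `hash` helper that the SDK does not provide. One minimal stand-in uses a SHA-256 digest, shown here with `node:crypto` (available in Workers under the `nodejs_compat` flag; Web Crypto's `crypto.subtle.digest` works too):

```typescript
import { createHash } from 'node:crypto'

// Hypothetical cache-key helper (not part of @workway/sdk):
// a short, stable hex digest of the input text.
function hash(text: string): string {
  return createHash('sha256').update(text).digest('hex').slice(0, 16)
}
```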
// Cache expensive AI operations at the application level
// (hash() is not provided by the SDK - supply your own digest helper)
const cacheKey = `sentiment:${hash(text)}`
const cached = await env.KV.get(cacheKey)
if (cached) return JSON.parse(cached)
const result = await ai.generateText({ ... })
await env.KV.put(cacheKey, JSON.stringify(result), { expirationTtl: 3600 })
Best Practices
✅ DO
// Use integration type for API workflows
type: 'integration' // Runs cheap on Cloudflare
// Cache expensive AI operations
let cached = await cache.get(key)
if (!cached) {
cached = await ai.generateText(...)
await cache.set(key, cached, { ttl: 3600 })
}
// Use smaller model for simple tasks
await ai.generateText({
model: AIModels.LLAMA_2_7B, // Faster, cheaper than LLAMA_3_8B
prompt: 'Classify this as A, B, or C'
})
// Parallel execution when possible
const [research, data] = await Promise.all([
ai.generateText(...),
http.get(...)
])
❌ DON'T
// Don't use AI for simple logic
await ai.generateText({ prompt: 'Is 10 > 5?' }) // Use if/else!
// Don't make sequential calls when parallel is possible
const a = await ai.generateText(...)
const b = await ai.generateText(...) // Should be Promise.all
// Don't use large models for simple tasks
await ai.generateText({
model: AIModels.LLAMA_3_8B, // Overkill for yes/no
prompt: 'Say yes or no'
}) // Use LLAMA_2_7B instead
// Don't forget to cache
await ai.generateText({ prompt: 'same prompt every time' }) // Cache this!
License
MIT
