@vettly/express
v0.1.18

Express middleware for UGC moderation. Every request evaluated, every decision recorded with a full audit trail.
UGC Moderation Essentials
Apps with user-generated content need four things to stay compliant and keep users safe. This middleware handles all four at the route level:
| Requirement | Express Integration |
|-------------|---------------------|
| Content filtering | moderateContent() middleware |
| User reporting | Re-exported SDK client (POST /v1/reports) |
| User blocking | Re-exported SDK client (POST /v1/blocks) |
| Audit trail | req.moderationResult.decisionId on every request |
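
The middleware itself covers content filtering and the audit trail; user reporting and blocking go through the HTTP endpoints listed above via the re-exported SDK client. A minimal sketch of calling the reports endpoint directly, assuming a JSON body, a Bearer auth header, and an `api.vettly.dev` base URL (the payload field names, header scheme, and URL are assumptions, not documented by this package):

```ts
// Hypothetical helper: builds the HTTP request for POST /v1/reports.
// Field names, auth scheme, and base URL are assumptions.
interface ReportPayload {
  reporterId: string
  contentId: string
  reason: string
}

interface HttpRequest {
  url: string
  method: 'POST'
  headers: Record<string, string>
  body: string
}

function buildReportRequest(apiKey: string, payload: ReportPayload): HttpRequest {
  return {
    url: 'https://api.vettly.dev/v1/reports', // assumed endpoint
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(payload)
  }
}

// Inside a route handler, send it with fetch:
// const r = buildReportRequest(process.env.VETTLY_API_KEY!, { reporterId, contentId, reason })
// await fetch(r.url, { method: r.method, headers: r.headers, body: r.body })
```

The same shape would apply to POST /v1/blocks with a block-specific payload.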
```ts
import { moderateContent } from '@vettly/express'

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'app-store',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any
    // store moderationResult.decisionId for the audit trail
    res.json({ success: true })
  }
)
```

Why Middleware-Level Moderation?
Content moderation at the middleware layer means:
- Consistent enforcement - Every route protected by the same policy
- Fail-open safety - Errors don't block legitimate traffic
- Audit trail access - Decision ID attached to every request
- Graduated responses - Handle block, flag, and warn differently
Installation
```sh
npm install @vettly/express @vettly/sdk
```

Quick Start
```ts
import express from 'express'
import { moderateContent } from '@vettly/express'

const app = express()
app.use(express.json())

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    // Content passed moderation - save with audit trail
    const { moderationResult } = req as any
    await db.comments.create({
      content: req.body.content,
      moderationDecisionId: moderationResult.decisionId,
      action: moderationResult.action
    })
    res.json({ success: true })
  }
)
```

Middleware Options
```ts
import { moderateContent } from '@vettly/express'

app.post('/api/posts',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!, // required
    policyId: 'community-safe',          // optional (default: 'moderate')
    field: 'body.content',               // required: path to content in request

    // Optional: custom handlers for each action
    onBlock: (req, res, result) => {
      // Content violates policy - send a custom response
      res.status(403).json({
        error: 'Content blocked',
        decisionId: result.decisionId,
        categories: result.categories.filter(c => c.triggered)
      })
    },
    onFlag: (req, res, result) => {
      // Content flagged for review - still allowed through
      console.log(`Flagged content: ${result.decisionId}`)
    },
    onWarn: (req, res, result) => {
      // Minor concern - user should be notified
      res.setHeader('X-Content-Warning', 'true')
    }
  }),
  yourHandler
)
```

Options Reference
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| apiKey | string | Yes | Your Vettly API key |
| policyId | string | No | Policy ID (default: 'moderate') |
| field | string \| function | Yes | Path to content or async extractor function |
| onBlock | function | No | Custom handler for blocked content |
| onFlag | function | No | Custom handler for flagged content |
| onWarn | function | No | Custom handler for warned content |
Dynamic Field Extraction
For complex request structures, use a function:
```ts
app.post('/api/posts',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'social-media',
    field: async (req) => {
      // Combine multiple fields into one piece of content
      const { title, body, tags } = req.body
      return `${title}\n\n${body}\n\nTags: ${tags.join(', ')}`
    }
  }),
  yourHandler
)
```

Accessing the Decision
The moderation result is attached to the request:
```ts
app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any

    // Available fields
    console.log(moderationResult.decisionId) // UUID for audit trail
    console.log(moderationResult.action)     // 'allow' | 'warn' | 'flag' | 'block'
    console.log(moderationResult.safe)       // boolean
    console.log(moderationResult.flagged)    // boolean
    console.log(moderationResult.categories) // Array of { category, score, triggered }
    console.log(moderationResult.latency)    // Response time in ms

    // Store decision ID for compliance
    await db.posts.create({
      content: req.body.content,
      userId: req.user.id,
      moderationDecisionId: moderationResult.decisionId,
      moderationAction: moderationResult.action
    })
    res.json({ success: true })
  }
)
```

Graduated Response Handling
Handle each action type differently:
```ts
app.post('/api/messages',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'messaging',
    field: 'body.message',
    onBlock: (req, res, result) => {
      // Hard block - content violates policy
      res.status(403).json({
        error: 'Message blocked',
        reason: 'Content violates community guidelines',
        decisionId: result.decisionId
      })
    },
    onFlag: (req, res, result) => {
      // Queue for human review but allow the message
      queueForReview({
        decisionId: result.decisionId,
        content: req.body.message,
        categories: result.categories.filter(c => c.triggered)
      })
      // Continues to handler
    },
    onWarn: (req, res, result) => {
      // Add warning header but allow
      res.setHeader('X-Content-Warning', 'Please be mindful of community guidelines')
      // Continues to handler
    }
  }),
  async (req, res) => {
    // Message allowed (or was flag/warn)
    await sendMessage(req.body.message)
    res.json({ sent: true })
  }
)
```

Error Handling
The middleware fails open by default (errors allow content through):
```ts
app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any
    if (!moderationResult) {
      // Moderation failed - log but allow through
      console.warn('Moderation unavailable, allowing content')
    }
    // Your logic
    res.json({ success: true })
  }
)
```

To fail closed (block on errors), handle it explicitly:
```ts
app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any
    if (!moderationResult) {
      // Fail closed - reject if moderation is unavailable
      return res.status(503).json({ error: 'Content moderation unavailable' })
    }
    res.json({ success: true })
  }
)
```

Multiple Content Fields
Moderate multiple fields in the same request:
```ts
import { ModerationClient } from '@vettly/express'

const client = new ModerationClient({ apiKey: process.env.VETTLY_API_KEY! })

app.post('/api/profiles', async (req, res) => {
  const { displayName, bio, website } = req.body

  // Check each field against its own policy
  const [nameResult, bioResult] = await Promise.all([
    client.check({ content: displayName, policyId: 'usernames' }),
    client.check({ content: bio, policyId: 'bios' })
  ])

  if (nameResult.action === 'block' || bioResult.action === 'block') {
    return res.status(403).json({
      error: 'Profile content blocked',
      decisions: {
        displayName: nameResult.decisionId,
        bio: bioResult.decisionId
      }
    })
  }

  // Save profile with decision IDs
  await db.profiles.create({
    ...req.body,
    moderationDecisions: {
      displayName: nameResult.decisionId,
      bio: bioResult.decisionId
    }
  })
  res.json({ success: true })
})
```

Protecting Multiple Routes
Apply to all routes matching a pattern:
```ts
// Moderate all /api/ugc/* routes
app.use('/api/ugc',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'user-content',
    field: (req) => req.body.content || req.body.text || ''
  })
)

// Individual routes inherit moderation
app.post('/api/ugc/comments', saveComment)
app.post('/api/ugc/reviews', saveReview)
app.post('/api/ugc/posts', savePost)
```

TypeScript Support
Full TypeScript support with typed request:
```ts
import { Request, Response, NextFunction } from 'express'
import { moderateContent } from '@vettly/express'
import type { CheckResponse } from '@vettly/sdk'

interface ModeratedRequest extends Request {
  moderationResult?: CheckResponse
}

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req: ModeratedRequest, res: Response) => {
    const { moderationResult } = req
    if (moderationResult) {
      console.log(`Decision: ${moderationResult.decisionId}`)
    }
    res.json({ success: true })
  }
)
```

Re-exported SDK
The SDK client is re-exported for convenience:
```ts
import { ModerationClient, moderateContent } from '@vettly/express'

// Use the middleware for route protection
app.post('/comments', moderateContent({ ... }), handler)

// Use the client directly for complex flows
const client = new ModerationClient({ apiKey: '...' })
const result = await client.check({ content, policyId })
```

Get Your API Key
- Sign up at vettly.dev
- Go to Dashboard > API Keys
- Create and copy your key
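
The examples above read the key from `process.env.VETTLY_API_KEY`. One way to provide it is via the shell environment (the value shown is a placeholder, not a real key format):

```sh
# Keep the key out of source control; set it in the environment instead.
export VETTLY_API_KEY="your-api-key"
```

In production, set the variable through your process manager or deployment platform rather than a shell profile.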
Links
- vettly.dev - Sign up
- docs.vettly.dev - Documentation
- @vettly/sdk - Core SDK
