
@vettly/express

v0.1.18

Express middleware for content moderation: platform-ready moderation in a single middleware.

@vettly/express

Express middleware for UGC moderation. Every request evaluated, every decision recorded with a full audit trail.

UGC Moderation Essentials

Apps with user-generated content need four things to stay compliant and keep users safe. This middleware handles all four at the route level:

| Requirement | Express Integration |
|-------------|---------------------|
| Content filtering | moderateContent() middleware |
| User reporting | Re-exported SDK client (POST /v1/reports) |
| User blocking | Re-exported SDK client (POST /v1/blocks) |
| Audit trail | req.moderationResult.decisionId on every request |

import { moderateContent } from '@vettly/express'

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'app-store',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any
    // moderationResult.decisionId — store for audit trail
    res.json({ success: true })
  }
)

Why Middleware-Level Moderation?

Content moderation at the middleware layer means:

  • Consistent enforcement - Every route is protected by the same policy
  • Fail-open safety - Errors don't block legitimate traffic
  • Audit trail access - A decision ID is attached to every request
  • Graduated responses - Handle block, flag, and warn differently

Installation

npm install @vettly/express @vettly/sdk

Quick Start

import express from 'express'
import { moderateContent } from '@vettly/express'

const app = express()
app.use(express.json())

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    // Content passed moderation - save with audit trail
    const { moderationResult } = req as any

    await db.comments.create({
      content: req.body.content,
      moderationDecisionId: moderationResult.decisionId,
      action: moderationResult.action
    })

    res.json({ success: true })
  }
)

Middleware Options

import { moderateContent } from '@vettly/express'

app.post('/api/posts',
  moderateContent({
    // Required
    apiKey: process.env.VETTLY_API_KEY!,
    field: 'body.content',  // Path to content in request

    // Optional: policy ID (defaults to 'moderate')
    policyId: 'community-safe',
    // Optional: Custom handlers for each action
    onBlock: (req, res, result) => {
      // Content violates policy - custom response
      res.status(403).json({
        error: 'Content blocked',
        decisionId: result.decisionId,
        categories: result.categories.filter(c => c.triggered)
      })
    },

    onFlag: (req, res, result) => {
      // Content flagged for review - still allows through
      console.log(`Flagged content: ${result.decisionId}`)
    },

    onWarn: (req, res, result) => {
      // Minor concern - user should be notified
      res.setHeader('X-Content-Warning', 'true')
    }
  }),
  yourHandler
)

Options Reference

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| apiKey | string | Yes | Your Vettly API key |
| policyId | string | No | Policy ID (default: 'moderate') |
| field | string \| function | Yes | Path to content or async extractor function |
| onBlock | function | No | Custom handler for blocked content |
| onFlag | function | No | Custom handler for flagged content |
| onWarn | function | No | Custom handler for warned content |
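When field is a string, it is read as a dot-path into the Express request object. The following is an illustrative sketch of that lookup (not the package's actual implementation; resolveFieldPath and fakeReq are hypothetical names for demonstration):

```typescript
// Sketch only: how a string option like field: 'body.content' resolves
// against the request. The middleware's real resolver may differ.
function resolveFieldPath(obj: unknown, path: string): unknown {
  return path
    .split('.')
    .reduce<any>((cur, key) => (cur == null ? undefined : cur[key]), obj)
}

// Simulated request, as the middleware would see it after express.json()
const fakeReq = { body: { content: 'hello world' } }
console.log(resolveFieldPath(fakeReq, 'body.content')) // 'hello world'
console.log(resolveFieldPath(fakeReq, 'body.missing')) // undefined
```

A missing path resolving to undefined is why deeply nested or variable request shapes are better served by the function form shown in Dynamic Field Extraction below.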


Dynamic Field Extraction

For complex request structures, use a function:

app.post('/api/posts',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'social-media',
    field: async (req) => {
      // Combine multiple fields
      const { title, body, tags } = req.body
      return `${title}\n\n${body}\n\nTags: ${tags.join(', ')}`
    }
  }),
  yourHandler
)

Accessing the Decision

The moderation result is attached to the request:

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any

    // Available fields
    console.log(moderationResult.decisionId)   // UUID for audit trail
    console.log(moderationResult.action)       // 'allow' | 'warn' | 'flag' | 'block'
    console.log(moderationResult.safe)         // boolean
    console.log(moderationResult.flagged)      // boolean
    console.log(moderationResult.categories)   // Array of { category, score, triggered }
    console.log(moderationResult.latency)      // Response time in ms

    // Store decision ID for compliance
    await db.posts.create({
      content: req.body.content,
      userId: req.user.id,
      moderationDecisionId: moderationResult.decisionId,
      moderationAction: moderationResult.action
    })

    res.json({ success: true })
  }
)

Graduated Response Handling

Handle each action type differently:

app.post('/api/messages',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'messaging',
    field: 'body.message',

    onBlock: (req, res, result) => {
      // Hard block - content violates policy
      res.status(403).json({
        error: 'Message blocked',
        reason: 'Content violates community guidelines',
        decisionId: result.decisionId
      })
    },

    onFlag: (req, res, result) => {
      // Queue for human review but allow message
      queueForReview({
        decisionId: result.decisionId,
        content: req.body.message,
        categories: result.categories.filter(c => c.triggered)
      })
      // Continues to handler
    },

    onWarn: (req, res, result) => {
      // Add warning header but allow
      res.setHeader('X-Content-Warning', 'Please be mindful of community guidelines')
      // Continues to handler
    }
  }),
  async (req, res) => {
    // Message allowed (or was flag/warn)
    await sendMessage(req.body.message)
    res.json({ sent: true })
  }
)

Error Handling

The middleware fails open by default (errors allow content through):

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any

    if (!moderationResult) {
      // Moderation failed - log but allow through
      console.warn('Moderation unavailable, allowing content')
    }

    // Your logic
    res.json({ success: true })
  }
)

To fail closed (block on errors), handle it explicitly:

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req, res) => {
    const { moderationResult } = req as any

    if (!moderationResult) {
      // Fail closed - reject if moderation unavailable
      return res.status(503).json({ error: 'Content moderation unavailable' })
    }

    res.json({ success: true })
  }
)

Multiple Content Fields

Moderate multiple fields in the same request:

import { ModerationClient } from '@vettly/express'

const client = new ModerationClient({ apiKey: process.env.VETTLY_API_KEY! })

app.post('/api/profiles', async (req, res) => {
  const { displayName, bio, website } = req.body

  // Check each field
  const [nameResult, bioResult] = await Promise.all([
    client.check({ content: displayName, policyId: 'usernames' }),
    client.check({ content: bio, policyId: 'bios' })
  ])

  if (nameResult.action === 'block' || bioResult.action === 'block') {
    return res.status(403).json({
      error: 'Profile content blocked',
      decisions: {
        displayName: nameResult.decisionId,
        bio: bioResult.decisionId
      }
    })
  }

  // Save profile with decision IDs
  await db.profiles.create({
    ...req.body,
    moderationDecisions: {
      displayName: nameResult.decisionId,
      bio: bioResult.decisionId
    }
  })

  res.json({ success: true })
})

Protecting Multiple Routes

Apply to all routes matching a pattern:

// Moderate all /api/ugc/* routes
app.use('/api/ugc',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'user-content',
    field: (req) => req.body.content || req.body.text || ''
  })
)

// Individual routes inherit moderation
app.post('/api/ugc/comments', saveComment)
app.post('/api/ugc/reviews', saveReview)
app.post('/api/ugc/posts', savePost)

TypeScript Support

Full TypeScript support with typed request:

import { Request, Response, NextFunction } from 'express'
import { moderateContent } from '@vettly/express'
import type { CheckResponse } from '@vettly/sdk'

interface ModeratedRequest extends Request {
  moderationResult?: CheckResponse
}

app.post('/api/comments',
  moderateContent({
    apiKey: process.env.VETTLY_API_KEY!,
    policyId: 'community-safe',
    field: 'body.content'
  }),
  async (req: ModeratedRequest, res: Response) => {
    const { moderationResult } = req

    if (moderationResult) {
      console.log(`Decision: ${moderationResult.decisionId}`)
    }

    res.json({ success: true })
  }
)

Re-exported SDK

The SDK client is re-exported for convenience:

import { ModerationClient, moderateContent } from '@vettly/express'

// Use middleware for route protection
app.post('/comments', moderateContent({ ... }), handler)

// Use client directly for complex flows
const client = new ModerationClient({ apiKey: '...' })
const result = await client.check({ content, policyId })

Get Your API Key

  1. Sign up at vettly.dev
  2. Go to Dashboard > API Keys
  3. Create and copy your key
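The examples above read the key from the VETTLY_API_KEY environment variable. One way to supply it for local development (the placeholder value is yours to replace; in production, prefer your platform's secret manager):

```shell
# Make the key available to the Node process; never commit it to source control.
export VETTLY_API_KEY="your-key-here"

# Quick sanity check that the variable is set before starting the server
[ -n "$VETTLY_API_KEY" ] && echo "VETTLY_API_KEY is set"
```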

Links