@vettly/sdk
v0.2.8
Content moderation SDK for apps. Filtering, reporting, blocking, and audit trails.
@vettly/sdk
Content moderation SDK for apps with user-generated content. Filtering, reporting, blocking, and audit trails in one package.
UGC Moderation Essentials
Apps with user-generated content need four things to stay compliant and keep users safe:
| Requirement | Vettly API |
|-------------|------------|
| Content filtering | POST /v1/check — screen text, images, and video |
| User reporting | POST /v1/reports — let users report offensive content |
| User blocking | POST /v1/blocks — block abusive users |
| Audit trail | Every decision logged with timestamps and policy versions |
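The reporting and blocking endpoints in the table are plain JSON-over-HTTP. A minimal sketch of the report call follows; the request-body field names (`contentId`, `reporterId`, `reason`) are assumptions for illustration, since the table only names the endpoint — check docs.vettly.dev for the real schema.

```typescript
// Hypothetical request body for POST /v1/reports.
// Field names are assumptions, not the documented schema.
interface ReportRequest {
  contentId: string
  reporterId: string
  reason: string
}

function buildReportRequest(
  contentId: string,
  reporterId: string,
  reason: string
): ReportRequest {
  return { contentId, reporterId, reason }
}

// Send the report with a bearer token, mirroring the Swift example below.
async function submitReport(apiKey: string, report: ReportRequest): Promise<Response> {
  return fetch('https://api.vettly.dev/v1/reports', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(report),
  })
}
```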
Swift (REST API)
// Content filtering
let url = URL(string: "https://api.vettly.dev/v1/check")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONEncoder().encode([
"content": userMessage,
"contentType": "text",
"policyId": "app-store"
])
let (data, _) = try await URLSession.shared.data(for: request)
let decision = try JSONDecoder().decode(ModerationDecision.self, from: data)
if decision.action == "block" { /* remove content */ }
React Native / Expo
import { createClient } from '@vettly/sdk'
const client = createClient('vettly_live_...')
// Content filtering
const result = await client.check({
content: userMessage,
policyId: 'app-store'
})
if (result.action === 'block') { /* remove content */ }
Installation
npm install @vettly/sdk
The Full Picture
Vettly is decision infrastructure, not just a classification API. Every API call returns a decisionId that links to:
- The exact policy version applied (immutable, timestamped)
- Category scores and thresholds that triggered the action
- Content fingerprint for tamper-evident verification
- Evidence URLs for visual content (screenshots, frames)
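The content fingerprint in the list above can also be recomputed on your side for tamper checks. A sketch using Node's built-in crypto; the table later in this README says the hash is SHA-256, but the exact canonicalization of the content and the field that stores the hash on the decision record are assumptions here, not documented behavior.

```typescript
import { createHash } from 'node:crypto'

// Recompute a SHA-256 fingerprint of the raw content string.
// How Vettly canonicalizes content before hashing is an assumption.
function fingerprint(content: string): string {
  return createHash('sha256').update(content, 'utf8').digest('hex')
}

// Compare a locally computed hash against a stored one
// (e.g. a hash you saved alongside the decisionId).
function matchesFingerprint(content: string, storedHash: string): boolean {
  return fingerprint(content) === storedHash
}
```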
const { decisionId, action, categories } = await client.check({
content: userContent,
policyId: 'app-store'
})
// Store decisionId with your content for audit trail
await db.posts.create({
content: userContent,
moderationDecisionId: decisionId,
action,
})
// Later: retrieve for compliance or audit
const decision = await client.getDecision(decisionId)
console.log(decision.decision.policy.version) // Exact policy version
console.log(decision.decision.categories) // All scores and thresholds
Quick Start
import { createClient } from '@vettly/sdk'
const client = createClient('vettly_live_...')
const result = await client.check({
content: 'User-generated text',
policyId: 'community-safe'
})
if (result.action === 'block') {
// Content blocked - decisionId available for audit trail
console.log(`Blocked: ${result.decisionId}`)
}
Get Your API Key
- Sign up at vettly.dev
- Go to Dashboard > API Keys
- Create and copy your key
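Keys created in the dashboard should stay out of source control. A minimal sketch of reading one from the environment; the variable name `VETTLY_API_KEY` is an arbitrary choice, not something the SDK mandates.

```typescript
// Read the API key from an environment map (defaults to process.env),
// failing loudly if it is missing. VETTLY_API_KEY is a placeholder name.
function getApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.VETTLY_API_KEY
  if (!key) throw new Error('VETTLY_API_KEY is not set')
  return key
}
```

Then pass `getApiKey()` to `createClient` instead of hard-coding the `vettly_live_...` literal.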
Classification APIs vs Decision Infrastructure
| Capability | Classification APIs | Vettly |
|------------|---------------------|--------|
| Content scoring | Yes | Yes |
| Graduated actions (allow/warn/flag/block) | Sometimes | Yes |
| Decision audit trail | No | Every decision |
| Policy versioning | No | Immutable versions |
| Content fingerprinting | No | SHA256 hash |
| Evidence preservation | No | Signed URLs |
| Policy replay | No | Re-evaluate with new policy |
| DSA Article 17 ready | No | Yes |
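The graduated actions in the comparison above map naturally onto a switch in your app. A sketch of app-side handling; the behavior per action (publish, warn, queue, reject) is a placeholder policy of this example, not something the SDK prescribes.

```typescript
// The four graduated actions Vettly returns.
type ModerationAction = 'allow' | 'warn' | 'flag' | 'block'

// Map each action to app behavior. The string results stand in for
// whatever your app actually does (DB writes, review queues, etc.).
function handleAction(action: ModerationAction, post: { id: string }): string {
  switch (action) {
    case 'allow':
      return `publish ${post.id}`
    case 'warn':
      return `publish ${post.id} with warning`
    case 'flag':
      return `queue ${post.id} for review`
    case 'block':
      return `reject ${post.id}`
  }
}
```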
Core Capabilities
Text Decisions
const result = await client.check({
content: 'User-generated text',
policyId: 'community-safe',
contentType: 'text'
})
console.log(result.action) // 'allow' | 'warn' | 'flag' | 'block'
console.log(result.categories) // Array of { category, score, triggered }
console.log(result.decisionId) // UUID for audit trail
console.log(result.latency) // Response time in ms
console.log(result.cost) // USD cost for this decision
Fast Path (Sub-100ms)
For latency-sensitive applications like live chat:
const result = await client.checkFast({
content: 'User message',
policyId: 'chat-filter'
})
// Optimized endpoint targeting <100ms response times
Image Decisions
// From URL
const result = await client.checkImage(
'https://example.com/image.jpg',
{ policyId: 'strict' }
)
// From base64
const result = await client.checkImage(
'data:image/jpeg;base64,/9j/4AAQ...',
{ policyId: 'strict' }
)
Multi-Modal Decisions
Check text, images, and video in a single request:
const result = await client.checkMultiModal({
text: 'Post caption',
images: [
'https://cdn.example.com/image1.jpg',
'data:image/png;base64,...'
],
video: 'https://cdn.example.com/video.mp4',
policyId: 'social-media',
context: {
useCase: 'social_post',
userId: 'user_123',
locale: 'en-US'
}
})
// Individual results for each content type
result.results.forEach(item => {
console.log(`${item.contentType}: ${item.action}`)
if (item.evidence?.url) {
console.log(` Evidence: ${item.evidence.url}`)
}
})
// Overall decision (most severe action)
console.log(`Overall: ${result.action}`)
Batch Operations
Process multiple items efficiently:
// Synchronous batch
const batchResult = await client.batchCheck({
policyId: 'community-safe',
items: [
{ id: 'post-1', content: 'First post' },
{ id: 'post-2', content: 'Second post' },
{ id: 'post-3', content: 'Third post', contentType: 'image' }
]
})
console.log(batchResult.batchId)
console.log(batchResult.results) // Array of results with matching IDs
console.log(batchResult.totalCost)
// Asynchronous batch with webhook delivery
const asyncBatch = await client.batchCheckAsync({
policyId: 'community-safe',
items: [...],
webhookUrl: 'https://your-app.com/webhooks/batch-complete'
})
console.log(asyncBatch.batchId)
console.log(asyncBatch.estimatedCompletion)
Policy Replay
Re-evaluate historical decisions with different policies:
// User appeals a decision - test with updated policy
const replay = await client.replayDecision(
'original-decision-id',
'community-guidelines-v3' // New policy version
)
// Compare outcomes (fetch the original decision first)
const originalDecision = await client.getDecision('original-decision-id')
console.log(`Original: ${originalDecision.decision.action}`)
console.log(`With new policy: ${replay.action}`)
Dry Run
Test policies without making provider calls:
const dryRun = await client.dryRun('new-policy-id', {
hate_speech: 0.7,
harassment: 0.3,
violence: 0.1
})
console.log(dryRun.evaluation.action) // What action would be taken
console.log(dryRun.evaluation.categories) // Which thresholds trigger
Streaming & Real-Time
For real-time moderation and progress tracking:
import { createStreamingClient } from '@vettly/sdk'
const streaming = createStreamingClient('vettly_live_...')
// Fast path for sub-100ms responses
const fastResult = await streaming.checkFast({
content: 'Live chat message',
policyId: 'chat-filter'
})
// Stream moderation progress (video/multimodal)
for await (const event of streaming.streamProgress(decisionId)) {
if (event.type === 'progress') {
console.log(`${event.step}: ${event.percent}%`)
} else if (event.type === 'frame_result') {
console.log(`Frame ${event.frameNumber}: ${event.action}`)
} else if (event.type === 'complete') {
console.log(`Done: ${event.decisionId}`)
}
}
// OpenAI-compatible streaming format
for await (const chunk of streaming.checkStream({ content, policyId })) {
if (chunk.object === 'moderation.chunk') {
console.log('Partial:', chunk.choices?.[0]?.delta)
} else if (chunk.object === 'moderation.result') {
console.log('Final:', chunk.action)
}
}
WebSocket Real-Time Connection
For high-frequency moderation (chat, live streams):
import { createStreamingClient } from '@vettly/sdk'
const streaming = createStreamingClient('vettly_live_...')
const realtime = streaming.connectRealtime({
policyId: 'live-chat',
onResult: (result) => {
console.log(`Decision: ${result.action} (${result.latency}ms)`)
},
onError: (error) => console.error(error),
onConnect: () => console.log('Connected'),
onDisconnect: () => console.log('Disconnected')
})
await realtime.connect()
// Send messages for moderation
const result = await realtime.moderate('User message here')
console.log(result.action)
// Subscribe to all decisions (for dashboards)
realtime.subscribe()
// Cleanup
realtime.close()
Decision Retrieval & Audit
Get Decision Details
const decision = await client.getDecision('decision-uuid')
console.log(decision.decision.id)
console.log(decision.decision.action)
console.log(decision.decision.categories)
console.log(decision.decision.provider)
console.log(decision.decision.latency)
console.log(decision.decision.cost)
console.log(decision.decision.createdAt)
List Recent Decisions
const decisions = await client.listDecisions({
limit: 50,
offset: 0
})
decisions.decisions.forEach(d => {
console.log(`${d.id}: ${d.action} (${d.createdAt})`)
})
console.log(`Total: ${decisions.total}`)
Get cURL for Debugging
const curl = await client.getCurlCommand('decision-uuid')
console.log(curl)
// curl -X POST https://api.vettly.dev/v1/check ...
Policy Management
Create or Update Policy
const policy = await client.createPolicy('my-policy', `
name: My Community Policy
version: "1.0"
rules:
- category: hate_speech
threshold: 0.7
provider: openai
action: block
- category: harassment
threshold: 0.8
provider: openai
action: flag
fallback:
provider: mock
on_timeout: true
timeout_ms: 5000
`)
console.log(policy.policy.version) // Immutable version hash
Validate Policy Without Saving
const validation = await client.validatePolicy(yamlContent)
if (!validation.valid) {
console.error('Errors:', validation.errors)
}
List Policies
const policies = await client.listPolicies()
policies.policies.forEach(p => {
console.log(`${p.id}: v${p.version} (${p.updatedAt})`)
})
Webhooks
Register a Webhook
const webhook = await client.registerWebhook({
url: 'https://your-app.com/webhooks/vettly',
events: ['decision.blocked', 'decision.flagged'],
description: 'Production webhook for blocked content'
})
console.log(webhook.webhook.id)
console.log(webhook.webhook.secret) // Use for signature verification
Webhook Signature Verification
import { verifyWebhookSignature, constructWebhookEvent } from '@vettly/sdk'
app.post('/webhooks/vettly', async (req, res) => {
const signature = req.headers['x-vettly-signature']
const payload = req.rawBody // Raw body as string
const isValid = await verifyWebhookSignature(
payload,
signature,
process.env.VETTLY_WEBHOOK_SECRET
)
if (!isValid) {
return res.status(401).send('Invalid signature')
}
const event = constructWebhookEvent(payload)
switch (event.type) {
case 'decision.blocked':
// Handle blocked content
await notifyModerator(event.data)
break
case 'decision.flagged':
// Queue for human review
await addToReviewQueue(event.data)
break
}
res.status(200).send('OK')
})
Manage Webhooks
// List all webhooks
const webhooks = await client.listWebhooks()
// Update a webhook
await client.updateWebhook('webhook-id', {
events: ['decision.blocked'],
enabled: true
})
// Test a webhook
const test = await client.testWebhook('webhook-id', 'decision.blocked')
console.log(`Test ${test.success ? 'passed' : 'failed'}: ${test.statusCode}`)
// View delivery logs
const deliveries = await client.getWebhookDeliveries('webhook-id', { limit: 10 })
// Delete a webhook
await client.deleteWebhook('webhook-id')
Idempotency
Prevent duplicate processing with request IDs:
const result = await client.check(
{ content: 'Hello', policyId: 'default' },
{ requestId: 'unique-request-id-123' }
)
// Same requestId returns cached result
const duplicate = await client.check(
{ content: 'Hello', policyId: 'default' },
{ requestId: 'unique-request-id-123' }
)
console.log(result.decisionId === duplicate.decisionId) // true
Error Handling
The SDK provides typed errors for precise handling:
import {
VettlyError,
VettlyAuthError,
VettlyRateLimitError,
VettlyQuotaError,
VettlyValidationError
} from '@vettly/sdk'
try {
await client.check({ content: 'test', policyId: 'default' })
} catch (error) {
if (error instanceof VettlyAuthError) {
// Invalid or expired API key
console.log('Auth error:', error.message)
} else if (error instanceof VettlyRateLimitError) {
// Rate limited - SDK retries automatically, this means retries exhausted
console.log(`Rate limited. Retry after ${error.retryAfter}s`)
} else if (error instanceof VettlyQuotaError) {
// Monthly quota exceeded
console.log(`Quota: ${error.quota?.used}/${error.quota?.limit}`)
console.log(`Resets: ${error.quota?.resetDate}`)
} else if (error instanceof VettlyValidationError) {
// Invalid request parameters
console.log('Validation errors:', error.errors)
} else if (error instanceof VettlyError) {
// Other API errors
console.log(`${error.code}: ${error.message}`)
}
}
Express Middleware
import { createClient, moderateContent } from '@vettly/sdk'
const client = createClient('vettly_live_...')
app.post('/comments',
moderateContent({
client,
policyId: 'community-safe',
field: 'body.comment', // Optional: path to content field
onFlagged: (req, res, result) => {
// Custom handling for flagged content
res.status(200).json({ warning: 'Content flagged for review' })
}
}),
(req, res) => {
// Content passed moderation
// req.moderationResult available with decisionId
}
)
Configuration
import { ModerationClient } from '@vettly/sdk'
const client = new ModerationClient({
apiKey: 'vettly_live_...',
apiUrl: 'https://api.vettly.dev', // Optional: custom API URL
timeout: 30000, // Request timeout in ms (default: 30000)
maxRetries: 3, // Max retries for failures (default: 3)
retryDelay: 1000, // Base delay for backoff in ms (default: 1000)
})
Analytics
const usage = await client.getUsageStats(30) // Last 30 days
console.log('Text:', usage.usage.text.count, 'decisions, $' + usage.usage.text.cost)
console.log('Image:', usage.usage.image.count, 'decisions, $' + usage.usage.image.cost)
console.log('Video:', usage.usage.video.count, 'decisions, $' + usage.usage.video.cost)
console.log('Period:', usage.period.start, 'to', usage.period.end)
Response Format
{
"decisionId": "550e8400-e29b-41d4-a716-446655440000",
"action": "block",
"safe": false,
"flagged": true,
"categories": [
{ "category": "hate_speech", "score": 0.91, "triggered": true },
{ "category": "harassment", "score": 0.08, "triggered": false }
],
"provider": "openai",
"latency": 147,
"cost": 0.000025
}
Who Uses Vettly
Trust & Safety Teams
- Audit trail for every decision
- Policy version history
- Evidence preservation for appeals
- Bulk replay for policy changes
Legal & Compliance
- DSA Article 17 compliance
- Content fingerprinting for integrity
- Decision records for legal discovery
- Policy approval workflows
Engineering Teams
- TypeScript-first SDK
- Automatic retries with exponential backoff
- Typed errors for precise handling
- Express middleware included
Links
- vettly.dev - Sign up
- docs.vettly.dev - Documentation
- Dashboard - Manage policies and review decisions
