@inception-agents/vercel
v0.1.0
Next.js middleware integration for Inception Agents. Detects AI agent traffic at the edge and serves optimized content without affecting human visitors.
Installation
```
npm install @inception-agents/vercel
```
Peer dependency: next >= 14.0.0
Quick Start
Create or update your middleware.ts at the project root:
```ts
import { withInception } from '@inception-agents/vercel';

export const middleware = withInception({
  apiKey: process.env.INCEPTION_API_KEY!,
});

export const config = {
  matcher: ['/((?!_next/static|_next/image|favicon.ico).*)'],
};
```
That's it. Human traffic passes through untouched. AI agents receive optimized content automatically.
API Reference
withInception(options)
Creates a Next.js-compatible middleware function that wraps the full Inception pipeline. When the pipeline returns optimized content, that response is served directly. Otherwise, Next.js continues handling the request normally.
```ts
import { withInception } from '@inception-agents/vercel';

export const middleware = withInception({
  apiKey: process.env.INCEPTION_API_KEY!,
  bypassPaths: ['/api/webhooks'],
});
```
Returns: `(request: Request) => Promise<Response>` — a function compatible with the Next.js `middleware.ts` export.
createInceptionMiddleware(options)
Lower-level alternative that returns null for human traffic instead of a pass-through response. Useful when you need to compose with other middleware.
```ts
import { createInceptionMiddleware } from '@inception-agents/vercel';
import { NextResponse } from 'next/server';

const inception = createInceptionMiddleware({
  apiKey: process.env.INCEPTION_API_KEY!,
});

export async function middleware(request: Request) {
  const agentResponse = await inception(request);
  if (agentResponse) return agentResponse;

  // Your custom middleware logic here
  return NextResponse.next();
}
```
Returns: `(request: Request) => Promise<Response | null>`
Configuration
VercelMiddlewareOptions
Extends InceptionConfig from @inception-agents/core.
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | required | Your Inception Agents API key |
| bypassPaths | string[] | [] | Additional paths to skip (beyond core defaults) |
| debug | boolean | false | Enable debug logging |
| excludePaths | string[] | ['/api/', '/_next/', ...] | Paths that bypass detection |
| detectionThreshold | number | 0.7 | Minimum confidence to classify as agent |
| cacheMaxAge | number | 300 | Cache-Control max-age (seconds) |
| enableLlmsTxt | boolean | true | Serve /llms.txt and /llms-full.txt |
| enableJsonLd | boolean | true | Serve JSON-LD enrichment for agents |
| enableAgentCard | boolean | false | Serve /.well-known/agent.json |
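Putting the options from the table together, a fully configured setup might look like the following. All values below other than `apiKey` are illustrative choices, not recommendations:

```ts
import { withInception } from '@inception-agents/vercel';

export const middleware = withInception({
  apiKey: process.env.INCEPTION_API_KEY!,
  debug: process.env.NODE_ENV !== 'production', // verbose logging outside production
  bypassPaths: ['/api/webhooks', '/healthz'],   // skipped in addition to core defaults
  detectionThreshold: 0.8,                      // stricter than the 0.7 default
  cacheMaxAge: 600,                             // Cache-Control max-age of 10 minutes
  enableAgentCard: true,                        // also serve /.well-known/agent.json
});
```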
Exported Types
- `InceptionConfig` — Base configuration interface
- `AgentDetectionResult` — Detection result with agent identity and confidence
- `IntentClassification` — Classified agent intent
- `VercelMiddlewareOptions` — Vercel-specific options extending `InceptionConfig`
How It Works
- Every request passes through the middleware at the Vercel edge
- The Inception pipeline checks the request for AI agent signals (user-agent, IP range, headers)
- If an agent is detected above the confidence threshold, optimized content is served directly
- If the visitor is human, the middleware passes the request through with negligible added latency
- Analytics signals are collected asynchronously (fire-and-forget)
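The gating step above can be sketched as a small pure function. Note that `isAgent`, `confidence`, and `routeRequest` itself are hypothetical names for illustration — they are not part of the package's public API, and the real pipeline inspects the full request:

```ts
// Illustrative shape of a detection result; the real
// AgentDetectionResult type may differ.
interface DetectionSketch {
  isAgent: boolean;
  confidence: number; // 0..1
}

// Serve optimized content only when an agent is detected at or above
// the configured threshold; otherwise pass the request through.
function routeRequest(
  detection: DetectionSketch,
  threshold = 0.7, // mirrors the detectionThreshold default
): 'serve-optimized' | 'pass-through' {
  return detection.isAgent && detection.confidence >= threshold
    ? 'serve-optimized'
    : 'pass-through';
}
```

A low-confidence agent signal is treated like human traffic, which biases the system toward never disturbing human visitors.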
