# @mira.network/gatekeeper
Authentication, rate limiting, and AI Gateway middleware for Hono.
## Overview
Gatekeeper provides middleware for:
- Initialization - Request tracing and OpenAI client setup for AI Gateway
- Authentication - API key validation via Console Service
- Credit checking - Balance verification before processing
- Rate limiting - Per-minute and per-day limits
## Installation
```bash
npm install @mira.network/gatekeeper
```

## Quick Start
```ts
import { Hono } from 'hono';
import {
  init,
  auth,
  rateLimit,
  type AuthContext,
  type MiraContext,
} from '@mira.network/gatekeeper';

const app = new Hono<{
  Bindings: Env;
  Variables: { auth: AuthContext; mira: MiraContext };
}>();

// 1. Initialize Mira context (top of middleware stack)
app.use('/*', init((c) => ({
  aiGateway: {
    accountId: c.env.CF_ACCOUNT_ID,
    gatewayId: c.env.CF_GATEWAY_ID,
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
})));

// 2. Apply auth and rate limiting to protected routes
app.use('/api/*', auth({ serviceId: 'verify' }));
app.use('/api/*', rateLimit({ rpm: 60, rpd: 1000 }));

app.post('/api/endpoint', (c) => {
  const { traceId, openai } = c.get('mira');
  const { keyId, appId } = c.get('auth');

  // Use pre-configured OpenAI client
  // All requests include traceId + auth metadata in AI Gateway logs
  return c.json({ message: 'Success!' });
});

export default app;
```

## API Reference
### init(getOptions)
Creates middleware that initializes the Mira context with a unique trace ID and pre-configured OpenAI client.
```ts
init((c) => ({
  aiGateway: {
    accountId: 'your-cf-account-id',
    gatewayId: 'your-gateway-id',
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
}))
```

**What it does:**
- Generates a unique `traceId` (UUID) for request tracing
- Creates an OpenAI client configured for Cloudflare AI Gateway (sketched below)
- Attaches the `traceId` to all AI Gateway requests via the `cf-aig-metadata` header
- Sets the `mira` context on the request

**Mira Context:**
```ts
const mira = c.get('mira');
// {
//   traceId: string;         // Unique request ID
//   openai: OpenAI;          // Pre-configured OpenAI client
//   _aiGatewayConfig: {...}; // Internal config (don't use directly)
// }
```
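For orientation, the pre-configured client behaves roughly like an OpenAI SDK instance pointed at your Cloudflare AI Gateway endpoint, with the trace ID forwarded as gateway metadata. The sketch below is illustrative only; the base URL, header names, and BYOK assumption reflect common AI Gateway usage, not this package's internals, and you should always use `c.get('mira').openai` rather than building a client yourself:

```ts
import OpenAI from 'openai';

// Illustrative sketch: approximately what init() wires up for you.
const accountId = 'your-cf-account-id';
const gatewayId = 'your-gateway-id';
const gatewayToken = 'your-ai-gateway-token';

const openai = new OpenAI({
  apiKey: 'managed-by-gateway', // assumes provider keys are stored in AI Gateway (BYOK)
  baseURL: `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/compat`,
  defaultHeaders: {
    'cf-aig-authorization': `Bearer ${gatewayToken}`,
    // traceId (plus auth metadata once auth() has run) appears in AI Gateway logs
    'cf-aig-metadata': JSON.stringify({ traceId: crypto.randomUUID() }),
  },
});
```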
### auth(options)

Creates middleware that validates API keys.
```ts
auth({
  serviceId: 'verify', // Required: Service identifier
})
```

**What it does:**
- Extracts the Bearer token from the `Authorization` header (see the example request below)
- Validates that the token prefix matches the service (e.g., `mk_verify_...`)
- Calls Console Service to validate the full key
- Checks if the app has sufficient credits
- Sets the `auth` context on the request
- Auto-enriches the `mira` context with auth metadata (if `init` was used)
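For example, a client would call a protected route with its service-prefixed key in the `Authorization` header. This is a hypothetical call; the URL and key value are placeholders:

```ts
// Hypothetical request against a gatekeeper-protected route.
const res = await fetch('https://your-worker.example.com/api/endpoint', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer mk_verify_xxxxxxxxxxxx', // prefix must match the serviceId
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ input: 'Hello!' }),
});

if (res.status === 401) {
  // AuthError: missing key, wrong prefix, or the Console Service rejected the key
}
```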
**Auth Context:**
```ts
const auth = c.get('auth');
// {
//   keyId: string;
//   appId: string;
//   userId: string;
// }
```
### rateLimit(options)

Creates middleware that enforces rate limits.
```ts
rateLimit({
  rpm: 60,   // Requests per minute
  rpd: 1000, // Requests per day
})
```

**Rate Limit Headers:**

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1699999999
```
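Clients can use these headers to back off before hitting the limit. Below is a sketch of client-side handling, assuming `X-RateLimit-Reset` is a Unix timestamp in seconds (as the example value suggests) and using the `Retry-After` header documented under `RateLimitError` below; the URL and key are placeholders:

```ts
// Hypothetical client-side backoff driven by the rate limit headers.
const res = await fetch('https://your-worker.example.com/api/endpoint', {
  headers: { Authorization: 'Bearer mk_verify_xxxxxxxxxxxx' },
});

if (res.status === 429) {
  // Rate limited: wait the number of seconds the server asks for
  const retryAfter = Number(res.headers.get('Retry-After') ?? '60');
  await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
} else if (res.headers.get('X-RateLimit-Remaining') === '0') {
  // Out of quota for this window: wait until the reset timestamp
  const resetAt = Number(res.headers.get('X-RateLimit-Reset') ?? '0');
  await new Promise((resolve) => setTimeout(resolve, Math.max(0, resetAt * 1000 - Date.now())));
}
```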
## Error Classes

### AuthError
Thrown when authentication fails.
```ts
import { AuthError } from '@mira.network/gatekeeper';

// Status: 401
// Body: { error: "Unauthorized", message: "..." }
```

### RateLimitError
Thrown when the rate limit is exceeded.
```ts
import { RateLimitError } from '@mira.network/gatekeeper';

// Status: 429
// Body: { error: "Rate Limit Exceeded", message: "..." }
// Headers: Retry-After: <seconds>
```

### InsufficientCreditsError
Thrown when the app has insufficient credits.
```ts
import { InsufficientCreditsError } from '@mira.network/gatekeeper';

// Status: 402
// Body: { error: "Payment Required", message: "Insufficient credits" }
```

## Error Handling
Handle errors in your Hono app:
```ts
import { AuthError, RateLimitError, InsufficientCreditsError } from '@mira.network/gatekeeper';

app.onError((err, c) => {
  if (err instanceof AuthError) {
    return c.json({ error: 'Unauthorized', message: err.message }, 401);
  }
  if (err instanceof RateLimitError) {
    c.header('Retry-After', err.retryAfter.toString());
    return c.json({ error: 'Rate Limit Exceeded', message: err.message }, 429);
  }
  if (err instanceof InsufficientCreditsError) {
    return c.json({ error: 'Payment Required', message: err.message }, 402);
  }
  return c.json({ error: 'Internal Server Error' }, 500);
});
```

## Full Example
```ts
import { Hono } from 'hono';
import {
  init,
  auth,
  rateLimit,
  AuthError,
  RateLimitError,
  InsufficientCreditsError,
  type AuthContext,
  type MiraContext,
} from '@mira.network/gatekeeper';

type Env = {
  CF_ACCOUNT_ID: string;
  CF_GATEWAY_ID: string;
  CF_AI_GATEWAY_TOKEN: string;
  CONSOLE_SERVICE: Service;
  RATE_LIMIT_KV: KVNamespace;
};

const app = new Hono<{
  Bindings: Env;
  Variables: { auth: AuthContext; mira: MiraContext };
}>();

// Initialize Mira context (traceId + OpenAI client)
app.use('/*', init((c) => ({
  aiGateway: {
    accountId: c.env.CF_ACCOUNT_ID,
    gatewayId: c.env.CF_GATEWAY_ID,
    token: c.env.CF_AI_GATEWAY_TOKEN,
  },
})));

// Global error handler
app.onError((err, c) => {
  if (err instanceof AuthError) {
    return c.json({ error: err.message }, 401);
  }
  if (err instanceof RateLimitError) {
    return c.json({ error: err.message, resetAt: err.resetAt }, 429);
  }
  if (err instanceof InsufficientCreditsError) {
    return c.json({ error: 'Insufficient credits', balance: err.balance }, 402);
  }
  console.error(err);
  return c.json({ error: 'Internal error' }, 500);
});

// Health check (no auth required)
app.get('/health', (c) => c.json({ status: 'ok' }));

// Protected routes - auth + rate limit
app.use('/v1/*', auth({ serviceId: 'verify' }));
app.use('/v1/*', rateLimit({ rpm: 60, rpd: 1000 }));

app.post('/v1/chat', async (c) => {
  const { traceId, openai } = c.get('mira');
  const { keyId, appId } = c.get('auth');

  // Use pre-configured OpenAI client
  // All requests automatically include traceId + auth metadata in AI Gateway logs
  const completion = await openai.chat.completions.create({
    model: 'openai/gpt-4o-mini', // AI Gateway unified model format
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  // Report usage back to Console Service
  await c.env.CONSOLE_SERVICE.reportUsage(
    appId,
    'verify',
    [{
      model: 'openai/gpt-4o-mini',
      promptTokens: completion.usage?.prompt_tokens ?? 0,
      completionTokens: completion.usage?.completion_tokens ?? 0,
    }],
    keyId,
    `Chat: ${traceId}`
  );

  return c.json({ result: completion.choices[0].message.content });
});

export default app;
```

## Development
```bash
cd gatekeeper
npm install
npm run build
```

## Publishing

```bash
npm run build
npm publish
```