# Verified LLM Gateway
A unified HTTP gateway for querying multiple LLM providers (OpenAI, Claude, Gemini, Perplexity) with structured logging for verified execution environments.
## Overview
This service allows callers to send requests to various LLMs through a single API endpoint. It's designed for deployment in verified execution environments where logs are publicly visible, enabling verification of what was sent to LLMs and what was returned.
## Supported Providers
- OpenAI - GPT-4, GPT-4o, etc.
- Claude - Anthropic's Claude models
- Gemini - Google's Gemini models
- Perplexity - Perplexity AI models
## Using as an NPM Module
Install the package:
```bash
npm install verified-llm
```

### Option 1: Start Server Programmatically

```typescript
import { startServer } from 'verified-llm';
// Start the server with custom options
const server = await startServer({
port: 3000,
host: '0.0.0.0',
logger: true,
});
// Server is now running on http://localhost:3000
// To stop the server:
await server.close();
```
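In a containerized deployment you will usually want to close the server when the container is stopped. A minimal sketch using only the `startServer`/`close` API shown above; the signal handling is an assumption about your runtime, not part of the package:

```typescript
import { startServer } from 'verified-llm';

const server = await startServer({ port: 3000, host: '0.0.0.0' });

// Shut down cleanly when the container is stopped
// (e.g. `docker stop` sends SIGTERM).
process.on('SIGTERM', async () => {
  await server.close();
  process.exit(0);
});
```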
### Option 2: Get Fastify App for Custom Integration

```typescript
import { buildApp } from 'verified-llm';
const app = await buildApp({ port: 3000 });
// Add your own routes or middleware
app.get('/custom', async () => ({ message: 'Custom route' }));
// Start listening
await app.listen({ port: 3000, host: '0.0.0.0' });
```

### Option 3: Use Core Library Only

```typescript
import { LLMService, llmService } from 'verified-llm/lib';
// Use the singleton instance
const response = await llmService.handleRequest({
provider: 'openai',
apiKey: 'your-api-key',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.content);
```
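Because every provider goes through the same `handleRequest` shape, you can fan a single prompt out to several providers and compare the answers. A sketch, assuming API keys are available in environment variables; the Claude model name is a placeholder, substitute one you have access to:

```typescript
import { llmService } from 'verified-llm/lib';

// Assumed provider/model pairs for illustration.
const targets = [
  { provider: 'openai', model: 'gpt-4o', apiKey: process.env.OPENAI_API_KEY! },
  { provider: 'claude', model: 'claude-3-5-sonnet-latest', apiKey: process.env.ANTHROPIC_API_KEY! },
] as const;

const messages = [{ role: 'user' as const, content: 'Hello!' }];

// Send the same messages to every provider in parallel.
const answers = await Promise.all(
  targets.map(({ provider, model, apiKey }) =>
    llmService.handleRequest({ provider, model, apiKey, messages }),
  ),
);

answers.forEach((r, i) => console.log(`${targets[i].provider}: ${r.content}`));
```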
### Available Exports

From `verified-llm`:

- `startServer(options?)` - Start the HTTP server
- `buildApp(options?)` - Get the Fastify instance
- All exports from `verified-llm/lib`
From `verified-llm/lib`:

- `LLMService`, `llmService` - Service class and singleton instance
- `OpenAIProvider`, `ClaudeProvider`, `GeminiProvider`, `PerplexityProvider` - Provider implementations
- `QueryRequestSchema`, `QueryResponseSchema`, `MessageSchema` - Zod validation schemas
- `logLLMRequest`, `logLLMError` - Logging utilities
- Type exports: `LLMRequest`, `LLMResponse`, `LLMProvider`, `Message`, etc.
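The exported Zod schemas can validate a payload before it reaches the service. A sketch, assuming `QueryRequestSchema` mirrors the `POST /query` body documented below:

```typescript
import { QueryRequestSchema } from 'verified-llm/lib';

const parsed = QueryRequestSchema.safeParse({
  provider: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

if (!parsed.success) {
  // Zod reports exactly which fields failed validation.
  console.error(parsed.error.flatten());
} else {
  console.log('valid request:', parsed.data);
}
```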
## API Reference
### POST /query
Send a request to an LLM provider.
Headers:
- `x-api-key` (required) - API key for the target provider
Body:

```json
{
"provider": "openai",
"model": "gpt-4o",
"messages": [
{ "role": "user", "content": "Hello!" }
],
"temperature": 0.7,
"maxTokens": 1000
}
```

Response:

```json
{
"content": "Hello! How can I help you today?",
"metadata": {
"provider": "openai",
"model": "gpt-4o",
"usage": {
"promptTokens": 10,
"completionTokens": 12,
"totalTokens": 22
}
}
}
```
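Putting the pieces together, calling the endpoint from any HTTP client looks like this. A sketch using `fetch`, assuming the gateway is running on localhost:3000:

```typescript
const res = await fetch('http://localhost:3000/query', {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    // API key for the target provider (here: OpenAI), not for the gateway.
    'x-api-key': process.env.OPENAI_API_KEY!,
  },
  body: JSON.stringify({
    provider: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 0.7,
    maxTokens: 1000,
  }),
});

const { content, metadata } = await res.json();
console.log(content, metadata.usage);
```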
### GET /health

Health check endpoint. Returns `{ "status": "ok" }`.
### GET /docs
Swagger UI documentation.
## Development
### Setup & Local Testing

```bash
npm install
cp .env.example .env
npm run dev
```

### Docker Testing

```bash
docker build -t my-app .
docker run --rm --env-file .env my-app
```

### Running Tests

```bash
npm test
```

### Building

```bash
npm run build
```

## Prerequisites
Before deploying, you'll need:
- Docker - To package and publish your application image
- ETH - To pay for deployment transactions
## Deployment

```bash
ecloud compute app deploy username/image-name
```

The CLI will automatically detect the Dockerfile and build your app before deploying.
## Management & Monitoring

```bash
ecloud compute app list # List all apps
ecloud compute app info [app-name] # Get app details
ecloud compute app logs [app-name] # View logs
ecloud compute app start [app-name] # Start stopped app
ecloud compute app stop [app-name] # Stop running app
ecloud compute app terminate [app-name] # Terminate app
ecloud compute app upgrade [app-name] [image]  # Update deployment
```

## Architecture

```
src/
├── index.ts # Entry point, exports startServer()
├── app.ts # Fastify app configuration
├── config/
│ └── env.ts # Environment configuration
├── lib/ # Core library (npm packageable)
│ ├── index.ts # Library exports
│ ├── types.ts # TypeScript interfaces
│ ├── schemas.ts # Zod validation schemas
│ ├── service.ts # LLMService orchestration
│ ├── logger.ts # Structured logging
│ └── providers/ # LLM provider adapters
│ ├── openai.ts
│ ├── claude.ts
│ ├── gemini.ts
│ └── perplexity.ts
└── routes/
    └── query.ts            # POST /query endpoint
```
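The adapters under `src/lib/providers/` are the extension point for new backends. As an illustration only, since the adapter interface is not documented here and the shape below is an assumption, a new provider might look like:

```typescript
import type { LLMRequest, LLMResponse } from 'verified-llm/lib';

// Hypothetical adapter: one method that maps the gateway's request
// format onto a provider's HTTP API and back. The endpoint URL and
// response fields are placeholders, not a real service.
export class ExampleProvider {
  async query(req: LLMRequest): Promise<LLMResponse> {
    const res = await fetch('https://api.example.com/v1/chat', {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        authorization: `Bearer ${req.apiKey}`,
      },
      body: JSON.stringify({ model: req.model, messages: req.messages }),
    });
    const data = await res.json();
    return {
      content: data.text,
      metadata: { provider: 'example', model: req.model, usage: data.usage },
    };
  }
}
```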