@axonsdk/inference
v0.1.6
OpenAI-compatible inference endpoint that automatically routes requests to the fastest, cheapest available compute backend.
OpenAI-compatible inference routing for AxonSDK. Routes chat completion requests across decentralized compute providers (io.net, Akash, Acurast) with automatic failover and latency-aware routing.
Installation
npm install @axonsdk/inference

Quick Start
import { AxonInferenceHandler } from '@axonsdk/inference';
const handler = new AxonInferenceHandler({
  apiKey: process.env.AXON_SECRET_KEY!,
  ionetEndpoint: process.env.IONET_INFERENCE_URL,
  akashEndpoint: process.env.AKASH_INFERENCE_URL,
  strategy: 'latency', // or 'cost'
});
// Next.js App Router
export const POST = (req: Request) => handler.handleRequest(req);
export const GET = (req: Request) => handler.handleRequest(req);

Environment Variables
| Variable | Description |
|---|---|
| IONET_INFERENCE_URL | io.net inference endpoint |
| AKASH_INFERENCE_URL | Akash inference endpoint |
| ACURAST_WS_URL | Acurast WebSocket endpoint |
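Since the handler exposes an OpenAI-compatible surface, any OpenAI-style chat completion payload should work against the deployed route. A minimal sketch of such a client call follows; the model id and route URL are illustrative assumptions, not values defined by this package.

```typescript
// Illustrative OpenAI-compatible request body. The model id below is an
// assumption — use whatever models your configured backends actually serve.
const payload = {
  model: "llama-3.1-70b-instruct", // assumed model id
  messages: [{ role: "user", content: "Hello" }],
  stream: false,
};

// In an app you would POST this to your deployed route, e.g.:
// await fetch("https://your-app.example/api/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// });

console.log(JSON.stringify(payload));
```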
Routing Strategies
latency (default) — picks the provider with the lowest exponential moving average (EMA) of response time
cost — prefers providers in fixed cost order: io.net → Akash → Acurast
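The latency strategy described above can be sketched as follows. The smoothing factor, provider names, and function names here are illustrative assumptions, not the package's actual internals.

```typescript
// Sketch of EMA-based latency routing. ALPHA is an assumed smoothing factor.
type Provider = { name: string; ema: number };

const ALPHA = 0.3;

// Update a provider's EMA after observing a response time in milliseconds.
function updateEma(p: Provider, observedMs: number): void {
  p.ema = ALPHA * observedMs + (1 - ALPHA) * p.ema;
}

// 'latency' strategy: pick the provider with the lowest EMA.
function pickByLatency(providers: Provider[]): Provider {
  return providers.reduce((best, p) => (p.ema < best.ema ? p : best));
}

const providers: Provider[] = [
  { name: "io.net", ema: 120 },
  { name: "akash", ema: 95 },
  { name: "acurast", ema: 200 },
];

updateEma(providers[1], 300); // akash slows down: EMA ≈ 0.3*300 + 0.7*95 ≈ 156.5
console.log(pickByLatency(providers).name); // io.net now has the lowest EMA
```

A slow response raises a provider's EMA, so routing shifts away from it gradually rather than on a single outlier, while fast responses pull traffic back.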
License
Apache-2.0
