IronaAI Node SDK
This library provides convenient access to the IronaAI's model-routing API from TypeScript or JavaScript. We help you select the best AI model for your specific use case, optimizing for factors like cost, latency, or performance.
Installation
```shell
npm install ironaai
```

Quick Start
To use the API, you need to sign up for an IronaAI account and obtain an API key. Sign up here.
Basic Usage
Here's a simple example of how to use IronaAI's model-routing to select the best model between GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro, while optimizing for latency and outputting the raw text:
```typescript
import { IronaAI } from 'ironaai';

const ironaAI = new IronaAI({
  // Optional - automatically loads from the environment variable
  apiKey: process.env.IRONAAI_API_KEY,
});

async function basicExample() {
  // 1. Select the best model and run the completion
  const result = await ironaAI.completions.create({
    // Define the user's message
    messages: [{ content: 'What is the golden ratio?', role: 'user' }],
    // Specify the LLM providers and models to choose from
    llmProviders: [
      { provider: 'openai', model: 'gpt-4o-2024-05-13' },
      { provider: 'anthropic', model: 'claude-3-5-sonnet-20240620' },
      { provider: 'google', model: 'gemini-1.5-pro-latest' },
    ],
    // Set the optimization criterion to latency
    tradeoff: 'latency',
  });

  // 2. Handle potential errors
  if ('error' in result) {
    console.error('Error:', result.error);
    return;
  }

  // 3. Log the results
  // Display the text response
  console.log('LLM output:', result.content);
  // Display the selected provider(s)
  console.log('Selected providers:', result.providers);
}

basicExample();
```

Gateway Support
IronaAI works with any OpenAI-compatible gateway. When a gateway is configured, all LLM calls route through it instead of individual provider APIs — no provider-specific API keys needed.
Supported Gateways
| Gateway | Base URL | Model format | includeProviderInModelName |
| ------------------------------------ | ------------------------------- | ---------------- | ---------------------------- |
| OpenRouter | https://openrouter.ai/api/v1 | provider/model | true (default) |
| Requesty | https://router.requesty.ai/v1 | provider/model | true (default) |
| LLM Gateway | https://api.llmgateway.io/v1 | raw model name | false |
Any other OpenAI-compatible gateway works the same way — just set the base URL and API key.
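As a sketch of the raw-model-name case in the table above, a gateway config for LLM Gateway might look like the following. Note that the `includeProviderInModelName` field name is inferred from the table column and env variable, so verify it against the SDK's exported types:

```typescript
// Hypothetical gateway config for a gateway that expects raw model names
// (e.g., LLM Gateway). Field names mirror the table/env vars above;
// check the SDK's types for the exact property names.
const gatewayConfig = {
  baseUrl: 'https://api.llmgateway.io/v1',
  apiKey: process.env.LLM_GATEWAY_API_KEY ?? 'your-gateway-api-key',
  includeProviderInModelName: false, // send 'gpt-4o-mini' instead of 'openai/gpt-4o-mini'
};

console.log(gatewayConfig.includeProviderInModelName); // false
```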
Configuration
Via environment variables (simplest):
```shell
LLM_GATEWAY_BASE_URL='https://router.requesty.ai/v1'
LLM_GATEWAY_API_KEY='your-gateway-api-key'
LLM_GATEWAY_INCLUDE_PROVIDER_IN_MODEL_NAME='true' # set 'false' for gateways that expect raw model names
```

Via config object:
```typescript
import { IronaAI } from 'ironaai';

const ironaAI = await IronaAI.createInstance({
  apiKey: process.env.IRONAAI_API_KEY,
  gateway: {
    baseUrl: 'https://router.requesty.ai/v1',
    apiKey: process.env.LLM_GATEWAY_API_KEY!,
  },
});
```

OpenRouter with optional headers:
```typescript
const ironaAI = await IronaAI.createInstance({
  apiKey: process.env.IRONAAI_API_KEY,
  gateway: {
    baseUrl: 'https://openrouter.ai/api/v1',
    apiKey: process.env.OPENROUTER_API_KEY!,
    headers: {
      'HTTP-Referer': 'https://your-app.example',
      'X-Title': 'Your App Name',
    },
  },
});
```

Notes
- If `gateway` is set, provider-specific API keys (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.) are not required.
- If `gateway` is not set, the SDK uses provider-specific API keys as before.
- OpenRouter-specific env fallbacks are also supported: `OPENROUTER_BASE_URL`, `OPENROUTER_API_KEY`, `OPENROUTER_HTTP_REFERER`, `OPENROUTER_X_TITLE`.
- Model name format:
  - `LLM_GATEWAY_INCLUDE_PROVIDER_IN_MODEL_NAME=true` (default) sends `openai/gpt-4o-mini` (works for OpenRouter and Requesty).
  - `LLM_GATEWAY_INCLUDE_PROVIDER_IN_MODEL_NAME=false` sends `gpt-4o-mini` (works for LLM Gateway and other gateways expecting raw model names).
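The model-name rule in the last bullet can be sketched as a small helper (an illustration only, not an SDK function):

```typescript
// Illustrative only: how a gateway-bound model name is assembled
// depending on the include-provider flag described above.
function formatModelName(
  provider: string,
  model: string,
  includeProvider: boolean,
): string {
  return includeProvider ? `${provider}/${model}` : model;
}

console.log(formatModelName('openai', 'gpt-4o-mini', true));  // "openai/gpt-4o-mini" (OpenRouter, Requesty)
console.log(formatModelName('openai', 'gpt-4o-mini', false)); // "gpt-4o-mini" (LLM Gateway)
```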
Build & Test Instructions

To build and test the package locally without publishing to npm, use the shortcut command:

```shell
npm run eg-test
```

This runs the following steps in one go:

```shell
npm run build
npm link              # create a local symlink for the ironaai package
cd example            # go to the example scripts
npm link ironaai      # use the linked local package (equivalent to `npm install ironaai` for local testing)
```

For published versions, install from npm instead (requires the SDK to have been published via `npm publish`):

```shell
npm install ironaai
```
Ref blog link.
Publish Package to npm
Publishing uses OIDC trusted publishing — no npm tokens are needed. The GitHub Actions workflow authenticates directly with npm via OpenID Connect.
Prerequisites (one-time setup)
- npm trusted publisher configured on npmjs.com/package/ironaai/access (GitHub org: `Irona-ai`, repo: `irona-node-sdk`, workflow: `publish.yml`, environment: `npm`)
- GitHub environment `npm` created in repo Settings > Environments, with branch policy restricted to `main`
Option A: Manual publish
- Update the version in `package.json`
- Build the package: `npm run build`
- Verify what will be published: `npm publish --dry-run`
- Publish to npm (must be logged in via `npm login`): `npm publish`
Option B: Automated publish via GitHub Release
- Update the version in `package.json`
- Commit and push changes to `main`
- Create a GitHub Release with a tag matching the version:

```shell
gh release create v0.0.23 --title "v0.0.23" --notes "Release notes here" --target main
```

Or via the GitHub UI: go to Releases > "Create a new release" > enter the tag (e.g., `v0.0.23`) > Publish.
The release triggers the CI workflow (.github/workflows/publish.yml) which builds and publishes to npm automatically with provenance.
Troubleshooting
- `NODE_AUTH_TOKEN` must NOT be set — even an empty string prevents OIDC from working. The workflow intentionally omits it.
- `repository.url` in `package.json` must exactly match the GitHub repo URL (case-sensitive) for provenance validation.
- npm version: OIDC trusted publishing requires npm >= 11.5.1. The workflow upgrades npm automatically before publishing.
Key Concepts
- `llmProviders`: An array of AI providers and models for LLM routing to choose between.
- `tradeoff`: The factor to optimize for (e.g., 'latency', 'cost', 'performance').
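For example, the Quick Start request could be re-targeted at cost instead of latency. This is a sketch of the payload shape only, mirroring the earlier example:

```typescript
// Hypothetical request payload optimizing for cost rather than latency,
// mirroring the Quick Start example above.
const request = {
  messages: [{ role: 'user', content: 'Summarize the golden ratio in one sentence.' }],
  llmProviders: [
    { provider: 'openai', model: 'gpt-4o-2024-05-13' },
    { provider: 'anthropic', model: 'claude-3-5-sonnet-20240620' },
  ],
  tradeoff: 'cost', // optimize for price instead of latency
};

console.log(request.tradeoff); // "cost"
```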
Error Handling

IronaAI uses typed responses. If there's an error, the response will have an `error` property with the error message. Always check for this property when handling responses.
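That check can be factored into a small helper. The result type below is a sketch inferred from the Quick Start example (`content`, `providers`, `error`), not the SDK's actual exported type:

```typescript
// Sketch of the typed-response pattern: a result is either a success
// payload or an object carrying an `error` message.
type CompletionResult =
  | { content: string; providers: { provider: string; model: string }[] }
  | { error: string };

function handleResult(result: CompletionResult): string {
  if ('error' in result) {
    // Surface the error instead of reading `content` on a failed call
    return `failed: ${result.error}`;
  }
  return result.content;
}

console.log(handleResult({ error: 'rate limited' })); // "failed: rate limited"
console.log(handleResult({ content: 'Hello!', providers: [] })); // "Hello!"
```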
Model pricing is picked up from the `SUPPORTED_MODELS_URL` environment variable, if available.
Support
If you encounter any issues or have questions, please open an issue on our GitHub repository or email us at [email protected].
License
This library is released under the MIT License.
