nexusify-sdk
v1.0.3
Official Node.js SDK for the Nexusify API — unified access to 84 AI models including GPT, Gemini, Grok, Llama, Mistral, DeepSeek, and more
nexusify
Official Node.js SDK for the Nexusify API — unified access to 84 AI models including GPT-5, Gemini, Grok, Llama, Mistral, DeepSeek, and more, through a single OpenAI-compatible interface.
- Features
- Installation
- Getting an API Key
- Quick Start
- Client Configuration
- Chat Completions
- Responses API
- Text Generate (Legacy)
- Image Generation
- Models
- Error Handling
- Model List (84 Models)
- Image Model Pricing
- Credits & Billing
- Rate Limits
Features
- 84 AI text models from OpenAI, Google, xAI, Meta, Mistral, DeepSeek, Cohere, and more
- 5 image generation models including Flux, Klein, and GPT Image
- 100% OpenAI-compatible — drop-in replacement for existing OpenAI SDK integrations
- Streaming support for all text endpoints via async generators
- Vision support — send images alongside text to multimodal models
- Tool calling / function calling on supported models
- Reasoning models with chain-of-thought output (Grok, DeepSeek, Qwen, and more)
- Image auto-download — save generated images directly to disk
- Full TypeScript support with strict types for all inputs and outputs
- CommonJS, ESM, and TypeScript compatible out of the box
Installation
```bash
npm install nexusify-sdk
# or
yarn add nexusify-sdk
# or
pnpm add nexusify-sdk
```

Getting an API Key
- Visit nexusify.co and click Start building.
- Sign in via the portal at login.nexusify.co using Google, Discord, or CAPTCHA.
- Open your dashboard at dash.nexusify.co/dashboard.
- Scroll to API Credentials and click Copy Secret Key.
New accounts receive $25 in free credits every week, renewed automatically every Monday — no credit card required.
Security notice: Never expose your API key in client-side code, browser extensions, or public repositories. Always use environment variables and server-side requests.
```bash
export NEXUS_API_KEY="your_api_key_here"
```

Quick Start
CommonJS
```js
const { Nexusify } = require("nexusify-sdk");

const client = new Nexusify({ apiKey: process.env.NEXUS_API_KEY });

async function main() {
  const completion = await client.chat.completions.create({
    model: "gemini-2.5-flash",
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```

ESM
```js
import Nexusify from "nexusify-sdk";

const client = new Nexusify({ apiKey: process.env.NEXUS_API_KEY });

const completion = await client.chat.completions.create({
  model: "gemini-2.5-flash",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```

TypeScript
```ts
import Nexusify, { type ChatCompletion } from "nexusify-sdk";

const client = new Nexusify({ apiKey: process.env.NEXUS_API_KEY as string });

const completion: ChatCompletion = await client.chat.completions.create({
  model: "gemini-2.5-flash",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0]?.message.content);
```

Client Configuration
```js
const client = new Nexusify({
  apiKey: "your_api_key",                // Required
  baseURL: "https://api.nexusify.co/v1", // Optional. Default shown.
  timeout: 60000,                        // Optional. Request timeout in ms. Default: 60000
  maxRetries: 2,                         // Optional. Reserved for future use.
});
```

Chat Completions
The primary endpoint — fully compatible with the OpenAI Chat Completions format.
Endpoint: POST /v1/chat/completions
Basic Request
```js
const completion = await client.chat.completions.create({
  model: "gemini-2.5-flash",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain black holes in simple terms." },
  ],
  temperature: 0.7,
  max_tokens: 300,
});

console.log(completion.choices[0].message.content);
console.log(completion.usage.total_tokens);
```

Parameters:
| Name | Type | Required | Description |
|------|------|----------|-------------|
| model | NexusifyModel | Yes | Model ID, e.g. "gemini-2.5-flash". |
| messages | Message[] | Yes | Array of { role, content } objects. |
| temperature | number | No | Sampling temperature 0.0–2.0. Default: 0.7. |
| max_tokens | number | No | Maximum tokens to generate. |
| top_p | number | No | Nucleus sampling threshold. |
| stop | string \| string[] | No | Stop sequences. |
| tools | Tool[] | No | Tool/function definitions. |
| tool_choice | ToolChoice | No | "auto", "none", or { type: "function", function: { name } }. |
| stream | boolean | No | Enable streaming via SSE. |
Streaming Chat
When stream: true, the method returns an async generator that yields ChatCompletionChunk objects.
```js
const stream = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "Write a poem about the ocean." }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```

Vision (Image Input)
Send images alongside text to any model marked with the 🖼️ Vision capability.
```js
const completion = await client.chat.completions.create({
  model: "gemini-3-flash-preview",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What is in this image?" },
        { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
      ],
    },
  ],
});
```

To send a local image, encode it as a base64 data URI:
```js
import { readFileSync } from "fs";

const imageData = readFileSync("./photo.jpg").toString("base64");

const completion = await client.chat.completions.create({
  model: "kimi-k2.5",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image." },
        { type: "image_url", image_url: { url: `data:image/jpeg;base64,${imageData}` } },
      ],
    },
  ],
});
```

Graceful degradation: If you send an image to a model that does not support vision (e.g. deepseek-v3.2), the image content is silently dropped and only the text portion is forwarded. No error is returned.
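If you build the data URI by hand in several places, the conversion is a one-liner over a Buffer. This helper is not part of the SDK; it is just a local convenience sketch:

```js
// Convert raw image bytes to a base64 data URI suitable for image_url.url.
// Not part of nexusify-sdk; a small local convenience helper.
function toDataUri(bytes, mimeType = "image/jpeg") {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
}

// Usage: toDataUri(readFileSync("./photo.jpg"))
```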
Tool Calling
Supported on models tagged 🔧 in the model list, including kimi-k2, kimi-k2.5, minimax-m2, and gemini-3-flash-preview.
```js
const completion = await client.chat.completions.create({
  model: "kimi-k2",
  messages: [{ role: "user", content: "What is the weather in Madrid?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: {
            city: { type: "string", description: "The city name" },
          },
          required: ["city"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

const message = completion.choices[0].message;
if (message.tool_calls) {
  const call = message.tool_calls[0];
  console.log("Function:", call.function.name);
  console.log("Args:", JSON.parse(call.function.arguments));
}
```

Graceful degradation: If you pass `tools` to a model that does not support tool calling, the parameter is silently dropped and the model falls back to plain text generation.
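After the model returns tool_calls, you typically execute each function locally and send the results back as role: "tool" messages. The dispatcher below is a sketch of that plumbing; the handlers map and message shape follow the OpenAI convention, and the function is not an SDK built-in:

```js
// Run each tool call against a map of local handlers and build the
// role: "tool" messages to append to the conversation. Hypothetical helper.
function dispatchToolCalls(toolCalls, handlers) {
  return toolCalls.map((call) => {
    const handler = handlers[call.function.name];
    const args = JSON.parse(call.function.arguments);
    const result = handler
      ? handler(args)
      : { error: `unknown tool: ${call.function.name}` };
    return {
      role: "tool",
      tool_call_id: call.id,
      content: JSON.stringify(result),
    };
  });
}
```

Append the returned messages after the assistant message that contained tool_calls, then call chat.completions.create again so the model can use the results.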
Responses API
The modern OpenAI-style Responses API returns a structured output array with typed items, making it ideal for agentic workflows and multi-step applications.
Endpoint: POST /v1/responses
Responses Basic Request
```js
const response = await client.responses.create({
  model: "grok-4",
  input: "What is the difference between TCP and UDP?",
  instructions: "You are a concise networking expert.",
  max_output_tokens: 200,
  temperature: 0.5,
});

console.log(client.responses.getOutputText(response));
```

Parameters:
| Name | Type | Required | Description |
|------|------|----------|-------------|
| model | NexusifyModel | Yes | Model ID. |
| input | string \| Message[] | Yes | A string prompt or an array of conversation messages. |
| instructions | string | No | System-level instruction prepended to the conversation. |
| max_output_tokens | number | No | Maximum tokens to generate. |
| temperature | number | No | Sampling temperature. Default: 1.0. |
| top_p | number | No | Nucleus sampling. Default: 1.0. |
| stream | boolean | No | Enable SSE streaming. |
| tools | Tool[] | No | Tool definitions. |
| tool_choice | ToolChoice | No | Tool selection strategy. |
| store | boolean | No | Whether to store the response. Default: true. |
| metadata | Record<string, string> | No | Arbitrary key/value pairs returned with the response. |
| previous_response_id | string | No | ID of a prior response to continue from. |
| user | string | No | End-user identifier for auditing. |
Helper methods:
```js
client.responses.getOutputText(response);
client.responses.getReasoningText(response);
```

Reasoning Models
Models such as grok-4-thinking, grok-4.1-thinking, and grok-3-thinking include a chain-of-thought trace in the response.
```js
const response = await client.responses.create({
  model: "grok-4-thinking",
  input: "Prove that the square root of 2 is irrational.",
});

const reasoning = client.responses.getReasoningText(response);
const answer = client.responses.getOutputText(response);

console.log("Reasoning:", reasoning);
console.log("Answer:", answer);
```

Streaming Responses
When stream: true, the method returns an async generator of ResponseStreamEvent objects, each with an event name and a data payload following the OpenAI Responses API event lifecycle.
```js
const stream = await client.responses.create({
  model: "grok-4.1-fast",
  input: "Write a haiku about winter.",
  stream: true,
});

for await (const event of stream) {
  if (event.event === "response.output_text.delta") {
    const delta = event.data?.delta;
    if (typeof delta === "string") process.stdout.write(delta);
  }
  if (event.event === "response.done") break;
}
```

Streaming event reference:
| Event | Description |
|-------|-------------|
| response.created | Fired immediately before token generation begins. |
| response.in_progress | Generation has started. |
| response.output_item.added | A new output item has been opened. |
| response.content_part.added | A content part has been opened inside an output item. |
| response.reasoning_text.delta | A reasoning trace chunk (reasoning models only). |
| response.output_text.delta | A text response chunk. |
| response.output_text.done | Full text response has been delivered. |
| response.content_part.done | Content part is closed. |
| response.output_item.done | Output item has been fully delivered. |
| response.completed | Full response is ready. Contains the final response object with usage. |
| response.done | Terminal event — stream closes after this. |
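One way to consume this lifecycle is to fold the delta events into a single accumulator, keeping the reasoning trace and the answer text separate. A minimal reducer over events shaped as in the table above (the function itself is not an SDK built-in):

```js
// Accumulate reasoning text, output text, and final usage from a
// sequence of { event, data } stream events.
function collectStream(events) {
  const acc = { reasoning: "", text: "", usage: null };
  for (const { event, data } of events) {
    if (event === "response.reasoning_text.delta") acc.reasoning += data.delta;
    if (event === "response.output_text.delta") acc.text += data.delta;
    if (event === "response.completed") acc.usage = data.response?.usage ?? null;
  }
  return acc;
}
```

The same logic works inside the `for await` loop shown earlier; this version takes any iterable so it is easy to unit-test against recorded events.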
Multi-turn Conversations
```js
const response = await client.responses.create({
  model: "gemini-2.5-flash",
  instructions: "You are a helpful assistant.",
  input: [
    { role: "user", content: "My name is Alex." },
    { role: "assistant", content: "Nice to meet you, Alex!" },
    { role: "user", content: "What is my name?" },
  ],
});

console.log(client.responses.getOutputText(response));
```

Text Generate (Legacy)
A simpler, flat alternative to Chat Completions. Pass a plain string as prompt and receive a plain string completion. Supports all 84 models, streaming, conversation history, and all generation parameters.
Endpoint: POST /v1/text/generate
Use this for quick prototypes and simple bots. For production integrations and OpenAI SDK compatibility, prefer `/chat/completions`.
Text Generate Basic Request
```js
const result = await client.text.generate({
  model: "deepseek-v3.2",
  prompt: "Explain how neural networks learn.",
  temperature: 0.7,
  max_tokens: 200,
});

console.log(result.completion);
console.log(result.usage.total_tokens);
```

Parameters:
| Name | Type | Required | Description |
|------|------|----------|-------------|
| model | NexusifyModel | Yes | Model ID. |
| prompt | string | * | Plain text prompt. Required if messages is not provided. |
| messages | Message[] | * | Full conversation history. Use instead of prompt for multi-turn. |
| systemInstruction | string | No | System-level instruction that defines model behavior. |
| temperature | number | No | Sampling temperature 0.0–2.0. Default: 0.7. |
| max_tokens | number | No | Maximum tokens to generate. Default: 300. |
| top_p | number | No | Nucleus sampling threshold. Default: 1.0. |
| stop | string \| string[] | No | Stop sequences. |
| stream | boolean | No | Stream response as SSE. Default: false. |
| userid | string | No | User ID for persistent conversation history. |
Streaming Text Generate
```js
const stream = await client.text.generate({
  model: "gpt-5-mini",
  prompt: "Write a short story about a robot learning to paint.",
  stream: true,
  max_tokens: 400,
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```

Persistent Conversation History
Pass a userid string to enable server-side memory. The API stores each exchange and automatically includes the history in subsequent requests with the same userid and model.
```js
await client.text.generate({
  model: "gpt-5-mini",
  prompt: "My name is Alice.",
  userid: "user_alice_001",
});

const second = await client.text.generate({
  model: "gpt-5-mini",
  prompt: "What is my name?",
  userid: "user_alice_001",
});
console.log(second.completion);

await client.text.deleteHistory("user_alice_001");
```

Image Generation
Generate images from text prompts using five state-of-the-art diffusion and synthesis models.
Endpoint: POST /v1/generate-image
Generated images are hosted temporarily and expire after 2 hours. Use the auto-download feature to save them locally.
Image Basic Request
```js
const image = await client.images.generate({
  prompt: "A cyberpunk city at night with neon lights, cinematic lighting",
  model: "flux",
  width: 1024,
  height: 1024,
});

console.log(image.imageUrl);
console.log(image.size);
console.log(image.expiresIn);
console.log(image.user.usageRemaining);
```

Parameters:
| Name | Type | Default | Description |
|------|------|---------|-------------|
| prompt | string | — | Text description of the image. Required. |
| model | NexusifyImageModel | "flux" | Image model ID. |
| width | number | 512 | Output width in pixels. Max 2048. |
| height | number | 512 | Output height in pixels. Max 2048. |
| saveTo | string | — | Directory path to save the image locally. |
| saveToRoot | boolean | — | If true, saves the image in the project root (process.cwd()). |
Auto-Download Images
The SDK can automatically download and save generated images without any extra setup.
Save to a specific folder:
```js
const image = await client.images.generate({
  prompt: "A minimalist logo, clean vector design, blue and white",
  model: "klein",
  width: 1024,
  height: 1024,
  saveTo: "./generated-images",
});

console.log("Remote URL:", image.imageUrl);
console.log("Local path:", image.localPath);
```

Save to the project root:
```js
const image = await client.images.generate({
  prompt: "A photorealistic mountain at dawn",
  model: "klein-large",
  saveToRoot: true,
});

console.log("Saved to:", image.localPath);
```

The local filename is generated automatically from a sanitized prompt slug and a timestamp, e.g. `nexusify-a-photorealistic-mountain-at-dawn-1735000000000.png`.
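The exact sanitization rules are internal to the SDK, but the scheme shown in the example filename can be approximated with a simple slugifier (assumption: lowercase, runs of non-alphanumerics collapsed to hyphens). Useful if you want to predict or reproduce the output path:

```js
// Approximate the SDK's filename scheme: nexusify-<prompt slug>-<timestamp>.png
// The real sanitization is internal; this mirrors the documented example.
function imageFilename(prompt, timestamp = Date.now()) {
  const slug = prompt
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `nexusify-${slug}-${timestamp}.png`;
}
```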
Models
List and Filter
The models resource exposes methods to browse, search, and filter the full catalog of available models.
```js
const all = client.models.listLocal();
console.log(`${all.data.length} models available`);

const model = client.models.findById("gemini-2.5-flash");
console.log(model?.provider, model?.pricing);

const visionModels = client.models.filterByCapability("vision");
const reasoningModels = client.models.filterByCapability("reasoning");
const codeModels = client.models.filterByCapability("code");
const fastModels = client.models.filterByCapability("fast");
const toolModels = client.models.filterByCapability("tools");

const xaiModels = client.models.filterByProvider("xAI");
const googleModels = client.models.filterByProvider("Google");
const mistralModels = client.models.filterByProvider("Mistral AI");

const imageModels = client.models.listImageModels();
console.log(imageModels);

const liveList = await client.models.list();
```

Error Handling
All errors thrown by the SDK extend the base NexusifyError class, which exposes a status (HTTP code), code (string), and raw (original response body).
```js
import {
  NexusifyError,
  AuthenticationError,
  InsufficientCreditsError,
  RateLimitError,
  BadRequestError,
  InternalServerError,
  StreamingError,
  TimeoutError,
} from "nexusify-sdk";

try {
  const completion = await client.chat.completions.create({
    model: "gemini-2.5-flash",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error("Invalid API key — get yours at https://dash.nexusify.co");
  } else if (error instanceof InsufficientCreditsError) {
    console.error("Not enough credits — top up at https://dash.nexusify.co");
  } else if (error instanceof RateLimitError) {
    console.error("Rate limit hit — max 130 requests/min");
  } else if (error instanceof BadRequestError) {
    console.error("Bad request:", error.message);
  } else if (error instanceof InternalServerError) {
    console.error("Server error — try again shortly");
  } else if (error instanceof TimeoutError) {
    console.error("Request timed out");
  } else if (error instanceof NexusifyError) {
    console.error(`[${error.status}] ${error.code}: ${error.message}`);
  }
}
```

Error classes:
| Class | HTTP Status | Code |
|-------|-------------|------|
| BadRequestError | 400 | bad_request |
| AuthenticationError | 401 | authentication_error |
| InsufficientCreditsError | 402 | insufficient_credits |
| RateLimitError | 429 | rate_limit_exceeded |
| InternalServerError | 500 | internal_server_error |
| StreamingError | — | streaming_error |
| TimeoutError | — | timeout_error |
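When all you have is a raw status code (from logs or a proxy, say), the table above maps directly to a small classifier. A sketch, assuming that 429 and 500 are the statuses worth retrying (this helper is illustrative, not an SDK export):

```js
// Map an HTTP status to the SDK error code from the table above, and
// flag which failures are sensibly retried. Hypothetical helper.
function classifyStatus(status) {
  const codes = {
    400: "bad_request",
    401: "authentication_error",
    402: "insufficient_credits",
    429: "rate_limit_exceeded",
    500: "internal_server_error",
  };
  return {
    code: codes[status] ?? "unknown_error",
    retryable: status === 429 || status === 500,
  };
}
```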
Model List
All 84 text models available through the Nexusify API.
Capability tags: 🖼️ Vision · 🔧 Tools · 🧠 Reasoning · ⚡ Fast · 💻 Code
| Provider | Model ID | Capabilities | Input ($/1M) | Output ($/1M) |
|----------|----------|-------------|-------------|--------------|
| Meta AI | llama-4-maverick-17b-128e-instruct | — | $0.20 | $0.60 |
| Meta AI | llama-3.2-90b-vision-instruct | 🖼️ | $0.35 | $0.40 |
| Meta AI | llama-3.1-405b-instruct | — | $5.00 | $15.00 |
| Meta AI | llama-3.3-70b-instruct | — | $0.40 | $0.60 |
| Mistral AI | mistral-small-24b-instruct | 💻 | $0.10 | $0.30 |
| Mistral AI | mixtral-8x22b | — | $0.90 | $0.90 |
| Mistral AI | mistral-small-3.1 | 🖼️ 🔧 | $0.10 | $0.30 |
| Mistral AI | mistral-small-3.2 | 🔧 | $0.10 | $0.30 |
| Mistral AI | mistral-medium-3 | 🖼️ | $0.40 | $1.20 |
| Mistral AI | mistral-nemotron | 🔧 | $0.20 | $0.60 |
| Mistral AI | magistral-small | 🧠 | $0.50 | $1.50 |
| Mistral AI | devstral-2 | 💻 | $0.70 | $2.00 |
| Mistral AI | mamba-codestral | 💻 | $0.15 | $0.30 |
| Mistral AI | mistral-large-3 | 🖼️ 🔧 | $2.00 | $6.00 |
| Mistral AI | ministral-3-8b | 🖼️ | $0.10 | $0.10 |
| Mistral AI | ministral-3-14b | 🖼️ | $0.15 | $0.15 |
| Alibaba | qwen3-235b-a22b | — | $0.20 | $0.60 |
| Alibaba | qwen2.5-coder-32b | 💻 | $0.30 | $0.50 |
| Alibaba | qwen3-next-80b | 🧠 | $0.40 | $1.50 |
| Alibaba | qwen3.5 | 🧠 | $1.30 | $5.50 |
| Alibaba | qwen3-coder | 💻 | $0.85 | $4.00 |
| Alibaba | qwen3-vl | 🖼️ | $0.50 | $2.00 |
| Alibaba | qwen3-coder-next | 💻 | $0.33 | $0.85 |
| Google | gemini-2.5-flash | 🖼️ ⚡ | $0.15 | $0.60 |
| Google | gemini-2.5-flash-lite | ⚡ | $0.08 | $0.30 |
| Google | gemini-2.0-flash | — | $0.10 | $0.40 |
| Google | gemini-2.0-flash-lite | — | $0.08 | $0.30 |
| Google | gemma-7b | — | $0.07 | $0.07 |
| Google | gemma-2-9b | — | $0.07 | $0.07 |
| Google | gemini-3-flash-preview | 🖼️ 🔧 | $0.15 | $0.60 |
| OpenAI | gpt-4 | — | $30.00 | $60.00 |
| OpenAI | gpt-4o-mini | ⚡ | $0.15 | $0.60 |
| OpenAI | gpt-5-nano | ⚡ | $0.15 | $0.60 |
| OpenAI | gpt-5-mini | — | $0.40 | $1.60 |
| OpenAI | gpt-5 | — | $1.50 | $6.00 |
| OpenAI | gpt-5-codex | 💻 | $2.50 | $10.00 |
| OpenAI | gpt-5-codex-mini | 💻 | $0.60 | $2.50 |
| OpenAI | gpt-5.1 | — | $2.00 | $8.00 |
| OpenAI | gpt-5.1-codex | 💻 | $3.00 | $12.00 |
| OpenAI | gpt-5.1-codex-max | 💻 | $5.00 | $20.00 |
| OpenAI | gpt-5.1-codex-mini | 💻 | $0.80 | $3.00 |
| OpenAI | gpt-5.2 | — | $3.00 | $12.00 |
| OpenAI | gpt-5.2-codex | 💻 | $4.00 | $16.00 |
| OpenAI | gpt-5.3-codex | 💻 | $5.00 | $20.00 |
| OpenAI | gpt-5.4 | — | $6.00 | $24.00 |
| Open Source | gpt-oss-120b | — | $1.50 | $4.00 |
| Open Source | gpt-oss-20b | — | $0.30 | $0.80 |
| xAI | grok-3 | — | $3.00 | $15.00 |
| xAI | grok-3-mini | ⚡ | $0.30 | $0.50 |
| xAI | grok-3-thinking | 🧠 | $1.00 | $5.00 |
| xAI | grok-4 | 🧠 | $5.00 | $20.00 |
| xAI | grok-4-thinking | 🧠 | $3.00 | $15.00 |
| xAI | grok-4.1-mini | ⚡ | $0.40 | $1.20 |
| xAI | grok-4.1-fast | ⚡ | $1.00 | $3.00 |
| xAI | grok-4.1-expert | 🧠 | $3.00 | $12.00 |
| xAI | grok-4.1-thinking | 🧠 | $2.00 | $8.00 |
| xAI | grok-4.20-beta | 🧠 | $2.00 | $6.00 |
| DeepSeek | deepseek-v3.2 | 🧠 | $0.27 | $1.10 |
| DeepSeek | deepseek-v3.1 | 🧠 | $0.27 | $1.10 |
| Moonshot | kimi-k2 | 🔧 | $0.60 | $2.50 |
| Moonshot | kimi-k2.5 | 🖼️ 🧠 🔧 | $0.80 | $3.00 |
| MiniMax | minimax-m2 | 🔧 | $0.80 | $2.00 |
| MiniMax | minimax-m2.1 | — | $1.00 | $3.00 |
| MiniMax | minimax-m2.5 | — | $0.30 | $1.20 |
| NVIDIA | nemotron-3-nano | — | $0.05 | $0.10 |
| NVIDIA | nemotron-super-49b-v1.5 | — | $0.30 | $1.00 |
| Zhipu AI | glm-4.6 | — | $0.20 | $0.60 |
| Zhipu AI | glm-4.7 | — | $0.30 | $0.90 |
| Zhipu AI | glm5 | 🧠 | $3.00 | $9.00 |
| StepFun | step-3.5-flash | 🧠 | $0.10 | $0.30 |
| Cohere | command-a-3 | — | $2.50 | $10.00 |
| Cohere | command-a-vision | 🖼️ | $2.50 | $10.00 |
| Cohere | command-a-reasoning | 🧠 | $2.50 | $10.00 |
| Cohere | command-r | — | $0.15 | $0.60 |
| Cohere | command-r-plus | — | $2.50 | $10.00 |
| Cohere | command-r7b | ⚡ | $0.10 | $0.20 |
| Cohere | command-r7b-arabic | — | $0.10 | $0.20 |
| Cohere | aya-expanse-8b | — | $0.15 | $0.30 |
| Cohere | aya-expanse-32b | — | $0.40 | $1.20 |
| Cohere | aya-vision-8b | 🖼️ | $0.20 | $0.40 |
| Cohere | aya-vision-32b | 🖼️ | $0.50 | $1.50 |
| Phind | phind-90b | 💻 | $0.70 | $2.00 |
| Cogito AI | cogito-2.1 | 🧠 | $1.50 | $2.50 |
| RNJ AI | rnj-1 | — | $0.15 | $0.15 |
Image Model Pricing
| Model | Best For | 512×512 | 1024×1024 |
|-------|----------|---------|-----------|
| flux | General | $0.003 | $0.012 |
| zimage | General | $0.005 | $0.020 |
| klein | Art | $0.006 | $0.024 |
| klein-large | Pro / High Detail | $0.012 | $0.048 |
| gptimage | General | $0.040 | $0.160 |
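The two listed sizes differ by exactly the pixel ratio (1024² is 4× 512²), which suggests pricing scales with pixel count. Under that assumption (it is an inference from the table, not an official formula), a cost estimate for arbitrary dimensions:

```js
// Estimate image cost assuming price scales linearly with pixel count,
// anchored to the published 512x512 rates. An assumption, not an official formula.
const BASE_512 = { flux: 0.003, zimage: 0.005, klein: 0.006, "klein-large": 0.012, gptimage: 0.04 };

function estimateImageCost(model, width, height) {
  return BASE_512[model] * (width * height) / (512 * 512);
}
```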
Credits & Billing
Nexusify uses a credit-based system. All prices are in USD.
New accounts receive $25 in free credits every Monday, renewed automatically — no credit card required. Paid credits are available at any time from the dashboard and never expire; they are only drawn after free credits are exhausted.
When your credit balance is insufficient, the API returns a 402 status and the SDK throws an InsufficientCreditsError.
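Since prices are quoted per million tokens, a request's cost follows directly from its usage object. A sketch, assuming the OpenAI-style prompt_tokens / completion_tokens fields (in practice you would pull the rates from client.models.findById(...).pricing rather than hard-coding them; the exact pricing shape is an assumption):

```js
// Estimate USD cost from a usage object, given per-1M-token rates.
// Illustrative helper, not an SDK export.
function estimateTextCost(usage, inputPerM, outputPerM) {
  return (usage.prompt_tokens / 1e6) * inputPerM +
         (usage.completion_tokens / 1e6) * outputPerM;
}

// Example with the published gemini-2.5-flash rates ($0.15 in / $0.60 out per 1M):
// estimateTextCost({ prompt_tokens: 1200, completion_tokens: 300 }, 0.15, 0.60)
```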
Rate Limits
The Nexusify API allows up to 130 requests per minute. Exceeding this limit returns a 429 status and the SDK throws a RateLimitError. Credits consumed by failed requests are refunded automatically.
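A common way to stay under the limit is to retry rate-limited requests with exponential backoff. The loop below is our own sketch, not an SDK feature; it treats any error carrying status 429 or 500 as retryable:

```js
// Retry an async operation with exponential backoff when the failure
// is retryable (status 429 or 500). Illustrative only; not built into the SDK.
async function withBackoff(fn, { retries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retryable = err.status === 429 || err.status === 500;
      if (!retryable || attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
}

// Usage: await withBackoff(() => client.chat.completions.create({ ... }));
```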
Links
| Resource | URL |
|----------|-----|
| Website | nexusify.co |
| Dashboard | dash.nexusify.co/dashboard |
| Login | login.nexusify.co |
| Discord | discord.com/invite/4NXbPtYuHW |
| Docs | docs.nexusify.co |
| API Base URL | https://api.nexusify.co/v1 |
nexusify · v1.0.3 · MIT License
