@mvkproject/nexus
Official JavaScript SDK for Nexus API - All AI, One API
Powerful and easy-to-use SDK for the Nexus API, supporting image generation and text generation with multiple AI models.
📋 Table of Contents
- Features
- Installation
- Getting Your API Key
- Quick Start
- API Reference
- Examples
- Rate Limits
- Error Handling
- TypeScript Support
- Support & Community
- License
✨ Features
- 🎨 Image Generation - Generate images with 14+ AI models (Flux, Stable Diffusion, etc.)
- 🤖 Text Generation - Access 25+ LLM models (Gemini, GPT-4, Llama, Qwen, DeepSeek, Mistral, etc.)
- 🌊 Real-time Streaming - ChatGPT-like streaming responses for better UX
- 📝 Conversation History - Automatic context with `userid` (prompt format) or manual with `messages` array
- 👁️ Image Vision - Analyze images with Gemini and Llama vision models
- 🔄 OpenAI Compatible - Works with OpenAI format for easy migration
- 📦 TypeScript - Full TypeScript support with type definitions
- ⚡ ESM & CommonJS - Works with both `import` and `require`
- 🛡️ Error Handling - Comprehensive error handling with meaningful messages
- 🔧 Axios-based - Reliable HTTP client with automatic retries
📦 Installation
npm install @mvkproject/nexus
or
yarn add @mvkproject/nexus
🔑 Getting Your API Key
Getting your free API key is simple:
- Visit Nexus
- Click "Try Now For Free" and sign in (Discord or Google recommended)
- Return to the dashboard by clicking "Try Now For Free" again
- Scroll down to find the "Your API Key" section - that's your key!
Free Plan includes:
- ✅ 500 requests per day
- ✅ All 14 image generation models
- ✅ All 25+ AI text models
- ✅ Image vision support (Gemini & Llama)
- ✅ Conversation history
- ✅ Full feature access
🚀 Quick Start
ESM (Import)
import NexusClient from '@mvkproject/nexus';
const client = new NexusClient({ apiKey: 'YOUR_API_KEY' });
// Generate an image
const image = await client.image.generate({
prompt: 'A futuristic city at sunset',
model: 'flux',
width: 1024,
height: 768
});
console.log('Image URL:', image.imageUrl);
CommonJS (Require)
const { NexusClient } = require('@mvkproject/nexus');
const client = new NexusClient({ apiKey: 'YOUR_API_KEY' });
// Generate text
client.text.generate({
model: 'gemini-2.5-flash',
prompt: 'Explain quantum computing in simple terms',
temperature: 0.7
}).then(response => {
console.log(response.completion);
});
TypeScript
import NexusClient, { ImageGenerationOptions, TextGenerationOptions } from '@mvkproject/nexus';
const client = new NexusClient({ apiKey: process.env.NEXUS_API_KEY! });
const options: ImageGenerationOptions = {
prompt: 'A beautiful landscape',
model: 'flux',
width: 1024,
height: 768
};
const result = await client.image.generate(options);
📚 API Reference
Image Generation
Generate stunning images using 14 different AI models.
Generate Image
const result = await client.image.generate({
prompt: 'A beautiful sunset over mountains',
model: 'flux', // Optional, default: 'flux'
width: 1024, // Optional, default: 512
height: 768, // Optional, default: 512
download: false, // Optional, download image to local disk
downloadPath: './images' // Optional, path to save downloaded images
});
console.log(result.imageUrl); // Full URL to generated image
console.log(result.expiresIn); // Expiration time
console.log(result.model); // Model used
console.log(result.size); // Image dimensions
console.log(result.downloadedPath); // Path to downloaded file (if download: true)
Note: The SDK automatically adds the base URL (https://nexus.drexus.xyz) to image paths returned by the API, so imageUrl will be a complete URL ready to use.
Download Feature:
- Set `download: true` to automatically download the generated image to your local disk
- Use `downloadPath` to specify where to save the image (default: `./downloads`)
- The downloaded file path will be available in `result.downloadedPath`
Example with download:
const result = await client.image.generate({
prompt: 'A futuristic city',
model: 'flux',
download: true,
downloadPath: './my-images'
});
console.log('Image URL:', result.imageUrl);
console.log('Downloaded to:', result.downloadedPath);
Available Image Models
| Model | Description | Best For |
|-------|-------------|----------|
| flux | High-quality general purpose | Realistic images |
| flux-realism | Photo-realistic generation | Photography style |
| flux-anime | Anime-style images | Anime characters & art |
| flux-3d | 3D rendered style | 3D visualization |
| flux-pro | Professional quality | High-end results |
| any-dark | Dark mode optimized | Dark themes |
| turbo | Fast generation | Quick prototyping |
| pimp-diffusion | Stylized generation | Artistic effects |
| magister-diffusion | Master-level quality | Professional art |
| dolly-mini | Lightweight model | Low-resource environments |
| stable-diffusion | Classic SD model | General purpose |
| stable-diffusion-animation | Animation frames | Animation sequences |
| photo3d | 3D photo-like | 3D-like photos |
| willit | Experimental | Creative experiments |
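If you select a model programmatically, a plain lookup keyed on the desired style keeps the choice in one place. The sketch below is hypothetical (the style keys are invented for illustration; only the model names come from the table above):

```javascript
// Hypothetical helper (not part of the SDK): choose an image model by style,
// falling back to the general-purpose 'flux' model.
const MODEL_BY_STYLE = {
  realistic: 'flux-realism',
  anime: 'flux-anime',
  '3d': 'flux-3d',
  fast: 'turbo'
};

function pickModel(style) {
  return MODEL_BY_STYLE[style] ?? 'flux';
}

console.log(pickModel('anime'));        // flux-anime
console.log(pickModel('oil-painting')); // flux (fallback)
```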
Text Generation
Access 25+ advanced AI models for text generation with conversation history and streaming support.
Generate Text
Simple Prompt Format:
const response = await client.text.generate({
model: 'gemini-2.5-flash',
prompt: 'Explain quantum computing in simple terms',
temperature: 0.7, // Optional: 0-2, default 1.0
maxOutputTokens: 1024 // Optional: default 8192
});
console.log(response.completion);
With Conversation History:
const response = await client.text.generate({
model: 'gemini-2.5-flash',
prompt: 'Explain quantum computing',
userid: 'user123', // Enables automatic conversation history
systemInstruction: 'You are a helpful teacher', // Optional: control AI behavior
temperature: 0.7,
maxOutputTokens: 1024
});
console.log(response.completion);
console.log(response.historyLength); // Number of messages stored in history
OpenAI Compatible Format
The SDK supports OpenAI-style messages format for structured conversations. Unlike the simple prompt format, the messages array requires you to manually manage conversation history by including previous messages.
Single Exchange (No History):
const response = await client.text.generate({
model: 'llama-3.3-70b-instruct',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Write a short story about a robot.' }
],
max_tokens: 512, // OpenAI-style parameter
temperature: 0.8,
top_p: 0.9 // Nucleus sampling
});
console.log(response.completion);
Multi-turn Conversation (Manual History):
// First exchange
const response1 = await client.text.generate({
model: 'llama-3.3-70b-instruct',
messages: [
{ role: 'system', content: 'You are a helpful and technical assistant.' },
{ role: 'user', content: 'Hello, what is a CSV?' }
],
temperature: 0.7
});
// Second exchange - manually include previous messages
const response2 = await client.text.generate({
model: 'llama-3.3-70b-instruct',
messages: [
{ role: 'system', content: 'You are a helpful and technical assistant.' },
{ role: 'user', content: 'Hello, what is a CSV?' },
{ role: 'assistant', content: response1.completion }, // Previous AI response
{ role: 'user', content: 'And how do I open it?' } // New question
],
temperature: 0.7
});
// Response will reference CSV because you included the conversation history
Important: The `messages` format does NOT use the `userid` parameter. You must manually include previous user and assistant messages in the array to maintain context.
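To avoid rebuilding the array by hand on every turn, a small wrapper can accumulate the messages for you. The `Conversation` class below is a hypothetical helper, not part of the SDK:

```javascript
// Hypothetical helper (not part of the SDK): accumulate an OpenAI-style
// messages array so each call automatically carries the previous turns.
class Conversation {
  constructor(systemPrompt) {
    this.messages = [{ role: 'system', content: systemPrompt }];
  }

  // Add the user's next question and return the full array to send.
  ask(content) {
    this.messages.push({ role: 'user', content });
    return this.messages;
  }

  // Store the assistant's reply after each successful call.
  record(completion) {
    this.messages.push({ role: 'assistant', content: completion });
  }
}

const convo = new Conversation('You are a helpful and technical assistant.');
convo.ask('Hello, what is a CSV?');  // pass convo.messages to client.text.generate
convo.record('A CSV is a plain-text file of comma-separated values.');
convo.ask('And how do I open it?');  // the next call now includes both turns
console.log(convo.messages.length);  // 4: system, user, assistant, user
```

In practice you would pass `convo.ask(question)` as the `messages` option, then call `convo.record(response.completion)` after each response so the next exchange keeps its context.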
Why use messages format?
- ✅ Explicit Control: Full control over conversation history
- ✅ System Instructions: Separate system prompts from conversation
- ✅ OpenAI Compatible: Drop-in replacement for OpenAI API
- ✅ Multi-role Support: Clear separation of system, user, and assistant messages
Stream Text (Real-time)
await client.text.generateStream(
{
model: 'gemini-2.5-flash',
prompt: 'Write a story about AI',
temperature: 0.8
},
(chunk) => {
process.stdout.write(chunk); // Print each chunk as it arrives
}
);
Available AI Models
New Models ⭐
- `llama-4-maverick-17b-128e-instruct` - Latest Llama 4 instruction-tuned model
- `llama-3.2-90b-vision-instruct` - Vision model with image support (base64, URL, file upload)
- `llama-3.1-405b-instruct` - Largest, most powerful Llama 3.1 model
- `mistral-small-24b-instruct` - Efficient Mistral model
- `qwen3-235b-a22b` - Latest large-scale Qwen model
- `gpt-oss-120b` - Large open-source model
- `gpt-oss-20b` - Smaller open-source model
Google Gemini Models
- `gemini-2.5-flash` - Latest fast model ⭐
- `gemini-2.5-flash-lite` - Lightweight latest
- `gemini-2.5-pro` - Most capable ⭐
- `gemini-2.0-flash` - Fast and efficient
- `gemini-2.0-flash-lite` - Lightweight version
- `gemini-2.0-flash-exp` - Experimental
- `gemini-2.0-flash-thinking-exp` - With reasoning
- `gemini-exp-1206` - Experimental advanced
- `gemini-pro` - Original pro model
OpenAI Models
- `gpt-4` - Advanced reasoning
Meta AI Models
- `llama-3.3-70b-instruct` - Meta Llama 3.3
Google Gemma Models
- `gemma-7b` - Lightweight 7B
- `gemma-2-9b` - 9B instruction-tuned
Alibaba Cloud Models
- `qwen2.5-coder-32b` - Specialized for coding
Mistral AI Models
- `mixtral-8x22b` - Mixture-of-experts
DeepSeek Models
- `deepseek-r1` - Advanced reasoning
- `deepseek-v3.1` - With thinking mode
System Instructions
Control the AI's behavior, tone, and output format:
const response = await client.text.generate({
model: 'gemini-2.5-pro',
prompt: 'How do I center a div?',
systemInstruction: 'You are a senior web developer. Always provide modern CSS solutions with code examples.'
});
Use cases:
- 👤 Role Setting: "You are an expert programmer"
- 🎵 Tone Control: "Always respond in a friendly tone"
- 📄 Output Format: "Provide code examples with explanations"
- 🚫 Constraints: "Keep responses under 200 words"
Image Vision
Analyze images with Gemini and Llama vision models. Three input methods supported:
1. Image URL (string or array):
const response = await client.text.generate({
model: 'llama-3.2-90b-vision-instruct',
prompt: "What's in this image?",
images: 'https://example.com/photo.jpg'
});
// Multiple images
const response = await client.text.generate({
model: 'gemini-2.5-pro',
prompt: 'Compare these images',
images: [
'https://example.com/image1.jpg',
'https://example.com/image2.jpg'
]
]
});
2. Base64 encoded image:
const response = await client.text.generate({
model: 'llama-3.2-90b-vision-instruct',
prompt: 'Describe this image',
images: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...'
});
// Or with object format
const response = await client.text.generate({
model: 'gemini-2.5-flash',
prompt: 'Analyze this',
images: {
data: 'iVBORw0KGgoAAAANSUhEUgAA...',
mimeType: 'image/png'
}
});
Vision Models:
- `llama-3.2-90b-vision-instruct` - Meta's vision model
- `gemini-2.5-flash` - Google's vision-capable model
- `gemini-2.5-pro` - Google's most capable vision model
Supported formats: JPEG, PNG, WEBP, HEIC, HEIF
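When sending a local file with the `{ data, mimeType }` object format, the mimeType can be derived from the file extension. The `mimeTypeFor` helper below is a hypothetical sketch covering the formats listed above; it is not part of the SDK:

```javascript
// Hypothetical helper (not part of the SDK): map a filename's extension
// to the mimeType used in the { data, mimeType } image object.
const IMAGE_MIME_TYPES = {
  jpg: 'image/jpeg',
  jpeg: 'image/jpeg',
  png: 'image/png',
  webp: 'image/webp',
  heic: 'image/heic',
  heif: 'image/heif'
};

function mimeTypeFor(filename) {
  const ext = filename.split('.').pop().toLowerCase();
  const mime = IMAGE_MIME_TYPES[ext];
  if (!mime) throw new Error(`Unsupported image format: .${ext}`);
  return mime;
}

console.log(mimeTypeFor('photo.PNG')); // image/png

// Usage sketch with a local file:
// const data = require('node:fs').readFileSync('photo.png').toString('base64');
// images: { data, mimeType: mimeTypeFor('photo.png') }
```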
Automatic Conversation History (Prompt Format Only)
The Nexus API automatically manages conversation history when you use the prompt format with userid. This is the simplest way to build multi-turn conversations.
How it works:
- 📝 The API stores the last 10 messages per user automatically
- 🔄 History persists across API calls with the same `userid`
- 🧹 History is managed server-side - no cleanup needed
- ⚠️ Only works with `prompt` format (not with `messages` array)
Example - Automatic History:
// First conversation
const response1 = await client.text.generate({
model: 'gemini-2.5-flash',
prompt: 'My name is Sarah and I love hiking.',
userid: 'sarah-123', // Enable automatic history tracking
temperature: 0.7
});
// Later conversation - API automatically remembers the context
const response2 = await client.text.generate({
model: 'gemini-2.5-flash',
prompt: 'What activities do I enjoy?',
userid: 'sarah-123', // Same userid = automatic context
temperature: 0.7
});
// Response: "You mentioned that you love hiking!"
Comparison: Automatic vs Manual History
| Feature | Prompt + userid (Automatic) | Messages (Manual) |
|---------|---------------------------|-------------------|
| History Management | ✅ Automatic | ❌ Manual (you include previous messages) |
| userid parameter | ✅ Required | ❌ Not used |
| Previous messages | ✅ Stored by API | ❌ You must include them |
| Use case | Simple conversations | OpenAI compatibility, explicit control |
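Since automatic history is keyed entirely by `userid`, one thread per user and topic can be encoded in the id itself. The `threadId` helper below is a hypothetical convention, not part of the SDK:

```javascript
// Hypothetical helper (not part of the SDK): derive one userid per
// user + topic so each topic keeps its own server-side history.
function threadId(userId, topic) {
  // Normalize so 'Trip Planning' and 'trip planning' share one thread.
  return `${userId}-${topic.trim().toLowerCase().replace(/\s+/g, '-')}`;
}

console.log(threadId('sarah-123', 'Trip Planning')); // sarah-123-trip-planning
```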
Best Practices:
- ✅ Use unique `userid` per user/conversation thread
- ✅ Keep conversations focused on a single topic per `userid`
- ✅ For different topics, use different `userid` values
- ✅ For OpenAI compatibility or explicit control, use `messages` format instead
💡 Examples
Check out the examples/ directory for complete working examples:
- demo.js - Comprehensive SDK test suite
- quick-test.js - Quick verification test
Run examples:
export NEXUS_API_KEY="your-api-key-here"
npm run demo
⚡ Rate Limits
Free Plan
- 500 requests per day
- All 14 image models
- All 25+ AI text models
- Up to 2048x2048 image resolution
- Full feature access
- Image vision support (Gemini & Llama)
- Conversation history
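If you want to stop before the server rejects requests, a client-side counter can track usage against the daily quota. This is a hypothetical sketch (not part of the SDK), assuming the quota resets per UTC day:

```javascript
// Hypothetical client-side guard (not part of the SDK): count calls per
// UTC day so you can back off before the 500-requests/day limit is hit.
function makeRateGuard(dailyLimit = 500) {
  let day = new Date().toISOString().slice(0, 10); // e.g. '2024-05-01'
  let used = 0;
  return {
    take() {
      const today = new Date().toISOString().slice(0, 10);
      if (today !== day) { day = today; used = 0; } // new day: reset counter
      if (used >= dailyLimit) return false;          // quota spent: caller backs off
      used += 1;
      return true;
    },
    remaining() { return dailyLimit - used; }
  };
}

const guard = makeRateGuard(2); // tiny limit for demonstration
console.log(guard.take(), guard.take(), guard.take()); // true true false
```

Check `guard.take()` before each SDK call and queue or reject the request when it returns false.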
🛡️ Error Handling
The SDK provides comprehensive error handling:
try {
const result = await client.image.generate({
prompt: 'A beautiful landscape'
});
} catch (error) {
console.error('Error:', error.message);
// Possible errors:
// - "Unauthorized: Invalid or missing API key" (401)
// - "Too Many Requests: Daily limit exceeded" (429)
// - "Bad Request: Missing prompt or invalid parameters" (400)
// - "Server Error: Image generation failed" (500)
// - "Network Error: Unable to reach the Nexus API"
}
Error Codes:
| Code | Description |
|------|-------------|
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid API key |
| 403 | Forbidden - Access denied |
| 404 | Not Found - Resource doesn't exist |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Server Error - Internal server error |
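The 429 and 5xx codes are transient, so those requests can be retried with exponential backoff. The wrapper below is a hypothetical sketch, not part of the SDK; the `error.status` / `error.response.status` shapes are assumptions about how the underlying Axios error surfaces:

```javascript
// Hypothetical retry wrapper (not part of the SDK): retries a request on
// 429/5xx responses with exponential backoff, rethrowing other errors.
async function withRetries(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const status = error.status ?? error.response?.status; // shape may vary
      const retryable = status === 429 || (status >= 500 && status < 600);
      if (!retryable || attempt >= retries) throw error;
      // Wait 500ms, 1s, 2s, ... before the next attempt.
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage sketch:
// const result = await withRetries(() =>
//   client.image.generate({ prompt: 'A beautiful landscape' })
// );
```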
🔷 TypeScript Support
Full TypeScript support with comprehensive type definitions:
import NexusClient, {
NexusClientOptions,
ImageGenerationOptions,
ImageGenerationResponse,
TextGenerationOptions,
TextGenerationResponse,
AkinatorStartOptions,
AkinatorStartResponse,
AkinatorAnswerResponse
} from '@mvkproject/nexus';
const options: NexusClientOptions = {
apiKey: process.env.NEXUS_API_KEY!,
baseURL: 'https://nexus.drexus.xyz' // Optional
};
const client = new NexusClient(options);
All methods are fully typed with IntelliSense support.
💬 Support & Community
Need help or want to connect with other developers?
- 💬 Discord Server: Join our community
- 📧 Email Support: [email protected]
📄 License
MIT License - see the LICENSE file for details.
🔗 Links
- Homepage: https://nexus.drexus.xyz
- Documentation: https://nexus.drexus.xyz/utilities/api-docs-index
- Playground: https://nexus.drexus.xyz/utilities/playground
- npm Package: @mvkproject/nexus
Made with ❤️ by MVK Project
All AI, One API
