
@mvkproject/nexus

v6.3.3

Free AI SDK with API key (500 free daily requests). Access 25+ LLM models (GPT-4, Gemini, Llama, DeepSeek), generate images with 14+ models (Flux, Stable Diffusion), and integrate the Akinator game - all completely free.

Readme

@mvkproject/nexus

Official JavaScript SDK for Nexus API - All AI, One API

Powerful and easy-to-use SDK for the Nexus API, supporting image generation and text generation with multiple AI models.


✨ Features

  • 🎨 Image Generation - Generate images with 14+ AI models (Flux, Stable Diffusion, etc.)
  • 🤖 Text Generation - Access 25+ LLM models (Gemini, GPT-4, Llama, Qwen, DeepSeek, Mistral, etc.)
  • 🌊 Real-time Streaming - ChatGPT-like streaming responses for better UX
  • 📝 Conversation History - Automatic context with userid (prompt format) or manual with messages array
  • 👁️ Image Vision - Analyze images with Gemini and Llama vision models
  • 🔄 OpenAI Compatible - Works with OpenAI format for easy migration
  • 📦 TypeScript - Full TypeScript support with type definitions
  • ESM & CommonJS - Works with both import and require
  • 🛡️ Error Handling - Comprehensive error handling with meaningful messages
  • 🔧 Axios-based - Reliable HTTP client with automatic retries

📦 Installation

npm install @mvkproject/nexus

or

yarn add @mvkproject/nexus

🔑 Getting Your API Key

Getting your free API key is simple:

  1. Visit Nexus
  2. Click "Try Now For Free" and sign in (Discord or Google recommended)
  3. Return to the dashboard by clicking "Try Now For Free" again
  4. Scroll down to find the "Your API Key" section - that's your key!

Free Plan includes:

  • ✅ 500 requests per day
  • ✅ All 14 image generation models
  • ✅ All 25+ AI text models
  • ✅ Image vision support (Gemini & Llama)
  • ✅ Conversation history
  • ✅ Full feature access

🚀 Quick Start

ESM (Import)

import NexusClient from '@mvkproject/nexus';

const client = new NexusClient({ apiKey: 'YOUR_API_KEY' });

// Generate an image
const image = await client.image.generate({
  prompt: 'A futuristic city at sunset',
  model: 'flux',
  width: 1024,
  height: 768
});

console.log('Image URL:', image.imageUrl);

CommonJS (Require)

const { NexusClient } = require('@mvkproject/nexus');

const client = new NexusClient({ apiKey: 'YOUR_API_KEY' });

// Generate text
client.text.generate({
  model: 'gemini-2.5-flash',
  prompt: 'Explain quantum computing in simple terms',
  temperature: 0.7
}).then(response => {
  console.log(response.completion);
});

TypeScript

import NexusClient, { ImageGenerationOptions, TextGenerationOptions } from '@mvkproject/nexus';

const client = new NexusClient({ apiKey: process.env.NEXUS_API_KEY! });

const options: ImageGenerationOptions = {
  prompt: 'A beautiful landscape',
  model: 'flux',
  width: 1024,
  height: 768
};

const result = await client.image.generate(options);

📚 API Reference

Image Generation

Generate stunning images using 14 different AI models.

Generate Image

const result = await client.image.generate({
  prompt: 'A beautiful sunset over mountains',
  model: 'flux',           // Optional, default: 'flux'
  width: 1024,             // Optional, default: 512
  height: 768,             // Optional, default: 512
  download: false,         // Optional, download image to local disk
  downloadPath: './images' // Optional, path to save downloaded images
});

console.log(result.imageUrl);     // Full URL to generated image
console.log(result.expiresIn);    // Expiration time
console.log(result.model);        // Model used
console.log(result.size);         // Image dimensions
console.log(result.downloadedPath); // Path to downloaded file (if download: true)

Note: The SDK automatically adds the base URL (https://nexus.drexus.xyz) to image paths returned by the API, so imageUrl will be a complete URL ready to use.

Download Feature:

  • Set download: true to automatically download the generated image to your local disk
  • Use downloadPath to specify where to save the image (default: ./downloads)
  • The downloaded file path will be available in result.downloadedPath

Example with download:

const result = await client.image.generate({
  prompt: 'A futuristic city',
  model: 'flux',
  download: true,
  downloadPath: './my-images'
});

console.log('Image URL:', result.imageUrl);
console.log('Downloaded to:', result.downloadedPath);

Available Image Models

| Model | Description | Best For |
|-------|-------------|----------|
| flux | High-quality general purpose | Realistic images |
| flux-realism | Photo-realistic generation | Photography style |
| flux-anime | Anime-style images | Anime characters & art |
| flux-3d | 3D rendered style | 3D visualization |
| flux-pro | Professional quality | High-end results |
| any-dark | Dark mode optimized | Dark themes |
| turbo | Fast generation | Quick prototyping |
| pimp-diffusion | Stylized generation | Artistic effects |
| magister-diffusion | Master-level quality | Professional art |
| dolly-mini | Lightweight model | Low-resource environments |
| stable-diffusion | Classic SD model | General purpose |
| stable-diffusion-animation | Animation frames | Animation sequences |
| photo3d | 3D photo-like | 3D-like photos |
| willit | Experimental | Creative experiments |


Text Generation

Access 25+ advanced AI models for text generation with conversation history and streaming support.

Generate Text

Simple Prompt Format:

const response = await client.text.generate({
  model: 'gemini-2.5-flash',
  prompt: 'Explain quantum computing in simple terms',
  temperature: 0.7,                               // Optional: 0-2, default 1.0
  maxOutputTokens: 1024                           // Optional: default 8192
});

console.log(response.completion);

With Conversation History:

const response = await client.text.generate({
  model: 'gemini-2.5-flash',
  prompt: 'Explain quantum computing',
  userid: 'user123',                              // Enables automatic conversation history
  systemInstruction: 'You are a helpful teacher', // Optional: control AI behavior
  temperature: 0.7,
  maxOutputTokens: 1024
});

console.log(response.completion);
console.log(response.historyLength);  // Number of messages stored in history

OpenAI Compatible Format

The SDK supports OpenAI-style messages format for structured conversations. Unlike the simple prompt format, the messages array requires you to manually manage conversation history by including previous messages.

Single Exchange (No History):

const response = await client.text.generate({
  model: 'llama-3.3-70b-instruct',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Write a short story about a robot.' }
  ],
  max_tokens: 512,      // OpenAI-style parameter
  temperature: 0.8,
  top_p: 0.9            // Nucleus sampling
});

console.log(response.completion);

Multi-turn Conversation (Manual History):

// First exchange
const response1 = await client.text.generate({
  model: 'llama-3.3-70b-instruct',
  messages: [
    { role: 'system', content: 'You are a helpful and technical assistant.' },
    { role: 'user', content: 'Hello, what is a CSV?' }
  ],
  temperature: 0.7
});

// Second exchange - manually include previous messages
const response2 = await client.text.generate({
  model: 'llama-3.3-70b-instruct',
  messages: [
    { role: 'system', content: 'You are a helpful and technical assistant.' },
    { role: 'user', content: 'Hello, what is a CSV?' },
    { role: 'assistant', content: response1.completion },  // Previous AI response
    { role: 'user', content: 'And how do I open it?' }      // New question
  ],
  temperature: 0.7
});
// Response will reference CSV because you included the conversation history

Important: The messages format does NOT use the userid parameter. You must manually include previous user and assistant messages in the array to maintain context.

Why use messages format?

  • Explicit Control: Full control over conversation history
  • System Instructions: Separate system prompts from conversation
  • OpenAI Compatible: Drop-in replacement for OpenAI API
  • Multi-role Support: Clear separation of system, user, and assistant messages

Stream Text (Real-time)

await client.text.generateStream(
  {
    model: 'gemini-2.5-flash',
    prompt: 'Write a story about AI',
    temperature: 0.8
  },
  (chunk) => {
    process.stdout.write(chunk);  // Print each chunk as it arrives
  }
);
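If you also need the complete text once streaming finishes, a small wrapper can accumulate the chunks. This is a sketch, not part of the SDK: `streamToString` is a hypothetical helper, and it assumes the `(options, onChunk)` signature shown above.

```javascript
// Sketch: accumulate streamed chunks into the final completion while
// still reacting to each chunk as it arrives. `streamFn` stands in for
// client.text.generateStream (assumed (options, onChunk) signature).
async function streamToString(streamFn, options, onChunk = () => {}) {
  let fullText = '';
  await streamFn(options, (chunk) => {
    fullText += chunk;   // collect the full completion
    onChunk(chunk);      // and still surface each chunk live
  });
  return fullText;
}

// Usage:
// const story = await streamToString(
//   (opts, cb) => client.text.generateStream(opts, cb),
//   { model: 'gemini-2.5-flash', prompt: 'Write a story about AI' },
//   (chunk) => process.stdout.write(chunk)
// );
```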

Available AI Models

New Models ⭐

  • llama-4-maverick-17b-128e-instruct - Latest Llama 4 instruction-tuned model
  • llama-3.2-90b-vision-instruct - Vision model with image support (base64, URL, file upload)
  • llama-3.1-405b-instruct - Largest, most powerful Llama 3.1 model
  • mistral-small-24b-instruct - Efficient Mistral model
  • qwen3-235b-a22b - Latest large-scale Qwen model
  • gpt-oss-120b - Large open-source model
  • gpt-oss-20b - Smaller open-source model

Google Gemini Models

  • gemini-2.5-flash - Latest fast model ⭐
  • gemini-2.5-flash-lite - Lightweight latest
  • gemini-2.5-pro - Most capable ⭐
  • gemini-2.0-flash - Fast and efficient
  • gemini-2.0-flash-lite - Lightweight version
  • gemini-2.0-flash-exp - Experimental
  • gemini-2.0-flash-thinking-exp - With reasoning
  • gemini-exp-1206 - Experimental advanced
  • gemini-pro - Original pro model

OpenAI Models

  • gpt-4 - Advanced reasoning

Meta AI Models

  • llama-3.3-70b-instruct - Meta Llama 3.3

Google Gemma Models

  • gemma-7b - Lightweight 7B
  • gemma-2-9b - 9B instruction-tuned

Alibaba Cloud Models

  • qwen2.5-coder-32b - Specialized for coding

Mistral AI Models

  • mixtral-8x22b - Mixture-of-experts

DeepSeek Models

  • deepseek-r1 - Advanced reasoning
  • deepseek-v3.1 - With thinking mode

System Instructions

Control the AI's behavior, tone, and output format:

const response = await client.text.generate({
  model: 'gemini-2.5-pro',
  prompt: 'How do I center a div?',
  systemInstruction: 'You are a senior web developer. Always provide modern CSS solutions with code examples.'
});

Use cases:

  • 👤 Role Setting: "You are an expert programmer"
  • 🎵 Tone Control: "Always respond in a friendly tone"
  • 📄 Output Format: "Provide code examples with explanations"
  • 🚫 Constraints: "Keep responses under 200 words"

Image Vision

Analyze images with Gemini and Llama vision models. Three input methods supported:

1. Image URL (string or array):

const response = await client.text.generate({
  model: 'llama-3.2-90b-vision-instruct',
  prompt: "What's in this image?",
  images: 'https://example.com/photo.jpg'
});

// Multiple images
const response = await client.text.generate({
  model: 'gemini-2.5-pro',
  prompt: 'Compare these images',
  images: [
    'https://example.com/image1.jpg',
    'https://example.com/image2.jpg'
  ]
});

2. Base64 encoded image:

const response = await client.text.generate({
  model: 'llama-3.2-90b-vision-instruct',
  prompt: 'Describe this image',
  images: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...'
});

// Or with object format
const response = await client.text.generate({
  model: 'gemini-2.5-flash',
  prompt: 'Analyze this',
  images: {
    data: 'iVBORw0KGgoAAAANSUhEUgAA...',
    mimeType: 'image/png'
  }
});

Vision Models:

  • llama-3.2-90b-vision-instruct - Meta's vision model
  • gemini-2.5-flash - Google's vision-capable model
  • gemini-2.5-pro - Google's most capable vision model

Supported formats: JPEG, PNG, WEBP, HEIC, HEIF

Automatic Conversation History (Prompt Format Only)

The Nexus API automatically manages conversation history when you use the prompt format with userid. This is the simplest way to build multi-turn conversations.

How it works:

  • 📝 The API stores the last 10 messages per user automatically
  • 🔄 History persists across API calls with the same userid
  • 🧹 History is managed server-side - no cleanup needed
  • ⚠️ Only works with prompt format (not with messages array)

Example - Automatic History:

// First conversation
const response1 = await client.text.generate({
  model: 'gemini-2.5-flash',
  prompt: 'My name is Sarah and I love hiking.',
  userid: 'sarah-123',  // Enable automatic history tracking
  temperature: 0.7
});

// Later conversation - API automatically remembers the context
const response2 = await client.text.generate({
  model: 'gemini-2.5-flash',
  prompt: 'What activities do I enjoy?',
  userid: 'sarah-123',  // Same userid = automatic context
  temperature: 0.7
});
// Response: "You mentioned that you love hiking!"

Comparison: Automatic vs Manual History

| Feature | Prompt + userid (Automatic) | Messages (Manual) |
|---------|-----------------------------|-------------------|
| History Management | ✅ Automatic | ❌ Manual (you include previous messages) |
| userid parameter | ✅ Required | ❌ Not used |
| Previous messages | ✅ Stored by API | ❌ You must include them |
| Use case | Simple conversations | OpenAI compatibility, explicit control |

Best Practices:

  • ✅ Use unique userid per user/conversation thread
  • ✅ Keep conversations focused on a single topic per userid
  • ✅ For different topics, use different userid values
  • ✅ For OpenAI compatibility or explicit control, use messages format instead

💡 Examples

Check out the examples/ directory for complete working examples.

Run examples:

export NEXUS_API_KEY="your-api-key-here"
npm run demo

⚡ Rate Limits

Free Plan

  • 500 requests per day
  • All 14 image models
  • All 25+ AI text models
  • Up to 2048x2048 image resolution
  • Full feature access
  • Image vision support (Gemini & Llama)
  • Conversation history
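Since the free plan is capped at 500 requests per day, you may want a local guard that fails fast instead of burning requests the API will reject with a 429. This is a sketch: `DailyBudget` and `generateWithBudget` are hypothetical helpers, the limit is enforced server-side regardless, and resetting on the local calendar day is an assumption about when the server's window rolls over.

```javascript
// Sketch: a client-side counter that fails fast once the free plan's
// 500-requests/day budget is spent. The real limit is enforced by the
// API; this only avoids requests you already know will be rejected.
class DailyBudget {
  constructor(limit = 500) {
    this.limit = limit;
    this.count = 0;
    this.day = new Date().toDateString();
  }

  take() {
    const today = new Date().toDateString();
    if (today !== this.day) {
      // A new calendar day: reset the counter.
      this.day = today;
      this.count = 0;
    }
    if (this.count >= this.limit) return false;
    this.count += 1;
    return true;
  }
}

const budget = new DailyBudget(500);

// Hypothetical wrapper around a generate call:
async function generateWithBudget(client, options) {
  if (!budget.take()) {
    throw new Error('Local budget exhausted: 500 requests/day on the free plan');
  }
  return client.text.generate(options);
}
```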

🛡️ Error Handling

The SDK provides comprehensive error handling:

try {
  const result = await client.image.generate({
    prompt: 'A beautiful landscape'
  });
} catch (error) {
  console.error('Error:', error.message);
  
  // Possible errors:
  // - "Unauthorized: Invalid or missing API key" (401)
  // - "Too Many Requests: Daily limit exceeded" (429)
  // - "Bad Request: Missing prompt or invalid parameters" (400)
  // - "Server Error: Image generation failed" (500)
  // - "Network Error: Unable to reach the Nexus API"
}
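The feature list mentions automatic retries via the axios-based client; if you want explicit control, a small wrapper can retry only the transient failures with exponential backoff. A sketch: `withRetry` is a hypothetical helper, and matching on the error message text is an assumption based on the messages listed above.

```javascript
// Sketch: retry transient failures (network errors, 500s) with
// exponential backoff, and rethrow everything else immediately.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Assumed transient-error detection, based on the messages above.
      const transient = /Server Error|Network Error/.test(error.message);
      if (!transient || attempt >= retries) throw error;
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage:
// const result = await withRetry(() =>
//   client.image.generate({ prompt: 'A beautiful landscape' })
// );
```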

Error Codes:

| Code | Description |
|------|-------------|
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid API key |
| 403 | Forbidden - Access denied |
| 404 | Not Found - Resource doesn't exist |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Server Error - Internal server error |


🔷 TypeScript Support

Full TypeScript support with comprehensive type definitions:

import NexusClient, {
  NexusClientOptions,
  ImageGenerationOptions,
  ImageGenerationResponse,
  TextGenerationOptions,
  TextGenerationResponse,
  AkinatorStartOptions,
  AkinatorStartResponse,
  AkinatorAnswerResponse
} from '@mvkproject/nexus';

const options: NexusClientOptions = {
  apiKey: process.env.NEXUS_API_KEY!,
  baseURL: 'https://nexus.drexus.xyz'  // Optional
};

const client = new NexusClient(options);

All methods are fully typed with IntelliSense support.


💬 Support & Community

Need help or want to connect with other developers?


📄 License

MIT License - see the LICENSE file for details.


Made with ❤️ by MVK Project

All AI, One API