# nodegpt.js
Advanced AI text and image generation for Node.js with powerful models: Gemini 2.5, Mistral 3.1, Nova Fast, OpenAI GPT-5 Nano, OpenAI GPT-4.1 Nano, and Qwen 2.5 Coder. No API key needed, no limits, and it's totally free.
A tiny, production-minded Node.js wrapper for fast image + text generation. Simple API:
- `generateImage(prompt, options)` → Data URL (or saved file).
- `generateReply(prompt, options)` → text from a selection of powerful models.
- `generateReplyWithHistory(prompt, options)` → text with in-memory conversation history.
Built for devs who want powerful LLM/image primitives with almost-zero ceremony — drop in, call two functions, ship features.
## Install
```bash
npm install nodegpt.js
```

## Quick start
```js
const fs = require('fs');
const {
  generateImage,
  generateReply,
  generateReplyWithHistory,
  getConversationHistory,
  clearConversation,
  clearAllHistory
} = require('nodegpt.js');

(async () => {
  try {
    // generate image and save automatically
    const { dataUrl, saved } = await generateImage("A cyberpunk city at night", {
      width: 800,
      height: 600,
      enhance: true,
      nologo: true,
      saveTo: 'cyberpunk.png' // optional: saves raw image bytes
    });
    console.log('Saved:', saved); // 'cyberpunk.png'

    // generate text using a specific model
    const reply = await generateReply("Explain quantum computing simply.", {
      model: 'openai', // choose from the model list
      timeout: 8000
    });
    console.log(reply);

    // generate text with history persistence
    const res1 = await generateReplyWithHistory("Hello, who are you?", {
      conversationId: "chat1"
    });
    console.log(res1.text);

    const res2 = await generateReplyWithHistory("Can you continue?", {
      conversationId: "chat1"
    });
    console.log(res2.text);

    // inspect stored history
    console.log(getConversationHistory("chat1"));

    // clear history if needed
    clearConversation("chat1");
    clearAllHistory();
  } catch (err) {
    console.error('AI error:', err.message);
  }
})();
```

## API
### generateImage(prompt, options)
Generates an image for prompt and returns a Data URL (or saves to disk).
**Options**

- `width` (number) — default `512`
- `height` (number) — default `512`
- `seed` (number) — optional seed for reproducibility
- `enhance` (boolean) — default `true`
- `timeout` (ms) — default `10000` (per-endpoint)
- `saveTo` (string) — optional filename to write image bytes (returns `{ dataUrl, saved }`)
**Returns**

- If `saveTo` is provided: `{ dataUrl, saved: '<filename>' }`
- Otherwise: `dataUrl` string
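For reference, here is a minimal sketch of both return shapes, using only the options documented above; the prompts and the `fox.png` filename are placeholders, not part of the library:

```js
const { generateImage } = require('nodegpt.js');

(async () => {
  // without saveTo: the call resolves to a Data URL string
  const dataUrl = await generateImage("A watercolor fox", { width: 512, height: 512 });
  console.log(dataUrl.slice(0, 30)); // e.g. "data:image/..."

  // with saveTo: the call resolves to an object and writes the image bytes to disk
  const { saved } = await generateImage("A watercolor fox", {
    seed: 42,          // optional, for reproducibility
    saveTo: 'fox.png'  // placeholder filename
  });
  console.log('Saved:', saved); // 'fox.png'
})();
```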
### generateReply(prompt, options)
Generates text for prompt using one of the supported models.
**Options**

- `model` (string) — which model to use (see list below). Default: `'openai'`
- `seed` (number) — optional
- `timeout` (ms) — default `8000`
**Returns**

`string` — generated text
### generateReplyWithHistory(prompt, options)
Like generateReply, but persists conversation history in memory (per Node.js process).
**Options**

- `conversationId` (string) — conversation key. If omitted, a random one is created.
- `model` (string) — default `'openai'`
- `seed` (number) — optional
- `contextLimit` (number) — maximum total characters stored per conversation (default: `100000`). If exceeded, the library will auto-start a new conversation and return `{ newConversationStarted: true }`.
**Returns**

```js
{
  conversationId: string,
  text: string,
  newConversationStarted?: boolean
}
```
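A short sketch of handling the `newConversationStarted` flag from the return value above; the conversation ID and the lower `contextLimit` are illustrative values, not library defaults:

```js
const { generateReplyWithHistory } = require('nodegpt.js');

(async () => {
  const res = await generateReplyWithHistory("Summarize our chat so far.", {
    conversationId: "support-123", // your own key for this thread
    contextLimit: 50000            // example cap; the default is 100000
  });

  if (res.newConversationStarted) {
    // history exceeded contextLimit, so the library started a fresh conversation
    console.log('History reset; continuing under id:', res.conversationId);
  }
  console.log(res.text);
})();
```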
### Conversation helpers

- `getConversationHistory(conversationId)` → returns an array of `{ prompt, response, ts }`
- `clearConversation(conversationId)` → deletes history for one conversation
- `clearAllHistory()` → clears all conversations from memory
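A small sketch of reading the stored entries, assuming only the `{ prompt, response, ts }` shape listed above (the timestamp format is whatever the library stores):

```js
const { getConversationHistory } = require('nodegpt.js');

// print each stored turn of the "chat1" conversation from the quick start
for (const { prompt, response, ts } of getConversationHistory("chat1")) {
  console.log(`[${ts}] ${prompt} -> ${response}`);
}
```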
## Models
These are the models supported — use their name in `generateReply(..., { model: '<name>' })`.
| name | short description | input → output |
| ------------- | ------------------------------------: | -------------- |
| gemini | Gemini 2.5 Flash Lite (api.navy) | text → text |
| mistral | Mistral Small 3.1 24B | text → text |
| nova-fast | Amazon Nova Micro | text → text |
| openai | OpenAI GPT-5 Nano | text → text |
| openai-fast | OpenAI GPT-4.1 Nano (fast) | text → text |
| qwen-coder | Qwen 2.5 Coder 32B (coder-focused) | text → text |
| bidara | BIDARA (NASA research assistant) | text → text |
| midijourney | MIDIjourney (creative text2text tool) | text → text |
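For example, here is a minimal sketch of pointing `generateReply` at a specific entry from the table (`qwen-coder`, listed above as coder-focused); the prompt is just a placeholder:

```js
const { generateReply } = require('nodegpt.js');

(async () => {
  const answer = await generateReply("Write a debounce helper in JavaScript.", {
    model: 'qwen-coder', // name taken from the table above
    timeout: 8000
  });
  console.log(answer);
})();
```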
## Best practices
- Wrap calls in `try/catch` — network or DNS issues can still happen.
- Use the `timeout` option for responsive UX.
- For production high availability, use `saveTo` for images if you want to persist results server-side.
- For multi-user apps, consider mapping your own user IDs → `conversationId`s so history stays separate (see the sketch after this list).
- History is ephemeral (in-memory only). If the process restarts, history resets.
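As referenced in the list above, here is a minimal sketch of keeping per-user history separate by deriving a `conversationId` from your own user IDs; the `user-` prefix and helper names are assumptions, not part of the library:

```js
const { generateReplyWithHistory, clearConversation } = require('nodegpt.js');

// hypothetical helper: one conversation per app user
const conversationIdFor = (userId) => `user-${userId}`;

async function replyForUser(userId, prompt) {
  const { text } = await generateReplyWithHistory(prompt, {
    conversationId: conversationIdFor(userId)
  });
  return text;
}

// e.g. wipe a user's history when they sign out
function forgetUser(userId) {
  clearConversation(conversationIdFor(userId));
}
```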
## Error handling example
```js
const { generateImage } = require('nodegpt.js');

(async () => {
  try {
    const img = await generateImage("A blue whale made of glass", { timeout: 7000 });
    // use img...
  } catch (err) {
    console.error('generateImage failed:', err.message);
    // fallback logic or user-friendly message
  }
})();
```

## Security & privacy
- The package hides target endpoints in its internal implementation (for a clean API surface). If you need full transparency about upstream hosts, ask and we’ll enable a verbose/debug mode.
- History is stored only in process memory and is never persisted unless you explicitly save outputs via `saveTo`.
## Contributing
I'm working on a social media app; email [email protected] if you'd like to join me.
## License & Author
MIT

Author: Ismail Gidado & Textmob
