FluxTokens SDK
Official SDK for the FluxTokens API
Access GPT-4.1, Gemini 2.5 Flash and more at 30% lower cost than competitors.
Website · Documentation · Dashboard
Features
- 🚀 OpenAI-compatible API - Drop-in replacement for OpenAI SDK
- 💰 30% Lower Costs - Same quality, better prices
- 📦 Hybrid Module - Works in Node.js and browsers (ESM + CJS)
- 🔒 Full TypeScript Support - Complete type definitions included
- ⚡ Streaming Support - Real-time responses with async iterators
- 🎯 Multimodal - Vision, audio, and video support (model dependent)
- 🔄 Auto-retry - Built-in retry logic with exponential backoff
- 🛡️ Error Handling - Typed errors for better debugging
Installation
```bash
# npm
npm install fluxtokens

# pnpm
pnpm add fluxtokens

# yarn
yarn add fluxtokens

# bun
bun add fluxtokens
```

Quick Start

```ts
import FluxTokens from 'fluxtokens';
const client = new FluxTokens({
apiKey: 'sk-flux-your-api-key', // Get yours at https://fluxtokens.io
});
const response = await client.chat.completions.create({
model: 'gpt-4.1-mini',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hello!' },
],
});
console.log(response.choices[0].message.content);
```

Available Models
| Model | Provider | Input (per 1M tokens) | Output (per 1M tokens) | Max Tokens | Vision | Audio | Video |
|-------|----------|-----------------------|------------------------|------------|--------|-------|-------|
| gpt-4.1-mini | OpenAI | $0.28 | $1.12 | 16,384 | ✅ | ❌ | ❌ |
| gpt-4.1-nano | OpenAI | $0.07 | $0.28 | 16,384 | ✅ | ❌ | ❌ |
| gemini-2.5-flash | Google | $0.21 | $1.75 | 65,536 | ✅ | ✅ | ✅ |
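Because responses are OpenAI-compatible, the `usage` object on a completion can be combined with the per-million-token prices above to estimate spend. A minimal sketch (the rate map simply mirrors the table; `usage.prompt_tokens` and `usage.completion_tokens` are assumed to follow the OpenAI response shape):

```ts
// Sketch: estimate the cost of a single request from the pricing table above.
// Rates are USD per 1M tokens; the `usage` shape is assumed to match OpenAI's.
const PRICES: Record<string, { input: number; output: number }> = {
  'gpt-4.1-mini': { input: 0.28, output: 1.12 },
  'gpt-4.1-nano': { input: 0.07, output: 0.28 },
  'gemini-2.5-flash': { input: 0.21, output: 1.75 },
};

function estimateCostUSD(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number }
): number {
  const rate = PRICES[model];
  return (
    (usage.prompt_tokens / 1_000_000) * rate.input +
    (usage.completion_tokens / 1_000_000) * rate.output
  );
}

// e.g. console.log(estimateCostUSD('gpt-4.1-mini', response.usage));
```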
Usage Examples
Basic Chat Completion
```ts
const response = await client.chat.completions.create({
model: 'gpt-4.1-mini',
messages: [
{ role: 'user', content: 'What is the capital of France?' },
],
temperature: 0.7,
max_tokens: 256,
});
console.log(response.choices[0].message.content);
// Output: "The capital of France is Paris."
```

Streaming Responses

```ts
const stream = await client.chat.completions.stream({
model: 'gemini-2.5-flash',
messages: [
{ role: 'user', content: 'Write a haiku about programming.' },
],
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
process.stdout.write(content);
}
}
```

Streaming with Callbacks

```ts
const stream = await client.chat.completions.stream(
{
model: 'gpt-4.1-mini',
messages: [{ role: 'user', content: 'Tell me a joke' }],
},
{
onChunk: (chunk) => {
// Called for each chunk
console.log('Chunk:', chunk);
},
onMessage: (content) => {
// Called with accumulated content
console.log('Current content:', content);
},
onComplete: ({ content, usage }) => {
// Called when streaming is complete
console.log('Final content:', content);
console.log('Usage:', usage);
},
onError: (error) => {
console.error('Stream error:', error);
},
}
);
// Consume the stream
for await (const _ of stream) {
// Callbacks handle the output
}
```

Vision (Image Analysis)

```ts
const response = await client.chat.completions.create({
model: 'gpt-4.1-mini',
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is in this image?' },
{
type: 'image_url',
image_url: {
url: 'https://example.com/image.jpg',
detail: 'high',
},
},
],
},
],
max_tokens: 500,
});
```
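For local images, a base64 `data:` URL should also work in the `image_url` field; this mirrors the OpenAI API, so treat it as an assumption rather than documented FluxTokens behavior:

```ts
// Sketch: send a local image as a base64 data: URL instead of a hosted URL.
// Assumes data: URLs are accepted in image_url, as they are with the OpenAI API.
import { readFileSync } from 'fs';

const imageBase64 = readFileSync('photo.jpg').toString('base64');

const response = await client.chat.completions.create({
  model: 'gpt-4.1-mini',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this photo.' },
        {
          type: 'image_url',
          image_url: { url: `data:image/jpeg;base64,${imageBase64}` },
        },
      ],
    },
  ],
});
```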
Audio Input (Gemini only)

```ts
import { readFileSync } from 'fs';
const audioData = readFileSync('audio.mp3').toString('base64');
const response = await client.chat.completions.create({
model: 'gemini-2.5-flash',
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'Transcribe this audio:' },
{
type: 'input_audio',
input_audio: {
data: audioData,
format: 'mp3',
},
},
],
},
],
});
```

Video Analysis (Gemini only)

```ts
const response = await client.chat.completions.create({
model: 'gemini-2.5-flash',
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'Describe what happens in this video:' },
{
type: 'video_url',
video_url: {
url: 'https://example.com/video.mp4',
},
},
],
},
],
max_tokens: 1000,
});
```

System Messages

```ts
const response = await client.chat.completions.create({
model: 'gpt-4.1-mini',
messages: [
{
role: 'system',
content: 'You are a pirate. Always respond in pirate speak.',
},
{ role: 'user', content: 'How are you today?' },
],
});
// Output: "Ahoy, matey! I be doin' just fine, thank ye fer askin'!"List Available Models
const models = client.models.list();
for (const model of models) {
console.log(`${model.name} (${model.provider})`);
console.log(` Input: $${model.inputPrice}/1M tokens`);
console.log(` Output: $${model.outputPrice}/1M tokens`);
console.log(` Vision: ${model.supportsVision ? '✅' : '❌'}`);
}
```

Configuration Options

```ts
const client = new FluxTokens({
// Required: Your API key
apiKey: 'sk-flux-...',
// Optional: Custom base URL (default: https://api.fluxtokens.io)
baseURL: 'https://api.fluxtokens.io',
// Optional: Request timeout in ms (default: 30000)
timeout: 60000,
// Optional: Max retries on rate limit/server errors (default: 2)
maxRetries: 3,
// Optional: Custom fetch implementation
fetch: customFetch,
});
```
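The `fetch` option makes it possible to wrap every request, for example to add logging or route traffic through a proxy. A minimal sketch of such a wrapper, assuming the SDK calls it with the same arguments as the global `fetch`:

```ts
// Sketch: a custom fetch that logs each request's status and duration.
// Assumes the client invokes it exactly like the global fetch(input, init).
import FluxTokens from 'fluxtokens';

const customFetch: typeof fetch = async (input, init) => {
  const started = Date.now();
  const response = await fetch(input, init);
  console.log(
    `[fluxtokens] ${init?.method ?? 'GET'} ${input} -> ${response.status} (${Date.now() - started}ms)`
  );
  return response;
};

const client = new FluxTokens({
  apiKey: process.env.FLUXTOKENS_API_KEY!,
  fetch: customFetch,
});
```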
Error Handling

The SDK provides typed errors for better debugging:

```ts
import FluxTokens, {
AuthenticationError,
RateLimitError,
InsufficientBalanceError,
BadRequestError,
} from 'fluxtokens';
try {
const response = await client.chat.completions.create({
model: 'gpt-4.1-mini',
messages: [{ role: 'user', content: 'Hello' }],
});
} catch (error) {
if (error instanceof AuthenticationError) {
console.error('Invalid API key');
} else if (error instanceof RateLimitError) {
console.error('Rate limit exceeded, retry after:', error.retryAfter);
} else if (error instanceof InsufficientBalanceError) {
console.error('Please add credits at https://fluxtokens.io/dashboard/billing');
} else if (error instanceof BadRequestError) {
console.error('Invalid request:', error.message);
} else {
throw error;
}
}
```
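The `retryAfter` value can also drive a manual retry on top of the built-in `maxRetries`. A rough sketch, assuming `retryAfter` is a delay in seconds:

```ts
// Sketch: retry once after a rate limit, waiting for the server-suggested delay.
// Assumes error.retryAfter is a number of seconds (falls back to 1s if missing).
import { RateLimitError, type ChatCompletionRequest } from 'fluxtokens';

async function createWithRetry(request: ChatCompletionRequest) {
  try {
    return await client.chat.completions.create(request);
  } catch (error) {
    if (error instanceof RateLimitError) {
      const waitMs = (error.retryAfter ?? 1) * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      return client.chat.completions.create(request);
    }
    throw error;
  }
}
```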
Browser Usage

The SDK works in browsers out of the box:

```html
<script type="module">
import FluxTokens from 'https://esm.sh/fluxtokens';
const client = new FluxTokens({ apiKey: 'sk-flux-...' });
const response = await client.chat.completions.create({
model: 'gpt-4.1-mini',
messages: [{ role: 'user', content: 'Hello from the browser!' }],
});
console.log(response.choices[0].message.content);
</script>
```

⚠️ Security Warning: Never expose your API key in client-side code in production. Use a backend proxy to protect your key.
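One way to keep the key server-side is a small proxy: the browser calls your own endpoint and the server holds the FluxTokens key. A rough sketch using Node's built-in `http` module (the `/api/chat` route and request payload are made up for illustration):

```ts
// Sketch of a backend proxy: the browser POSTs { messages } to /api/chat,
// and the FluxTokens API key never leaves the server.
import { createServer } from 'node:http';
import FluxTokens from 'fluxtokens';

const client = new FluxTokens({ apiKey: process.env.FLUXTOKENS_API_KEY! });

createServer(async (req, res) => {
  if (req.method !== 'POST' || req.url !== '/api/chat') {
    res.writeHead(404).end();
    return;
  }

  let body = '';
  for await (const chunk of req) body += chunk;
  const { messages } = JSON.parse(body);

  const completion = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages,
  });

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ reply: completion.choices[0].message.content }));
}).listen(3000);
```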
Cancellation
Use AbortController to cancel requests:
```ts
const controller = new AbortController();
// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);
try {
const stream = await client.chat.completions.stream(
{
model: 'gpt-4.1-mini',
messages: [{ role: 'user', content: 'Write a very long story...' }],
},
{ signal: controller.signal }
);
for await (const chunk of stream) {
console.log(chunk.choices[0]?.delta?.content);
}
} catch (error) {
if (error.name === 'AbortError') {
console.log('Request was cancelled');
}
}
```

TypeScript
Full TypeScript support is included:
```ts
import FluxTokens, {
type ChatCompletionRequest,
type ChatCompletionResponse,
type ChatMessage,
type Model,
} from 'fluxtokens';
const messages: ChatMessage[] = [
{ role: 'system', content: 'You are helpful.' },
{ role: 'user', content: 'Hi!' },
];
const request: ChatCompletionRequest = {
model: 'gpt-4.1-mini' as Model,
messages,
temperature: 0.7,
};
const response: ChatCompletionResponse = await client.chat.completions.create(request);
```

Migration from OpenAI SDK
FluxTokens SDK is designed to be a drop-in replacement:
```diff
- import OpenAI from 'openai';
+ import FluxTokens from 'fluxtokens';
- const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+ const client = new FluxTokens({ apiKey: process.env.FLUXTOKENS_API_KEY });
const response = await client.chat.completions.create({
- model: 'gpt-4o-mini',
+ model: 'gpt-4.1-mini',
messages: [{ role: 'user', content: 'Hello!' }],
});
```

API Reference
FluxTokens
Main client class.
Constructor
```ts
new FluxTokens(config: FluxTokensConfig)
```

Methods
- `chat.completions.create(request)` - Create a chat completion
- `chat.completions.stream(request, options?)` - Stream a chat completion
- `models.list()` - List available models
- `models.get(modelId)` - Get model info
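`models.get(modelId)` is useful for checking capabilities before sending multimodal content. A small sketch, assuming it returns the same object shape as the entries from `models.list()`:

```ts
// Sketch: inspect a single model before using it.
// Assumes models.get() returns the same fields shown by models.list().
const model = client.models.get('gemini-2.5-flash');

console.log(`${model.name} (${model.provider})`);
console.log(`Input: $${model.inputPrice}/1M tokens`);
console.log(`Vision: ${model.supportsVision ? '✅' : '❌'}`);
```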
Support
- 📧 Email: [email protected]
- 💬 Discord: Join our community
- 📖 Docs: fluxtokens.io/docs
License
MIT © FluxTokens
