Blackboxify
A powerful Node.js client for BlackboxAI with an OpenAI-compatible interface, streaming support, multiple auth accounts, and token pricing.
Features
- 🔄 OpenAI-Compatible Interface: Drop-in replacement for OpenAI's chat completion API
- 🌊 Streaming Support: Real-time streaming responses with token counting
- 🔑 Multiple Auth Accounts: Automatic retry with multiple accounts on rate limits
- 💰 Token Pricing: Accurate token counting and cost estimation
- 🚀 High Performance: Optimized for speed and reliability
- 🛡️ Error Handling: Comprehensive error handling and rate limit detection
Installation
npm install blackboxify

Quick Start
import { BlackboxAI } from 'blackboxify';

const client = new BlackboxAI({
  models: {
    "blackboxai-default": {
      id: "blackboxai-default",
      input: 0,
      output: 0
    }
  },
  auth: [
    {
      email: "your-email@example.com",
      customer_id: "your_customer_id"
    }
  ]
});
// Basic chat completion
const response = await client.chat.completions.create({
  model: "blackboxai-default",
  messages: [
    { role: "user", content: "Hello, how are you?" }
  ],
  max_tokens: 50,
  temperature: 0.7
});
console.log(response.choices[0].message.content);

Streaming Example
const stream = await client.chat.completions.create({
  model: "blackboxai-default",
  messages: [
    { role: "user", content: "Tell me a story." }
  ],
  stream: true
});

for await (const chunk of stream) {
  // The final chunk's delta can omit content, so fall back to an empty string
  process.stdout.write(chunk.choices[0].delta.content ?? "");
}

Configuration
Client Options
const client = new BlackboxAI({
  models: {
    // Configure available models with pricing
    "model-name": {
      id: "model-identifier",
      input: 2.5,  // Cost per 1M input tokens
      output: 10   // Cost per 1M output tokens
    }
  },
  auth: [
    // Multiple auth accounts for automatic retry
    {
      email: "account1@example.com",
      customer_id: "cus_account1"
    },
    {
      email: "account2@example.com",
      customer_id: "cus_account2"
    }
  ]
});

Request Options
const response = await client.chat.completions.create({
  model: "model-name",       // Model identifier
  messages: [],              // Array of message objects
  max_tokens: 4096,          // Maximum tokens in response
  temperature: 0.7,          // Response randomness (0-1)
  stream: false,             // Enable streaming mode
  user: "user-identifier"    // Optional user identifier
});

Response Format
Regular Response
{
  id: "chatcmpl-123",
  object: "chat.completion",
  created: 1677858242,
  model: "model-name",
  choices: [{
    index: 0,
    message: {
      role: "assistant",
      content: "Response content here"
    },
    finish_reason: "stop"
  }],
  usage: {
    prompt_tokens: 10,
    completion_tokens: 20,
    total_tokens: 30,
    cost: 0.0004
  }
}

Streaming Response
{
  id: "chatcmpl-123",
  object: "chat.completion.chunk",
  created: 1677858242,
  model: "model-name",
  choices: [{
    index: 0,
    delta: {
      content: "Chunk content here"
    },
    finish_reason: null
  }],
  usage: {
    prompt_tokens: 10,
    completion_tokens: 5,
    total_tokens: 15,
    cost: 0.0002
  }
}
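Each chunk in the format above carries a cumulative usage object, so you can report total spend once the stream ends. A minimal sketch, assuming usage is present on every chunk as shown, reusing the stream from the Streaming Example:

let lastUsage = null;
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0].delta.content ?? "");
  if (chunk.usage) lastUsage = chunk.usage; // totals are cumulative per the format above
}
console.log(`\nTotal tokens: ${lastUsage?.total_tokens}, cost: $${lastUsage?.cost}`);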
Error Handling
try {
  const response = await client.chat.completions.create({
    model: "model-name",
    messages: [{ role: "user", content: "Hello" }]
  });
} catch (error) {
  if (error.statusCode === 429) {
    console.log("Rate limit reached, retrying with next account...");
  } else {
    console.error("Request failed:", error.message);
  }
}

Token Calculation
The client automatically calculates tokens for both input and output:
- Input tokens: Calculated from the messages array
- Output tokens: Calculated from the response content
- Cost: Based on model-specific pricing per 1M tokens, as in the worked example below
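As a worked example of the cost arithmetic, using the illustrative pricing from Client Options above ($2.50 per 1M input tokens, $10 per 1M output tokens):

// Hypothetical token counts, for illustration only
const promptTokens = 10;
const completionTokens = 20;

// Per-1M-token pricing from the Client Options example
const inputPrice = 2.5;
const outputPrice = 10;

const cost = (promptTokens / 1_000_000) * inputPrice
           + (completionTokens / 1_000_000) * outputPrice;
console.log(cost); // 0.000225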
Best Practices
- Multiple Auth Accounts: Configure multiple accounts for better reliability
- Error Handling: Always implement proper error handling
- Streaming: Use streaming for real-time responses and better UX
- Token Monitoring: Monitor token usage and costs
- Rate Limits: Handle rate limits gracefully (a retry sketch follows this list)
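To tie the last two bullets together, here is a minimal retry sketch with exponential backoff. The helper name, attempt count, and delays are illustrative, not part of the library, and it layers on top of the client's built-in multi-account retry:

async function createWithRetry(client, params, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await client.chat.completions.create(params);
    } catch (error) {
      // Give up on non-rate-limit errors, or once attempts are exhausted
      if (error.statusCode !== 429 || attempt === maxAttempts) throw error;
      const delayMs = 500 * 2 ** attempt; // 1s, then 2s
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

const response = await createWithRetry(client, {
  model: "model-name",
  messages: [{ role: "user", content: "Hello" }]
});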
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
