@paxsenix/ai
A lightweight and intuitive Node.js client for the Paxsenix AI API.
Easily integrate AI-powered chat completions, streaming responses, model listing, and more—right into your app.
Free to use with a rate limit of 5 requests per minute.
Need more? There's API key support with higher limits! :)
📋 Table of Contents

- Features
- Installation
- Usage
- Error Handling
- Rate Limits
- Upcoming Features
- License
- Feedback & Contributions
🚀 Features
- Chat Completions – Generate AI-powered responses with ease
- Streaming Responses – Get output in real-time as the AI types
- Model Listing – Retrieve available model options
- Planned – Image generation, embeddings, and more (coming soon)
📦 Installation
npm install @paxsenix/ai

📖 Usage
Initialize the Client
import PaxSenixAI from '@paxsenix/ai';
// Without API key (free access)
const paxsenix = new PaxSenixAI();
// With API key
const paxsenix = new PaxSenixAI('YOUR_API_KEY');
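// Or read the key from the environment instead of hard-coding it
// (illustrative sketch; PAXSENIX_API_KEY is just an example variable name)
const paxsenix = new PaxSenixAI(process.env.PAXSENIX_API_KEY);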
// Advanced usage
const paxsenix = new PaxSenixAI('YOUR_API_KEY', {
timeout: 30000, // Request timeout in ms
retries: 3, // Number of retry attempts
retryDelay: 1000 // Delay between retries in ms
});

Chat Completions (Non-Streaming)
const response = await paxsenix.createChatCompletion({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a sarcastic assistant.' },
{ role: 'user', content: 'Wassup beach' }
],
temperature: 0.7,
max_tokens: 100
});
console.log(response.choices[0].message.content);
console.log('Tokens used:', response.usage.total_tokens);
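Multi-turn conversations follow the same pattern: append the assistant's previous message to messages and call the API again. A minimal sketch reusing the response from above (the follow-up prompt is invented for illustration):

const followUp = await paxsenix.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Wassup beach' },
    response.choices[0].message, // assistant's reply from the previous call
    { role: 'user', content: 'Tell me more.' }
  ]
});
console.log(followUp.choices[0].message.content);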
Or using the resource-specific API:

const chatResponse = await paxsenix.Chat.createCompletion({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a sarcastic assistant.' },
{ role: 'user', content: 'Who tf r u?' }
]
});
console.log(chatResponse.choices[0].message.content);

Chat Completions (Streaming)
// Simple callback approach
await paxsenix.Chat.streamCompletion({
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: 'Hello!' }]
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || '')
);
// With error handling
await paxsenix.Chat.streamCompletion({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'user', content: 'Hello!' }
]
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || ''),
(error) => console.error('Error:', error),
() => console.log('Done!')
);
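// Accumulate the full reply from the streamed deltas (an illustrative
// sketch built on the two-argument callback form shown above)
let fullText = '';
await paxsenix.Chat.streamCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
}, (chunk) => { fullText += chunk.choices[0]?.delta?.content || ''; });
console.log('Full reply:', fullText);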
// Using async generator (recommended)
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
model: 'gpt-3.5-turbo',
messages: [
{ role: 'user', content: 'Hello!' }
]
})) {
const content = chunk.choices?.[0]?.delta?.content;
if (content) process.stdout.write(content);
}
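Because streamCompletionAsync is consumed with for await...of, you can also stop reading early with a plain break. Standard JavaScript iterator semantics apply, but whether the underlying request gets cancelled on early exit is up to the library, so treat this as a sketch:

let received = '';
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
})) {
  received += chunk.choices?.[0]?.delta?.content || '';
  if (received.length > 200) break; // stop once we have enough text
}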
List Available Models

const models = await paxsenix.listModels();
console.log(models.data);
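Assuming each entry in models.data is an OpenAI-style model object with an id field (the README doesn't spell out the exact shape), you can check for a specific model like so:

const hasModel = models.data.some((m) => m.id === 'gpt-3.5-turbo');
console.log('gpt-3.5-turbo available:', hasModel);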
🛠️ Error Handling

try {
const response = await paxsenix.createChatCompletion({
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: 'Hello!' }]
});
} catch (error) {
console.error('Status:', error.status);
console.error('Message:', error.message);
console.error('Data:', error.data);
}

⏱️ Rate Limits
- Free access allows up to 5 requests per minute (a simple backoff sketch follows at the end of this section).
- Higher rate limits and API key support are planned.
- API keys will offer better stability and priority access.
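If you do hit the limit, the built-in retries option shown earlier may already cover you; handling it by hand looks roughly like the sketch below. It assumes (not confirmed by this README) that rate-limited requests reject with status 429:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function completeWithBackoff(params, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await paxsenix.createChatCompletion(params);
    } catch (error) {
      // 429 is an assumption; adjust to whatever status the API returns
      if (error.status !== 429 || i === attempts - 1) throw error;
      await sleep((i + 1) * 15000); // free tier allows 5 requests/minute
    }
  }
}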
🚧 Upcoming Features
- Image Generation
- Embeddings Support
📜 License
MIT License. See LICENSE for full details. :)
💬 Feedback & Contributions
Pull requests and issues are welcome.
Feel free to fork, submit PRs, or just star the repo if it's helpful :P
