
@paxsenix/ai (v0.1.5)

A lightweight and intuitive Node.js client for the Paxsenix AI API.
Easily integrate AI-powered chat completions, streaming responses, model listing, and more—right into your app.

Free to use with a rate limit of 5 requests per minute.
Need more? API key support with higher limits! :)



🚀 Features

  • Chat Completions – Generate AI-powered responses with ease
  • Streaming Responses – Get output in real-time as the AI types
  • Model Listing – Retrieve available model options
  • Planned – Image generation, embeddings, and more (coming soon)

📦 Installation

npm install @paxsenix/ai

📖 Usage

Initialize the Client

import PaxSenixAI from '@paxsenix/ai';

// Without API key (free access)
const paxsenix = new PaxSenixAI();

// With API key
const paxsenix = new PaxSenixAI('YOUR_API_KEY');

// Advanced usage
const paxsenix = new PaxSenixAI('YOUR_API_KEY', {
  timeout: 30000, // Request timeout in ms
  retries: 3, // Number of retry attempts
  retryDelay: 1000 // Delay between retries in ms
});

Chat Completions (Non-Streaming)

const response = await paxsenix.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Wassup beach' }
  ],
  temperature: 0.7,
  max_tokens: 100
});

console.log(response.choices[0].message.content);
console.log('Tokens used:', response.usage.total_tokens);

Or using resource-specific API:

const chatResponse = await paxsenix.Chat.createCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Who tf r u?' }
  ]
});

console.log(chatResponse.choices[0].message.content);

Chat Completions (Streaming)

// Simple callback approach
await paxsenix.Chat.streamCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || ''));

// With error handling
await paxsenix.Chat.streamCompletion({ 
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: 'Hello!' }
  ] 
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || ''),
  (error) => console.error('Error:', error),
  () => console.log('Done!')
);

// Using async generator (recommended)
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
})) {
  const content = chunk.choices?.[0]?.delta?.content;
  if (content) process.stdout.write(content);
}

List Available Models

const models = await paxsenix.listModels();
console.log(models.data);

🛠️ Error Handling

try {
  const response = await paxsenix.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  console.error('Status:', error.status);
  console.error('Message:', error.message);
  console.error('Data:', error.data);
}
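The client already supports a `retries` option, but the same pattern can be applied at the call site. A small generic helper like the one below (hypothetical, not part of the package) retries a call when it fails with a transient status such as 429, using the `error.status` shape shown above:

```javascript
// Hypothetical retry helper: re-runs an async call when it throws an
// error whose `status` is in `retryOn`, waiting `delayMs` between tries.
async function withRetry(fn, { retries = 3, delayMs = 1000, retryOn = [429, 500, 503] } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Non-transient errors are rethrown immediately
      if (!retryOn.includes(error.status)) throw error;
      if (attempt < retries) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}
```

For example: `await withRetry(() => paxsenix.createChatCompletion({ model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: 'Hello!' }] }))`.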

⏱️ Rate Limits

  • Free access allows up to 5 requests per minute.
  • An API key unlocks higher rate limits.
  • API keys also offer better stability and priority access.
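To stay under the free tier's 5 requests per minute on the client side, a minimal sliding-window limiter (illustrative only, not part of this package) could track recent call times and wait when the window is full:

```javascript
// Illustrative sliding-window rate limiter: allows at most `limit`
// calls per `windowMs` milliseconds, sleeping until a slot frees up.
class RateLimiter {
  constructor(limit = 5, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Resolves once a call slot is available, recording the call time.
  async acquire(now = Date.now()) {
    // Drop timestamps that have aged out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      const waitMs = this.windowMs - (now - this.timestamps[0]);
      await new Promise(r => setTimeout(r, waitMs));
      return this.acquire(Date.now());
    }
    this.timestamps.push(now);
  }
}
```

Usage would look like `const limiter = new RateLimiter(5, 60_000); await limiter.acquire(); const res = await paxsenix.createChatCompletion(...);` before each request.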

🚧 Upcoming Features

  • Image Generation
  • Embeddings Support

📜 License

MIT License. See LICENSE for full details. :)


💬 Feedback & Contributions

Pull requests and issues are welcome.
Feel free to fork, submit PRs, or just star the repo if it's helpful :P