
vspagent

v2.0.0

Published

AI-powered agent with information about Vishnu Suresh Perumbavoor. Powered by Qwen2.5-0.5B for intelligent conversations.

Downloads

107

Readme

vspagent 🤖 AI-Powered Agent

Description

This npm package provides an AI-powered agent with information about Vishnu Suresh Perumbavoor, powered by Alibaba Cloud's Qwen2.5-0.5B model for intelligent conversations.

🆕 What's New in v2.0

  • AI-Powered Conversations - Chat with an AI that knows all about VSP
  • Qwen2.5-0.5B - Fast and efficient model by Alibaba Cloud (500M parameters)
  • Lightweight - Only ~500MB model download
  • Streaming Support - Real-time response streaming
  • Backward Compatible - Original static data API still works
  • Runs Locally - No API keys needed, runs on your machine via Transformers.js

Prerequisites

  • Node.js 18+ (for ESM support in Transformers.js)
  • 1-2GB RAM
  • Internet connection (first run only, to download ~500MB model)

Installation

npm install vspagent

Quick Start

🎯 Interactive Chat via CLI (Recommended!)

After installing, chat with the AI agent directly from the command line:

# Install globally
npm install -g vspagent

# Start chatting with the agent!
vspagent

Or install locally and use:

# Install in your project
npm install vspagent

# Run chat
npx vspagent
# or
npm run chat

This starts an interactive conversation with the AI agent where you can ask multiple questions!


Option 1: Static Data (Original Feature - Backward Compatible)

const vspagent = require('vspagent');

// Access static biodata
console.log(vspagent.name);        // "VSP Agent"
console.log(vspagent.creator);     // "Vishnu Suresh Perumbavoor"
console.log(vspagent.biodata);     // Full biodata object
console.log(vspagent.socials);     // Social media links

Option 2: AI-Powered Chat (New Feature 🚀)

const vspagent = require('vspagent');

async function chat() {
  // Initialize AI (first time downloads model ~500MB)
  await vspagent.initAI();
  
  // Ask questions about VSP
  const response = await vspagent.chat("Who is Vishnu Suresh Perumbavoor?");
  console.log(response);
  
  // Ask about accomplishments
  const response2 = await vspagent.chat("Tell me about VSP's achievements");
  console.log(response2);
  
  // Get social media info
  const response3 = await vspagent.chat("How can I connect with VSP?");
  console.log(response3);
}

chat();

Option 3: Interactive Continuous Chat (Best! 🎯)

# Run the interactive chat interface
node chat.js

This gives you a continuous, back-and-forth conversation:

🤖 VSP Agent - Interactive Chat Mode
============================================================
✅ Model loaded successfully!
============================================================

💬 You: Who is VSP?
🤖 VSP Agent: Vishnu Suresh Perumbavoor is an engineer, entrepreneur...

💬 You: What hackathons did he win?
🤖 VSP Agent: He won 3rd prize in Vaiga Agrihack 2023...

💬 You: Tell me more about his work
🤖 VSP Agent: He works at Trenser and has experience with...

💬 You: exit
👋 Thanks for chatting! Goodbye!

Option 4: Streaming Chat

const vspagent = require('vspagent');

async function streamChat() {
  await vspagent.initAI();
  
  // Streaming response (real-time output)
  await vspagent.chatStream("Tell me about VSP's hackathon achievements");
}

streamChat();
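Under the hood, streaming in Transformers.js is typically done with its TextStreamer helper. The sketch below is a hypothetical illustration of how chatStream could be wired up, not the package's actual code; generator and messages stand in for a loaded text-generation pipeline and a chat-message array.

```javascript
// Hypothetical sketch of token streaming with Transformers.js's TextStreamer
// (assumed internals, not vspagent's actual implementation).
async function streamWith(generator, messages) {
  // Dynamic import keeps this sketch free of top-level dependencies.
  const { TextStreamer } = await import('@huggingface/transformers');
  // Prints each decoded token to stdout as it is generated,
  // skipping the echoed prompt.
  const streamer = new TextStreamer(generator.tokenizer, { skip_prompt: true });
  return generator(messages, { max_new_tokens: 256, streamer });
}
```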

API Reference

Static Data (Original API)

All original properties are available:

vspagent.name              // "VSP Agent"
vspagent.creator           // "Vishnu Suresh Perumbavoor"
vspagent.founderOf         // "VSP dot AI"
vspagent.createdOn         // "28 April 2023"
vspagent.whoIsHe           // Array of roles
vspagent.interests         // Array of interests
vspagent.entertainments    // Array of entertainment preferences
vspagent.internships       // Array of internships
vspagent.placement         // Current placement
vspagent.accomplishments   // Array of accomplishments
vspagent.participations    // Array of event participations
vspagent.socials           // Object with social media links
vspagent.featured          // Featured media links
vspagent.biodata           // Complete biodata object

AI-Powered Methods (New)

initAI(options?)

Initialize the Qwen2.5-0.5B AI model.

Parameters:

  • options (object, optional): Model options
    • dtype (string): Data type/quantization. Default: "q4"
    • device (string): Device to use. Default: "auto"

Returns: Promise

Example:

// Initialize with default settings
await vspagent.initAI();

// Custom options
await vspagent.initAI({
  dtype: "q4",
  device: "cpu"
});

chat(userMessage, options?)

Chat with the AI bot about VSP.

Parameters:

  • userMessage (string): Your question or message
  • options (object, optional): Generation options
    • max_new_tokens (number): Maximum tokens to generate. Default: 512
    • temperature (number): Sampling temperature. Default: 0.7
    • do_sample (boolean): Enable sampling. Default: false

Returns: Promise - AI response

Example:

const response = await vspagent.chat(
  "What are VSP's interests?",
  { max_new_tokens: 256, temperature: 0.8 }
);

chatStream(userMessage, options?)

Stream AI responses in real-time (outputs to console).

Parameters:

  • userMessage (string): Your question
  • options (object, optional): Same as chat()

Returns: Promise

Example:

await vspagent.chatStream("Tell me about VSP's achievements");
// Output streams to console in real-time

getModelInfo()

Get information about the AI model being used.

Returns: Object with model details

Example:

const modelInfo = vspagent.getModelInfo();
console.log(modelInfo.name);        // "Qwen2.5-0.5B-Instruct"
console.log(modelInfo.size);        // "0.5B parameters"
console.log(modelInfo.provider);    // "Alibaba Cloud - Qwen Team"

AI Model

Qwen2.5-0.5B-Instruct by Alibaba Cloud

| Feature  | Details |
|----------|---------|
| Size     | 0.5B parameters (~500MB download) |
| Speed    | ⚡⚡⚡⚡ Very Fast |
| Quality  | ⭐⭐⭐ Good |
| RAM      | 1-2GB |
| Provider | Alibaba Cloud - Qwen Team |
| Model ID | onnx-community/Qwen2.5-0.5B-Instruct |

Why Qwen2.5-0.5B?

  • ✅ Lightweight and fast
  • ✅ Great for conversational AI
  • ✅ Excellent multilingual support (English & Chinese)
  • ✅ Low resource requirements
  • ✅ Optimized with ONNX and quantization

Examples

Basic Usage

const vspagent = require('vspagent');

// Check if AI is enabled
console.log('AI Enabled:', vspagent.aiEnabled);    // true
console.log('Version:', vspagent.version);         // "2.0.0"

// Use static data
console.log('Creator:', vspagent.creator);
console.log('LinkedIn:', vspagent.socials.linkedin);

AI Chat Examples

const vspagent = require('vspagent');

async function examples() {
  // Initialize once
  await vspagent.initAI();
  
  // Example 1: General info
  const resp1 = await vspagent.chat("Who is VSP?");
  
  // Example 2: Specific questions
  const resp2 = await vspagent.chat("What hackathons did VSP participate in?");
  
  // Example 3: Social media
  const resp3 = await vspagent.chat("Give me VSP's social media links");
  
  // Example 4: Interests
  const resp4 = await vspagent.chat("What does VSP like to do?");
}

examples();

Interactive Chat via Terminal

# If installed globally
vspagent

# If installed locally
npx vspagent

# Or using npm script
npm run chat

This opens an interactive chat interface where you can have continuous conversations with the AI agent!

Performance Tips

  1. First Run: Model downloads on first use (~500MB). Subsequent runs are instant.
  2. Memory: Uses ~1-2GB RAM during inference
  3. Caching: Model is initialized once and cached for subsequent calls
  4. Storage: Model cached in ~/.cache/huggingface/ (~500MB disk space needed)
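Tip 3's init-once behavior can be sketched as a simple memoized promise. This is illustrative only; loadModel below is a stand-in for the real pipeline construction and model download, not vspagent's actual internals.

```javascript
// Minimal sketch of the init-once caching pattern (illustrative;
// loadModel is a stand-in for the expensive ~500MB model download).
let modelPromise = null;

async function loadModel() {
  return { name: 'Qwen2.5-0.5B-Instruct' };
}

function initAI() {
  // Reuse one promise so concurrent callers share a single download
  // and later calls resolve instantly from the cache.
  if (!modelPromise) modelPromise = loadModel();
  return modelPromise;
}
```

Because the promise itself is cached, even overlapping calls made before the first load finishes all await the same download.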

How It Works

  1. Transformers.js: Uses HuggingFace Transformers.js for Node.js
  2. ONNX Runtime: Model runs via ONNX Runtime (optimized inference)
  3. Local Execution: Everything runs locally, no API calls needed
  4. Qwen2.5: Latest model from Alibaba Cloud's Qwen team
  5. Multilingual: Supports both English and Chinese natively
  6. Quantization: Q4 quantization for faster inference and smaller size
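The flow above can be sketched with the Transformers.js pipeline API. This is an assumed wiring based on the model ID and quantization settings stated in this README, not the package's exact code:

```javascript
// Sketch of the Transformers.js wiring described above (assumed internals).
const MODEL_ID = 'onnx-community/Qwen2.5-0.5B-Instruct';

async function createGenerator() {
  // Dynamic import so the sketch has no top-level dependency.
  const { pipeline } = await import('@huggingface/transformers');
  // First call downloads the q4-quantized ONNX weights (~500MB) into
  // ~/.cache/huggingface/ and reuses them on subsequent runs.
  return pipeline('text-generation', MODEL_ID, { dtype: 'q4' });
}
```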

Troubleshooting

Error: Module not found

# Make sure transformers is installed
npm install @huggingface/transformers

Model download fails

  • Check internet connection
  • Model downloads to ~/.cache/huggingface/ by default
  • Need ~500MB free disk space

Out of memory

  • Close other memory-intensive applications
  • Reduce max_new_tokens in chat options (try 128 or 256)
  • Restart Node.js process

Backward Compatibility

✅ All original v1.x features work unchanged:

const vspagent = require('vspagent');

// v1.x code still works perfectly
console.log(vspagent);              // Shows all properties
console.log(vspagent.name);         // "VSP Agent"
console.log(vspagent.creator);      // "Vishnu Suresh Perumbavoor"
console.log(vspagent.socials);      // Social links object

Version History

v2.0.0 (Latest) - AI-Powered Edition

  • Added AI chat capabilities powered by Qwen2.5-0.5B
  • Fast and lightweight model (500M parameters)
  • Streaming responses support
  • Transformers.js integration
  • Local execution (no API keys)

v1.0.x

  • Original static biodata API
  • Basic information export

Technical Details

  • Framework: Transformers.js v3.3.0+
  • Model: Qwen2.5-0.5B-Instruct (ONNX format)
  • Quantization: Q4 (4-bit quantization for efficiency)
  • Runtime: ONNX Runtime
  • Languages: English, Chinese (multilingual support)
  • Model Size: ~500MB download
  • Memory Usage: 1-2GB RAM during inference

Contact

For any inquiries or support, you can reach out to Vishnu Suresh Perumbavoor.

LinkedIn · GitHub · Twitter · Linktree · Instagram · Gmail

License

ISC

Credits

  • Created by: Vishnu Suresh Perumbavoor
  • AI Model: Qwen2.5-0.5B by Alibaba Cloud
  • Framework: HuggingFace Transformers.js
  • ONNX Models: onnx-community

🚀 Powered by Qwen2.5-0.5B - Alibaba Cloud's Latest AI Model