

Vezlo AI Assistant Server


🚀 Production-ready Node.js/TypeScript API server for the Vezlo AI Assistant platform: complete backend APIs with advanced RAG (chunk-based semantic search + adjacent retrieval), Docker deployment, and database migrations.

📋 Changelog | 🐛 Report Issue | 💬 Discussions

🚨 Breaking Change Notice

v2.3.0 - Enhanced RAG System

New chunk-based architecture with adjacent retrieval for better code understanding.

  • Database Schema: New vezlo_knowledge_chunks table and RPC functions
  • Embedding Model: Upgraded to text-embedding-3-large (3072 dimensions)
  • Migration: Automatic via npm run migrate:latest (migration 006)
  • Rollback: Supported via npm run migrate:rollback

Upgrade Steps:

npm install @vezlo/assistant-server@latest
npm run migrate:latest

v2.0.0 - Multi-tenancy Support

Introduced multi-tenancy with authentication. Existing data is not auto-migrated.

See CHANGELOG.md for complete migration guide.


🏗️ Architecture

  • Backend APIs - RESTful API endpoints for AI chat and knowledge management
  • AI Response Validation - LLM-as-Judge validation with developer/user modes via @vezlo/ai-validator
  • Real-time Communication - WebSocket support for live chat with Supabase Realtime broadcasting
  • Human Agent Handoff - Agent join/leave workflows with realtime status updates and message synchronization
  • Advanced RAG System - Chunk-based semantic search with adjacent retrieval using OpenAI text-embedding-3-large (3072 dims) and pgvector
  • Conversation Management - Persistent conversation history with agent support
  • Database Tools - Connect external Supabase databases for natural language data queries (see docs)
  • Slack Integration - Direct query bot with full AI responses, conversation history, and reaction-based feedback (setup guide)
  • Feedback System - Message rating and improvement tracking
  • Database Migrations - Knex.js migration system for schema management
  • Production Ready - Docker containerization with health checks

📦 Installation

Option 1: Install from npm (Recommended)

# Install globally
npm install -g @vezlo/assistant-server

# Or install in your project
npm install @vezlo/assistant-server

Option 2: Clone from GitHub

git clone https://github.com/vezlo/assistant-server.git
cd assistant-server
npm install

🏪 Vercel Marketplace Integration

🚀 Recommended for Vercel Users - Deploy with automated setup:

Install on Vercel

The Vercel Marketplace integration provides:

  • Guided Configuration - Step-by-step setup wizard
  • Automatic Environment Setup - No manual configuration needed
  • Database Migration - Automatic table creation
  • Production Optimization - Optimized for Vercel's serverless platform

Learn more about the marketplace integration →

🚀 Quick Start (Interactive Setup)

Prerequisites

  • Node.js 20+ and npm 9+
  • Supabase project
  • OpenAI API key

Easy Setup with Interactive Wizard

The fastest way to get started is with our interactive setup wizard:

# If installed globally
vezlo-setup

# If installed locally
npx vezlo-setup

# Or if cloned from GitHub
npm run setup

The wizard will guide you through:

  1. Supabase Configuration - URL, Service Role Key, DB host/port/name/user/password (with defaults)
  2. OpenAI Configuration - API key, model, temperature, max tokens
  3. Validation (non‑blocking) - Tests Supabase API and DB connectivity
  4. Migrations - Runs Knex migrations if DB validation passes; otherwise shows how to run later
  5. Environment - Generates .env (does not overwrite if it already exists)
  6. Default Data Seeding - Creates default admin user and company
  7. API Key Generation - Generates API key for the default company

After setup completes, start the server:

vezlo-server

Manual Setup (Advanced)

If you prefer manual configuration:

1. Create Environment File

# Copy example file
cp env.example .env

# Edit with your credentials
nano .env

2. Configure Database

Get your Supabase credentials from:

  • Dashboard → Settings → API
  • Database → Settings → Connection string

# Supabase Configuration
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-role-key

# Database Configuration for Migrations
SUPABASE_DB_HOST=db.your-project.supabase.co
SUPABASE_DB_PORT=5432
SUPABASE_DB_NAME=postgres
SUPABASE_DB_USER=postgres
SUPABASE_DB_PASSWORD=your-database-password

# OpenAI Configuration
OPENAI_API_KEY=sk-your-api-key
AI_MODEL=gpt-4o

# Migration Security
MIGRATION_SECRET_KEY=your-secure-migration-key-here

# AI Response Validation (Optional)
AI_VALIDATION_ENABLED=false

# Developer Mode (Optional)
# true = Strict code grounding for technical queries
# false = User-friendly generic responses
DEVELOPER_MODE=false
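As a minimal sketch of checking these required variables at startup (illustrative only; the server ships its own config loading):

```typescript
// Illustrative startup check for required environment variables.
// The variable names come from the README; the check itself is a sketch.
const REQUIRED = [
  "SUPABASE_URL",
  "SUPABASE_SERVICE_KEY",
  "OPENAI_API_KEY",
  "MIGRATION_SECRET_KEY",
] as const;

// Returns the names of required variables that are unset or empty.
function missingVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => !env[name]);
}

// Example: only SUPABASE_URL is set, so the other three are reported missing.
const missing = missingVars({ SUPABASE_URL: "https://example.supabase.co" });
```

In a real entry point you would call `missingVars(process.env)` and exit with an error listing the missing names before the server starts.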

3. Run Database Migrations (Recommended)

# Using Knex migrations (primary method)
npm run migrate:latest

# Or via API after server is running
curl "http://localhost:3000/api/migrate?key=$MIGRATION_SECRET_KEY"

4. Create Default Admin & Generate API Key

# Create default admin user and company (if not exists)
npm run seed-default

# Seed AI settings for existing companies (optional, auto-created for new companies)
npm run seed-ai-settings

# Generate API key for library integration
npm run generate-key

Optional fallback (not recommended if using migrations):

# Run raw SQL in Supabase Dashboard → SQL Editor
cat database-schema.sql

5. Validate Setup

# Verify database connection and tables
vezlo-validate

# Or with npm
npm run validate

6. Start Server

# If installed globally
vezlo-server

# If installed locally
npx vezlo-server

# Or from source
npm run build && npm start

Docker Setup

  1. Copy the environment template and fill in your Supabase/OpenAI values:
    cp env.example .env
    # edit .env with your credentials before continuing
  2. Build and start the stack:
    docker-compose build
    docker-compose up -d
    The entrypoint runs migrations, seeds the default org/admin, and generates an API key automatically.
  3. View container logs:
    docker-compose logs -f vezlo-server

☁️ Vercel Deployment

Deploy to Vercel's serverless platform with multiple options. The Marketplace integration collects your credentials during configuration and sets environment variables automatically.

Option 1: Vercel Marketplace Integration (Recommended)

🚀 Deploy via Vercel Marketplace - Automated setup with guided configuration:

Install on Vercel

Benefits:

  • Guided Setup - Step-by-step configuration wizard
  • Automatic Environment Variables - No manual env var configuration needed
  • Database Migration - Automatic table creation and schema setup
  • Production Ready - Optimized for Vercel's serverless platform

After Installation:

  1. Run migrations by opening: https://your-project.vercel.app/api/migrate?key=YOUR_MIGRATION_SECRET

  2. Verify deployment: https://your-project.vercel.app/health
  3. Access API docs: https://your-project.vercel.app/docs

Option 2: One-Click Deploy Button

Deploy with Vercel

This will:

  • Fork the repository to your GitHub
  • Create a Vercel project
  • Require marketplace integration setup
  • Deploy automatically

Option 3: Manual Vercel Deploy

# Install Vercel CLI
npm i -g vercel

# Deploy
vercel

# Follow prompts to configure

Prerequisites for Vercel

  1. Supabase project (URL, Service Role key, DB host/port/name/user/password)
  2. OpenAI API key
  3. If not using the Marketplace, add environment variables in Vercel project settings
  4. Disable Vercel Deployment Protection if the API needs to be publicly accessible; otherwise Vercel shows its SSO page and the browser never reaches your server.

See docs/VERCEL_DEPLOYMENT.md for detailed deployment guide.

🔧 Environment Configuration

Edit .env file with your credentials:

# REQUIRED - Supabase Configuration
SUPABASE_URL=https://your-project-id.supabase.co
SUPABASE_SERVICE_KEY=your-service-role-key

# REQUIRED - Database Configuration for Knex.js Migrations
SUPABASE_DB_HOST=db.your-project.supabase.co
SUPABASE_DB_PORT=5432
SUPABASE_DB_NAME=postgres
SUPABASE_DB_USER=postgres
SUPABASE_DB_PASSWORD=your-database-password

# REQUIRED - OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key
AI_MODEL=gpt-4o
AI_TEMPERATURE=0.7
AI_MAX_TOKENS=1000

# REQUIRED - Database Migration Security
MIGRATION_SECRET_KEY=your-secure-migration-key-here

# REQUIRED - Authentication
JWT_SECRET=your-super-secret-jwt-key-here-change-this-in-production
[email protected]
DEFAULT_ADMIN_PASSWORD=admin123

# OPTIONAL - Server Configuration
PORT=3000
NODE_ENV=production
LOG_LEVEL=info

# OPTIONAL - CORS Configuration
CORS_ORIGINS=http://localhost:3000,http://localhost:5173

# OPTIONAL - Swagger Base URL
BASE_URL=http://localhost:3000

# OPTIONAL - Rate Limiting
RATE_LIMIT_WINDOW=60000
RATE_LIMIT_MAX=100

# OPTIONAL - Organization Settings
ORGANIZATION_NAME=Vezlo
ASSISTANT_NAME=Vezlo Assistant

# OPTIONAL - Knowledge Base (uses text-embedding-3-large, 3072 dims)
CHUNK_SIZE=1000
CHUNK_OVERLAP=200
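To illustrate how CHUNK_SIZE and CHUNK_OVERLAP interact, here is a conceptual character-based sketch (the server's actual splitter may work on tokens or sentence boundaries instead):

```typescript
// Conceptual fixed-size chunking with overlap. Each new chunk starts
// (size - overlap) characters after the previous one, so adjacent chunks
// share `overlap` characters. Not the server's actual implementation.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: string[] = [];
  const step = size - overlap; // 800 with the default settings
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

// A 2,600-character document yields 3 chunks starting at 0, 800, and 1600.
const doc = "x".repeat(2600);
const chunks = chunkText(doc);
```

With the defaults, each chunk's final 200 characters reappear at the start of the next chunk, which is what lets adjacent retrieval stitch context back together.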

🔧 CLI Commands

The package provides these command-line tools:

vezlo-setup

Interactive setup wizard that guides you through configuration.

vezlo-setup

vezlo-seed-default

Creates default admin user and company.

vezlo-seed-default

vezlo-seed-ai-settings

Seeds or updates AI settings for all existing companies with default values.

vezlo-seed-ai-settings

vezlo-generate-key

Generates an API key for the default admin's company. The key is used by the src-to-kb library.

vezlo-generate-key

vezlo-validate

Validates database connection and verifies all tables exist.

vezlo-validate

vezlo-server

Starts the API server.

vezlo-server

📚 API Documentation

Base URL

http://localhost:3000/api

Interactive Documentation

  • Swagger UI: http://localhost:3000/docs
  • Health Check: http://localhost:3000/health

Core Endpoints

Conversations

  • POST /api/conversations - Create new conversation (public widget endpoint)
  • GET /api/conversations - List company conversations (agent dashboard)
  • GET /api/conversations/:uuid - Get conversation with messages
  • DELETE /api/conversations/:uuid - Delete conversation
  • POST /api/conversations/:uuid/join - Agent joins a conversation
  • POST /api/conversations/:uuid/messages/agent - Agent sends a message
  • POST /api/conversations/:uuid/close - Agent closes a conversation

Messages

  • POST /api/conversations/:uuid/messages - Create user message
  • POST /api/messages/:uuid/generate - Generate AI response

Knowledge Base

  • POST /api/knowledge/items - Create knowledge item (supports raw content, pre-chunked data, or chunks with embeddings)
  • GET /api/knowledge/items - List knowledge items
  • GET /api/knowledge/items/:uuid - Get knowledge item
  • PUT /api/knowledge/items/:uuid - Update knowledge item
  • DELETE /api/knowledge/items/:uuid - Delete knowledge item

Knowledge Ingestion Options:

  • Raw Content: Send content field, server creates chunks and embeddings
  • Pre-chunked: Send chunks array with hasEmbeddings: false, server generates embeddings
  • Chunks + Embeddings: Send chunks array with embeddings and hasEmbeddings: true, server stores directly
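The three ingestion shapes can be sketched as request bodies. The `content`, `chunks`, and `hasEmbeddings` fields come from the list above; the `title` field and the exact chunk shape are assumptions for illustration:

```typescript
// Illustrative request bodies for POST /api/knowledge/items.
// `content`, `chunks`, and `hasEmbeddings` are from the README;
// `title` and the chunk object shape are assumed for this sketch.

// 1. Raw content: the server creates chunks and embeddings.
const rawPayload = {
  title: "API guide", // assumed field
  content: "Full document text; the server chunks and embeds it.",
};

// 2. Pre-chunked: the server only generates embeddings.
const preChunkedPayload = {
  title: "API guide",
  hasEmbeddings: false,
  chunks: [{ content: "First chunk text" }, { content: "Second chunk text" }],
};

// 3. Chunks with embeddings: stored directly. text-embedding-3-large
// vectors are 3072-dimensional.
const embeddedPayload = {
  title: "API guide",
  hasEmbeddings: true,
  chunks: [
    { content: "First chunk text", embedding: new Array(3072).fill(0) },
  ],
};
```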

Database Migrations

  • GET /api/migrate?key=<secret> - Run pending database migrations
  • GET /api/migrate/status?key=<secret> - Check migration status

Migration Workflow:

  1. Create Migration: Use npm run migrate:make migration_name to create new migration files
  2. Check Status: Use /api/migrate/status to see pending migrations
  3. Run Migrations: Use /api/migrate to execute pending migrations remotely

Migration Endpoints Usage:

# Check migration status
curl "http://localhost:3000/api/migrate/status?key=your-migration-secret-key"

# Run pending migrations
curl "http://localhost:3000/api/migrate?key=your-migration-secret-key"

Required Environment Variable:

  • MIGRATION_SECRET_KEY - Secret key for authenticating migration requests

Migration Creation Example:

# Create a new migration
npm run migrate:make add_users_table

# This creates: src/migrations/002_add_users_table.ts
# Edit the file to add your schema changes
# Then run via endpoint or command line

Knowledge Search

  • POST /api/knowledge/search - Search knowledge base

Feedback

  • POST /api/feedback - Submit message feedback (Public API)
  • DELETE /api/feedback/:uuid - Delete/undo message feedback (Public API)

API Keys (Admin Only)

  • POST /api/api-keys - Generate or update company API key
  • GET /api/api-keys/status - Check if API key exists for company

Team Management (Admin Only)

  • POST /api/companies/:companyUuid/team - Create team member
  • GET /api/companies/:companyUuid/team - List team members (with pagination and search)
  • PUT /api/companies/:companyUuid/team/:userUuid - Update team member (name, role, status, password)
  • DELETE /api/companies/:companyUuid/team/:userUuid - Remove team member

Account Settings

  • GET /api/account/profile - Get current user's profile
  • PUT /api/account/profile - Update current user's name and password

WebSocket Events

  • join-conversation - Join conversation room
  • conversation:message - Real-time message updates

💬 Conversation 2-API Flow

The conversation system follows the industry-standard 2-API flow pattern for AI chat applications:

1. Create User Message

POST /api/conversations/{conversation-uuid}/messages

Purpose: Store the user's message in the conversation
Response: Returns the user message with its UUID

2. Generate AI Response

POST /api/messages/{message-uuid}/generate

Purpose: Generate the AI response based on the user message
Response: Returns the AI assistant's response

Why 2-API Flow?

This pattern is the industry-standard approach because:

Separation of Concerns

  • User message storage is separate from AI generation
  • Allows for message persistence even if AI generation fails
  • Enables message history and conversation management

Reliability & Error Handling

  • User messages are saved immediately
  • AI generation can be retried independently
  • Partial failures don't lose user input

Scalability

  • AI generation can be queued/processed asynchronously
  • Different rate limits for storage vs generation
  • Enables streaming responses and real-time updates

Industry Standard

  • Used by OpenAI, Anthropic, Google, and other major AI platforms
  • Familiar pattern for developers
  • Enables advanced features like message regeneration, threading, and branching

Example Flow:

# 1. User sends message
curl -X POST /api/conversations/abc123/messages \
  -d '{"content": "How do I integrate your API?"}'
# Response: {"uuid": "msg456", "content": "How do I integrate your API?", ...}

# 2. Generate AI response
curl -X POST /api/messages/msg456/generate \
  -d '{}'
# Response: {"uuid": "msg789", "content": "To integrate our API...", ...}

🗄️ Database Setup

Option A: Run Migrations (Recommended)

Use the built‑in migration endpoints to create/upgrade tables:

# Run pending migrations
curl "http://localhost:3000/api/migrate?key=your-migration-secret-key"

# Check migration status
curl "http://localhost:3000/api/migrate/status?key=your-migration-secret-key"

These endpoints execute Knex migrations and keep schema versioned.

Option B: Manual SQL (Fallback)

If you prefer manual setup, run the SQL schema in Supabase SQL Editor:

# View the schema SQL locally
cat database-schema.sql

# Copy into Supabase Dashboard → SQL Editor and execute

The database-schema.sql contains all required tables and functions.

🐳 Docker Commands

# Start services
docker-compose up -d

# View logs
docker-compose logs -f vezlo-server

# Stop services
docker-compose down

# Rebuild and start
docker-compose up -d --build

# View running containers
docker-compose ps

# Access container shell
docker exec -it vezlo-server sh

🧪 Testing the API

Health Check

curl http://localhost:3000/health

Complete Conversation Flow

# 1. Create conversation
CONV_UUID=$(curl -X POST http://localhost:3000/api/conversations \
  -H "Content-Type: application/json" \
  -d '{"title": "Test Conversation", "user_uuid": 12345, "company_uuid": 67890}' \
  | jq -r '.uuid')

# 2. Send user message
MSG_UUID=$(curl -X POST http://localhost:3000/api/conversations/$CONV_UUID/messages \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, how can you help me?"}' \
  | jq -r '.uuid')

# 3. Generate AI response
curl -X POST http://localhost:3000/api/messages/$MSG_UUID/generate \
  -H "Content-Type: application/json" \
  -d '{}'

Search Knowledge Base

curl -X POST http://localhost:3000/api/knowledge/search \
  -H "Content-Type: application/json" \
  -d '{
    "query": "How to use the API?",
    "limit": 5,
    "threshold": 0.7,
    "type": "hybrid"
  }'
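Conceptually, `threshold` drops results whose similarity score is too low and `limit` caps how many survive. A sketch of that idea (not the server's actual ranking code):

```typescript
// Conceptual post-filter for search results: keep hits whose similarity
// score meets the threshold, then return the top `limit` by score.
// The server's real ranking happens in the database via pgvector.
interface SearchHit {
  uuid: string;
  score: number; // similarity score in [0, 1]
}

function filterHits(hits: SearchHit[], threshold = 0.7, limit = 5): SearchHit[] {
  return hits
    .filter((h) => h.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}

const hits = [
  { uuid: "a", score: 0.91 },
  { uuid: "b", score: 0.64 }, // below the 0.7 threshold, dropped
  { uuid: "c", score: 0.78 },
];
const top = filterHits(hits);
```

Raising `threshold` trades recall for precision; with `type: "hybrid"` the score also blends keyword relevance, so the same cutoff can behave differently than in pure semantic mode.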

🔧 Development

Local Development Setup

# Install dependencies
npm install

# Build TypeScript
npm run build

# Start server (Node)
npm start

# Or start via CLI wrapper
npx vezlo-server

# Run tests
npm test

Project Structure

vezlo/
├── docs/                # Documentation
│   ├── DEVELOPER_GUIDELINES.md
│   └── MIGRATIONS.md
├── src/
│   ├── config/          # Configuration files
│   ├── controllers/     # API route handlers
│   ├── middleware/      # Express middleware
│   ├── schemas/         # API request/response schemas
│   ├── services/        # Business logic services
│   ├── storage/         # Database repositories
│   ├── types/           # TypeScript type definitions
│   ├── migrations/      # Database migrations
│   └── server.ts        # Main application entry
├── scripts/             # Utility scripts
├── Dockerfile           # Production container
├── docker-compose.yml   # Docker Compose configuration
├── knexfile.ts          # Database configuration
├── env.example          # Environment template
├── package.json         # Dependencies and scripts
└── tsconfig.json        # TypeScript configuration

🚀 Production Deployment

Environment Variables

Ensure all required environment variables are set:

  • SUPABASE_URL and SUPABASE_SERVICE_KEY (required)
  • SUPABASE_DB_HOST, SUPABASE_DB_PASSWORD (required for migrations)
  • OPENAI_API_KEY (required)
  • MIGRATION_SECRET_KEY (required for migration endpoints)
  • JWT_SECRET (required for authentication)
  • DEFAULT_ADMIN_EMAIL and DEFAULT_ADMIN_PASSWORD (required for initial setup)
  • NODE_ENV=production
  • CORS_ORIGINS (set to your domain)
  • BASE_URL (optional, for custom Swagger server URL)

Docker Production

# Build production image
docker build -t vezlo-server .

# Run production container
docker run -d \
  --name vezlo-server \
  -p 3000:3000 \
  --env-file .env \
  vezlo-server

Health Monitoring

  • Health check endpoint: /health
  • Docker health check configured
  • Logs available in ./logs/ directory

Database Migrations in Production

# Check migration status
curl "https://your-domain.com/api/migrate/status?key=your-migration-secret-key"

# Run pending migrations
curl "https://your-domain.com/api/migrate?key=your-migration-secret-key"

🤝 Contributing

Development Workflow

  1. Fork the repository
  2. Create feature branch: git checkout -b feature/new-feature
  3. Make changes and test locally
  4. Run tests: npm test
  5. Commit: git commit -m 'Add new feature'
  6. Push: git push origin feature/new-feature
  7. Submit pull request

Code Standards

  • TypeScript - Full type safety required
  • ESLint - Code formatting and quality
  • Prettier - Consistent code style
  • Tests - Unit tests for new features
  • Documentation - Update README for API changes

API Development

  • Follow RESTful conventions
  • Use proper HTTP status codes
  • Include comprehensive error handling
  • Update Swagger documentation
  • Add request/response schemas

📊 Performance & Security

Performance

  • Response Time: Optimized for fast API responses
  • Concurrent Users: Supports multiple concurrent users
  • Memory Usage: Efficient memory management
  • Database: Supabase vector operations integration

Security Features

  • Rate Limiting - Configurable request limits
  • CORS Protection - Configurable origins
  • Input Validation - Request schema validation
  • Error Handling - Secure error responses
  • Health Monitoring - Application logs and Docker health checks
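The rate-limit settings (`RATE_LIMIT_WINDOW=60000`, `RATE_LIMIT_MAX=100`) describe a per-client request budget per time window. A minimal fixed-window sketch of the idea (the server's actual middleware may differ, e.g. by using a sliding window):

```typescript
// Minimal fixed-window rate limiter sketch: allow at most `max` requests
// per `windowMs` per client. Illustrative only; not the server's middleware.
function makeLimiter(windowMs = 60_000, max = 100) {
  const windows = new Map<string, { start: number; count: number }>();
  return (clientId: string, now: number): boolean => {
    const w = windows.get(clientId);
    if (!w || now - w.start >= windowMs) {
      windows.set(clientId, { start: now, count: 1 }); // open a new window
      return true;
    }
    w.count += 1;
    return w.count <= max; // reject once the window's budget is spent
  };
}

const allow = makeLimiter(60_000, 3); // small max to keep the demo short
```

With the production defaults a client gets 100 requests per 60-second window; the fourth call below is the first over this demo's budget of 3.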

📚 Documentation

📄 License

This project is dual-licensed:

  • Non-Commercial Use: Free under AGPL-3.0 license
  • Commercial Use: Requires a commercial license - contact us for details

Status: ✅ Production Ready | Version: 2.13.0 | Node.js: 20+ | TypeScript: 5+