npm package discovery and stats viewer.

Discover Tips

  • General search

    [free text search, go nuts!]

  • Package details

    pkg:[package-name]

  • User packages

    @[username]

Sponsor

Optimize Toolset

I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset.

About

Hi, 👋, I’m Ryan Hefner  and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff.

As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have.

If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub.

I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish.

Open Software & Tools

This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏

© 2026 – Pkg Stats / Ryan Hefner

azurecq

v1.1.1

Published

Azure Cheap Queues is a High-performance queue system based on Redis and Azure Storage with message durability guarantees

Readme

AzureCQ : Azure Cheap Queues

A high-performance, cost-effective queue system that combines Redis for hot caching with Azure Storage for durability. Built as a performant and cheaper alternative to Azure Service Bus.

🚀 Features

  • Hybrid Architecture: Redis for hot message cache + Azure Storage for persistence
  • High Performance: Redis caching and batch operations for optimal throughput, gaining between x10 and x20 performance compared to Azure Storage Queue alone
  • Large Message Support: Automatic handling of messages >64KB via Azure Blob Storage
  • Cost Effective: Around 80% cheaper than Azure Service Bus
  • Batch Operations: Efficient batch enqueue/dequeue with individual acknowledgments
  • Dead Letter Queue: Automatic retry logic with configurable DLQ for failed messages
  • Message Recovery: Manual and batch operations to move messages between queues
  • Connection Resilience: Automatic reconnection with exponential backoff for Redis and Azure
  • Health Monitoring: Built-in health checks and connection status monitoring
  • Queue Management: Create, list, and delete queues programmatically

📦 Installation

npm install azurecq

🏗️ Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────────┐
│   Application   │    │     AzureCQ      │    │   Azure Storage     │
│                 │◄──►│                  │◄──►│                     │
└─────────────────┘    │  ┌─────────────┐ │    │ ┌─────────────────┐ │
                       │  │    Redis    │ │    │ │ Queue Storage   │ │
                       │  │ (Hot Cache) │ │    │ │ (Persistence)   │ │
                       │  └─────────────┘ │    │ └─────────────────┘ │
                       │                  │    │ ┌─────────────────┐ │
                       │                  │    │ │ Blob Storage    │ │
                       │                  │    │ │ (Large Messages)│ │
                       └──────────────────┘    │ └─────────────────┘ │
                                               └─────────────────────┘

🚀 Quick Start

Basic Setup

import { AzureCQ, QueueConfiguration } from 'azurecq';

const config: QueueConfiguration = {
  name: 'my-queue',
  redis: {
    host: 'localhost',
    port: 6379,
    password: 'your-redis-password', // optional
    db: 0, // optional
    keyPrefix: 'azurecq:' // optional
  },
  azure: {
    connectionString: 'DefaultEndpointsProtocol=https;AccountName=...',
    queueName: 'my-azure-queue',
    containerName: 'my-blob-container'
  },
  settings: {
    maxInlineMessageSize: 64 * 1024, // 64KB
    redisCacheTtl: 3600, // 1 hour
    batchSize: 32,
    retry: {
      maxAttempts: 3,
      backoffMs: 1000
    },
    deadLetter: {
      enabled: true,
      maxDeliveryAttempts: 3,
      queueSuffix: '-dlq',
      messageTtl: 7 * 24 * 3600 // 7 days
    }
  }
};

const queue = new AzureCQ(config);
await queue.initialize();

Single Message Operations

// Enqueue a message
const message = await queue.enqueue('Hello, World!', {
  metadata: { userId: '123', type: 'greeting' },
  visibilityTimeout: 30, // seconds
  timeToLive: 3600 // seconds
});

console.log('Enqueued message:', message.id);

// Dequeue a message
const receivedMessage = await queue.dequeue({
  visibilityTimeout: 30,
  maxMessages: 1
});

if (receivedMessage) {
  console.log('Received:', receivedMessage.content.toString());
  
  // Process the message...
  
  // Acknowledge when done
  const ackResult = await queue.acknowledge(receivedMessage);
  if (ackResult.success) {
    console.log('Message acknowledged successfully');
  }
}

Batch Operations

// Enqueue multiple messages
const batch = await queue.enqueueBatch([
  { 
    content: 'Message 1', 
    options: { metadata: { priority: 'high' } } 
  },
  { 
    content: 'Message 2', 
    options: { metadata: { priority: 'low' } } 
  },
  { 
    content: Buffer.from('Binary message'), 
    options: { visibilityTimeout: 60 } 
  }
]);

console.log(`Enqueued ${batch.count} messages`);

// Dequeue multiple messages
const receivedBatch = await queue.dequeueBatch({
  maxMessages: 10,
  visibilityTimeout: 30
});

console.log(`Received ${receivedBatch.count} messages`);

// Process messages...
const processedMessages = [];
for (const msg of receivedBatch.messages) {
  // Your processing logic here
  console.log('Processing:', msg.id);
  processedMessages.push(msg);
}

// Acknowledge all processed messages
const batchAckResult = await queue.acknowledgeBatch(processedMessages);
console.log(`Acknowledged: ${batchAckResult.successCount}/${processedMessages.length}`);

Large Message Handling

AzureCQ automatically handles large messages (>64KB) by storing them in Azure Blob Storage:

const largeMessage = Buffer.alloc(100 * 1024, 'Large data content'); // 100KB

// This will automatically use blob storage
const message = await queue.enqueue(largeMessage, {
  metadata: { type: 'large-file', size: largeMessage.length }
});

// Dequeue works the same way - content is automatically retrieved from blob
const received = await queue.dequeue();
console.log('Large message size:', received?.content.length);

// Acknowledge the message
await queue.acknowledge(received);

Queue Management

import { QueueManager } from 'azurecq';

const manager = new QueueManager('your-azure-storage-connection-string');

// Create a new queue
await manager.createQueue('new-queue');

// List all queues
const queues = await manager.listQueues();
console.log('Available queues:', queues);

// Delete a queue
await manager.deleteQueue('old-queue');

Dead Letter Queue (DLQ) Operations

AzureCQ provides comprehensive dead letter queue support for handling message failures and ensuring reliable processing:

// Automatic retry and DLQ with enhanced dequeue
const { message, processor } = await queue.dequeueWithRetry();

if (message) {
  await processor(async () => {
    // Your processing logic here
    await processOrder(message.content);
    
    // If this throws, the message will automatically retry
    // or move to DLQ based on configuration
  });
}

Manual Message Operations

// Negative acknowledgment (NACK) - triggers retry or DLQ
const nackResult = await queue.nack(message, {
  reason: 'Payment gateway timeout',
  retryDelaySeconds: 60,  // Custom retry delay
  forceDlq: false         // Don't force to DLQ yet
});

// Manually move message to DLQ
const moveResult = await queue.moveToDeadLetter(message, 'Invalid data format');

// Move message back from DLQ to main queue
const recoveryResult = await queue.moveFromDeadLetter(messageId);

// Batch operations
const batchMoveResult = await queue.moveToDeadLetterBatch([
  { message: msg1, reason: 'Processing timeout' },
  { message: msg2, reason: 'Validation failed' }
]);

const batchRecoveryResult = await queue.moveFromDeadLetterBatch([
  'message-id-1', 'message-id-2'
]);

DLQ Monitoring and Management

// Get DLQ information
const dlqInfo = await queue.getDeadLetterInfo();
console.log(`DLQ: ${dlqInfo.messageCount} messages`);
console.log(`Oldest: ${dlqInfo.oldestMessage}`);
console.log(`Newest: ${dlqInfo.newestMessage}`);

// Purge all messages from DLQ
const purgedCount = await queue.purgeDeadLetter();
console.log(`Purged ${purgedCount} messages from DLQ`);

Message Acknowledgment & Timeouts

AzureCQ implements a robust acknowledgment system with automatic timeout handling to prevent message loss:

// Dequeue with visibility timeout
const message = await queue.dequeue({
  visibilityTimeout: 30 // Message invisible for 30 seconds
});

if (message) {
  try {
    // Process the message within the timeout window
    await processMessage(message.content);
    
    // Acknowledge successful processing
    await queue.acknowledge(message);
    console.log('Message processed and acknowledged');
    
  } catch (error) {
    // Handle processing failure
    await queue.nack(message, {
      reason: 'Processing failed',
      retryDelaySeconds: 60
    });
  }
}

// Enhanced pattern with automatic timeout handling
const { message, processor } = await queue.dequeueWithRetry({
  visibilityTimeout: 60 // Give enough time for processing
});

if (message) {
  await processor(async () => {
    // Your processing logic here
    await processOrder(message.content);
    // Automatic acknowledgment on success, retry/DLQ on failure
  });
}

What Happens When Messages Aren't Acknowledged?

  1. Visibility Timeout Expiry: If a message isn't acknowledged within the visibilityTimeout, it automatically becomes visible in the queue again
  2. Dequeue Count Increment: Each time a message is dequeued, its dequeueCount is incremented
  3. Automatic Retry: Messages can be retried up to the configured maxDeliveryAttempts
  4. Dead Letter Queue: After max attempts, messages move to DLQ for manual investigation
// Example timeline:
// T+0s:   Message dequeued (dequeueCount: 1, invisible for 30s)
// T+35s:  Timeout expires, message becomes visible again
// T+40s:  Message dequeued again (dequeueCount: 2, invisible for 30s)
// T+75s:  Timeout expires again, message visible
// T+80s:  Message dequeued third time (dequeueCount: 3)
// T+115s: After 3rd failure, message moves to DLQ

Best Practices for Timeouts

  • Set appropriate timeouts: Use 2-3x your expected processing time
  • Monitor unacknowledged messages: Track invisibleMessageCount in queue stats
  • Handle acknowledgment failures: Always check AcknowledgmentResult.success
  • Use enhanced dequeue: dequeueWithRetry() handles timeouts automatically

Connection Resilience & Monitoring

AzureCQ provides robust connection handling with automatic reconnection and comprehensive monitoring:

// Health monitoring
const health = await queue.healthCheck();
console.log('System Health:', {
  overall: health.overall,
  redis: health.redis,
  azure: health.azure,
  details: health.details
});

// Connection status monitoring (Redis)
const redisStatus = queue.redis.getConnectionStatus();
console.log('Redis Status:', {
  connected: redisStatus.isConnected,
  connecting: redisStatus.isConnecting,
  shouldReconnect: redisStatus.shouldReconnect,
  healthCheckActive: redisStatus.isHealthCheckActive
});

// Optimized reconnection handling
// Redis maintains persistent connections and auto-reconnects efficiently
// No expensive connection recreation during temporary failures
// Operations gracefully degrade when connections are unavailable

// Example: Operations continue working even with Redis down
const message = await queue.enqueue('Important message');
// ^ This persists to Azure Storage even if Redis is unavailable

const messages = await queue.dequeueBatch({ maxMessages: 5 });
// ^ Falls back to Azure Storage if Redis cache is unavailable

Connection Resilience Features

  • Persistent Connections: Redis connections stay open and are reused across operations
  • Built-in Reconnection: Leverages Redis's native reconnection with minimal overhead
  • No Connection Recreation: Avoids expensive connection creation during transient failures
  • Graceful Degradation: Operations continue with reduced performance when cache is unavailable
  • Health Monitoring: Real-time connection status and health checks
  • Production Ready: Comprehensive error handling and logging

Health Monitoring

// Check system health
const health = await queue.healthCheck();
console.log('System health:', health);

// Get queue statistics
const stats = await queue.getStats();
console.log('Queue stats:', {
  messageCount: stats.messageCount,
  name: stats.name
});

🔧 Configuration Options

QueueConfiguration

| Property | Type | Description | |----------|------|-------------| | name | string | Queue identifier | | redis | RedisConfig | Redis connection settings | | azure | AzureConfig | Azure Storage settings | | settings | QueueSettings | Performance and behavior settings |

RedisConfig

| Property | Type | Default | Description | |----------|------|---------|-------------| | host | string | - | Redis server hostname | | port | number | - | Redis server port | | password | string? | - | Redis password (optional) | | db | number? | 0 | Redis database number | | keyPrefix | string? | 'azurecq:' | Key prefix for Redis keys |

AzureConfig

| Property | Type | Description | |----------|------|-------------| | connectionString | string | Azure Storage connection string | | queueName | string | Azure Storage Queue name | | containerName | string | Azure Blob Storage container name |

QueueSettings

| Property | Type | Default | Description | |----------|------|---------|-------------| | maxInlineMessageSize | number | 65536 | Max message size before using blob storage (bytes) | | redisCacheTtl | number | 3600 | Redis cache TTL in seconds | | batchSize | number | 32 | Maximum batch size for operations | | retry.maxAttempts | number | 3 | Maximum retry attempts | | retry.backoffMs | number | 1000 | Retry backoff in milliseconds | | deadLetter.enabled | boolean | false | Enable dead letter queue functionality | | deadLetter.maxDeliveryAttempts | number | 3 | Max attempts before moving to DLQ | | deadLetter.queueSuffix | string | '-dlq' | Suffix for DLQ name | | deadLetter.messageTtl | number | 604800 | DLQ message TTL in seconds (7 days) |

⚡ Performance Characteristics

Standard Performance

  • Enqueue: 1000+ messages/second with batching
  • Dequeue: 800+ messages/second with hot cache hits
  • Hot Cache Hit: Sub-millisecond response times
  • Cold Storage: 10-50ms response times

Enhanced Performance Optimizations 🚀

AzureCQ includes advanced performance optimizations for high-throughput scenarios:

  • Binary Serialization: 40% faster than JSON, 30% smaller payload
  • Atomic Operations: Lua scripts for zero-latency batch operations
  • Object Pooling: Reduced garbage collection pressure
  • Concurrent Processing: 4x improvement with parallel workers
  • Optimized Configurations: Tuned presets for different workloads

Performance Profiles

// High-throughput configuration
const config = {
  // ... other settings
  redis: {
    ...PerformancePresets.HIGH_THROUGHPUT.redis,
    performanceProfile: 'HIGH_THROUGHPUT'
  },
  settings: {
    batchSize: 64,        // Larger batches
    redisCacheTtl: 1800,  // 30 minutes
  }
};

// Low-latency configuration  
const config = {
  // ... other settings
  redis: {
    ...PerformancePresets.LOW_LATENCY.redis,
    performanceProfile: 'LOW_LATENCY'
  },
  settings: {
    batchSize: 16,        // Smaller batches
    redisCacheTtl: 300,   // 5 minutes
  }
};

Benchmark Results

Run comprehensive performance tests:

cd examples
npm run perf-advanced  # Advanced performance demonstrations

Latency

  • Hot Cache Hit: <5ms
  • Azure Storage Fallback: 20-50ms
  • Large Message (Blob): 50-200ms

Cost Comparison

Assuming 1M messages/month:

  • Azure Service Bus Standard: ~$10/month
  • AzureCQ: ~$2/month (Redis + Storage costs)

🛡️ Error Handling

import { AzureCQError, ErrorCodes } from 'azurecq';

try {
  await queue.enqueue('test message');
} catch (error) {
  if (error instanceof AzureCQError) {
    switch (error.code) {
      case ErrorCodes.REDIS_CONNECTION_ERROR:
        console.log('Redis is down, but messages will still persist');
        break;
      case ErrorCodes.AZURE_STORAGE_ERROR:
        console.log('Azure Storage issue:', error.message);
        break;
      case ErrorCodes.MESSAGE_TOO_LARGE:
        console.log('Message exceeds maximum size');
        break;
      case ErrorCodes.DEAD_LETTER_QUEUE_ERROR:
        console.log('Dead letter queue operation failed:', error.message);
        break;
      case ErrorCodes.MESSAGE_MOVE_FAILED:
        console.log('Failed to move message between queues:', error.message);
        break;
      case ErrorCodes.REDIS_CONNECTION_ERROR:
        console.log('Redis connection issue - operations will fallback to Azure Storage:', error.message);
        break;
      default:
        console.log('Unknown error:', error.message);
    }
  }
}

🧪 Testing

# Run tests
npm test

# Run with coverage
npm test -- --coverage

📝 Best Practices

1. Dead Letter Queue Strategy

// Good: Configure appropriate DLQ settings for your use case
const config: QueueConfiguration = {
  // ... other config
  settings: {
    deadLetter: {
      enabled: true,
      maxDeliveryAttempts: 5,    // More attempts for transient errors
      queueSuffix: '-failed',   // Clear naming
      messageTtl: 30 * 24 * 3600 // 30 days for investigation
    }
  }
};

// Good: Handle different error types appropriately
const { message, processor } = await queue.dequeueWithRetry();
await processor(async () => {
  try {
    await processMessage(message);
  } catch (error) {
    if (error.code === 'VALIDATION_ERROR') {
      // Permanent error - move to DLQ immediately
      throw new Error(`Validation failed: ${error.message}`);
    } else if (error.code === 'TEMPORARY_SERVICE_ERROR') {
      // Will retry automatically with exponential backoff
      throw error;
    }
  }
});

// Good: Monitor and alert on DLQ growth
const dlqInfo = await queue.getDeadLetterInfo();
if (dlqInfo.messageCount > 100) {
  await sendAlert(`DLQ has ${dlqInfo.messageCount} messages - investigation required`);
}

2. Connection Resilience Strategy

// Good: Monitor connection health in production
setInterval(async () => {
  const health = await queue.healthCheck();
  if (!health.overall) {
    logger.warn('Queue system health check failed', health);
    // Implement alerting logic
  }
}, 30000); // Check every 30 seconds

// Good: Handle Redis disconnections gracefully
try {
  const result = await queue.enqueue(message);
  // Operation succeeded
} catch (error) {
  if (error.code === ErrorCodes.REDIS_CONNECTION_ERROR) {
    // Redis is down but message was still persisted to Azure Storage
    logger.warn('Redis unavailable, but message was stored successfully');
  } else {
    // Handle other types of errors
    throw error;
  }
}

// Good: Configure appropriate timeouts and retries
const config = {
  // ... other config
  settings: {
    retry: {
      maxAttempts: 5,        // More retries for production
      backoffMs: 2000        // Longer backoff for stability
    }
  }
};

3. Message Design

// Good: Structured messages with metadata
await queue.enqueue(JSON.stringify({ 
  action: 'process-order',
  orderId: '12345',
  timestamp: new Date().toISOString()
}), {
  metadata: { 
    type: 'order-processing',
    priority: 'high',
    source: 'web-app'
  }
});

2. Batch Processing

// Good: Process messages in batches for better throughput
const batch = await queue.dequeueBatch({ maxMessages: 10 });
const processedMessages = [];

for (const message of batch.messages) {
  try {
    await processMessage(message);
    processedMessages.push(message);
  } catch (error) {
    console.error('Failed to process message:', message.id, error);
    // Don't add to processedMessages to avoid acknowledgment
  }
}

// Only acknowledge successfully processed messages
if (processedMessages.length > 0) {
  await queue.acknowledgeBatch(processedMessages);
}

3. Error Handling and Retries

// Good: Implement exponential backoff for retries
async function processWithRetry(message: QueueMessage, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      await processMessage(message);
      return await queue.acknowledge(message);
    } catch (error) {
      if (attempt === maxRetries) {
        console.error('Max retries exceeded for message:', message.id);
        // Handle poison message (move to dead letter queue, etc.)
        return;
      }
      
      const backoffMs = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, backoffMs));
    }
  }
}

4. Resource Management

// Good: Proper cleanup
const queue = new AzureCQ(config);

process.on('SIGINT', async () => {
  console.log('Shutting down gracefully...');
  await queue.shutdown();
  process.exit(0);
});

try {
  await queue.initialize();
  // Your application logic here
} catch (error) {
  console.error('Failed to start:', error);
  await queue.shutdown();
  process.exit(1);
}

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙋‍♂️ Support


AzureCQ - Built for performance, designed for scale, optimized for cost.