# JudgeLib
JudgeLib is a scalable, Redis-backed Node.js library for secure and efficient online code execution. Designed for educational and competitive programming platforms, JudgeLib provides distributed execution, language support, and isolated process handling out of the box.
## Official Documentation
Make sure the workers are running. You can ping the worker to confirm it is reachable. If you are using the hosted microservice, open its link first to ensure the service is active.
## Installation
Install JudgeLib in your Node.js project:
```bash
npm install lib-judge
```

## Getting Started
### Example (ES Modules)
```js
import { judge } from 'lib-judge';

const result = await judge({
  codePath: '/path/to/temp/file.py',
  ques_name: 'sum of array',
  input: '5 1 2 3 4 5 ### 3 1 2 3 ### 2 1 2',
  output: '15 ### 6 ### 3',
  timeout: 2,    // timeout per test case in seconds
  sizeout: 64,   // max output size in KB
  language: 'py' // language code: 'py', 'cpp', 'java'
});

console.log(result);
```

## Supported Languages
| Language | Version | Extension |
| -------- | ------- | --------- |
| Python | 3.11 | .py |
| Java | 17 | .java |
| C++ | GCC 11 | .cpp |
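The `language` option in the example above uses the short codes `'py'`, `'cpp'`, and `'java'`. If your platform stores submissions as files, a tiny helper like the one below (purely illustrative, not part of JudgeLib) can derive the code from the file extensions listed in the table:

```js
// Illustrative only: map a submission's file extension to the language code
// judge() expects. The pairs come from the Supported Languages table above.
const LANGUAGE_CODES = {
  '.py': 'py',
  '.cpp': 'cpp',
  '.java': 'java',
};

function languageFromPath(codePath) {
  const ext = codePath.slice(codePath.lastIndexOf('.'));
  const code = LANGUAGE_CODES[ext];
  if (!code) throw new Error(`Unsupported file extension: ${ext}`);
  return code;
}

console.log(languageFromPath('/path/to/temp/file.py')); // 'py'
```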
## How It Works
- Each submission is split into multiple test cases.
- Test cases are pushed into a Redis queue.
- Distributed workers poll the queue and process tasks.
- Code is compiled (if needed), executed, and validated securely.
- Results are aggregated and returned.
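To make the first two steps concrete: the `###` delimiter from the Getting Started example separates individual test cases, and batches of those cases become jobs on the Redis queue. The sketch below only mirrors that idea; the actual splitting and queueing logic is internal to JudgeLib.

```js
// Illustrative only: mimic how a '###'-delimited submission could be split
// into individual test cases and grouped into batches for the queue.
function splitTestCases(input, output) {
  const inputs = input.split('###').map((s) => s.trim());
  const outputs = output.split('###').map((s) => s.trim());
  return inputs.map((stdin, i) => ({ stdin, expected: outputs[i] }));
}

function batch(cases, size) {
  const batches = [];
  for (let i = 0; i < cases.length; i += size) {
    batches.push(cases.slice(i, i + size));
  }
  return batches;
}

const cases = splitTestCases(
  '5 1 2 3 4 5 ### 3 1 2 3 ### 2 1 2',
  '15 ### 6 ### 3'
);
console.log(batch(cases, 2)); // two batches: [case1, case2] and [case3]
```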
## Deployment Options
### 1. NPM Library (Required)
Install this library in your app to submit code for evaluation.
- Splits test cases into batches
- Sends to Redis queue
- Retrieves results back
### 2a. Free Microservice (Cloud Hosted)
Use our free hosted service on Render:
- 3 worker instances
- Slow cold starts
- Good for testing
### 2b. Self-Host (Recommended)
Deploy on your infrastructure with Docker & Kubernetes:
- Docker isolation
- Auto-scaling with KEDA
- Production ready
## Quick Comparison

| Feature      | Free Microservice | Self-Host       |
| ------------ | ----------------- | --------------- |
| Security     | Basic             | Docker Isolated |
| Performance  | Slow starts       | Fast            |
| Auto-scaling | Fixed             | ✓ KEDA          |
| Best For     | Testing           | Production      |
Self-hosting gives better security, performance, and auto-scaling. Ideal for production workloads.
## Why Use JudgeLib?
- Batch Processing – Divides large sets of test cases into smaller batches for faster execution.
- Redis Integration – Test cases and execution data are stored and managed in Redis for distributed coordination.
- Worker System – Background workers fetch batched test cases from Redis and execute them in isolated environments.
- Language Agnostic – Use JudgeLib from any programming language or framework via simple HTTP requests (see the sketch after this list).
- Horizontal Scaling – Deploy multiple worker instances behind a load balancer.
- Isolated Environment – Run code execution separately from your main application.
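For example, one way to get the language-agnostic, isolated setup described above is to wrap `judge()` in a small HTTP service of your own. The sketch below uses Express, a `/submissions` route, and a JSON payload purely as illustrative choices; none of them are prescribed by JudgeLib.

```js
import { writeFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import express from 'express';
import { judge } from 'lib-judge';

// Illustrative wrapper: clients in any language POST code as text, the wrapper
// writes it to a temp file and hands the path to judge().
const app = express();
app.use(express.json());

app.post('/submissions', async (req, res) => {
  const { code, ques_name, input, output, language } = req.body;
  try {
    // The extension matches the language code ('py', 'cpp', 'java').
    const codePath = join(tmpdir(), `submission-${Date.now()}.${language}`);
    await writeFile(codePath, code);

    const result = await judge({
      codePath,
      ques_name,
      input,
      output,
      timeout: 2,   // seconds per test case
      sizeout: 64,  // max output size in KB
      language,
    });
    res.json(result);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log('Judge wrapper listening on :3000'));
```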
## Self-Host Prerequisites
- Chocolatey Package Manager – follow the official installation guide (requires admin privileges).
- Kubernetes CLI (kubectl)

```bash
choco install kubernetes-cli
kubectl version --client
```

- Kind (Kubernetes IN Docker)

```bash
choco install kind
kind version
```

- Helm

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
```

- Port Forward to Redis (Optional)

```bash
kubectl port-forward -n judge-namespace redis-0 6379:6379
```

## Self-Host Setup
### Step 1: Install NPM Package
```bash
npm install lib-judge
```

### Step 2: Environment Configuration
Create a .env file:
```env
password_redis=your_redis_password
host_redis=redis-service.judge-namespace.svc.cluster.local
redis_port=6379
```
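These Redis settings are read from environment variables at runtime. If your application does not already load `.env` files, one common approach (an assumption about your app setup, not a JudgeLib requirement) is the `dotenv` package:

```js
// Illustrative: load the Redis variables from .env into process.env before
// using lib-judge. 'dotenv' is an extra dependency, not part of JudgeLib.
import 'dotenv/config';

console.log(process.env.host_redis); // redis-service.judge-namespace.svc.cluster.local
console.log(process.env.redis_port); // 6379
```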
### Step 3: Deploy Worker Container (Optional)

```bash
docker pull lightningsagar/worker:e78c86a716f441816d766f08459ab86ae32f9717
```

### Step 4: Setup Kubernetes Operations
```bash
git clone https://github.com/lightning-sagar/worker-ops
```

Customize the code as needed.
### Step 5: Deploy to Kubernetes
```bash
# Create Cluster
kind create cluster --config ./cluster.yml -n workers-clusters

# Create Namespace
kubectl create namespace judge-namespace

# Deploy Workers
kubectl apply -f judge-workers
```

The HPA configuration automatically scales your worker pods based on CPU usage and request load.
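Once the manifests are applied, a few optional checks confirm that the workers and the autoscaler are up (exact resource names depend on the manifests in the `worker-ops` repo):

```bash
# Check worker pods and the autoscaler in the judge namespace
kubectl get pods -n judge-namespace
kubectl get hpa -n judge-namespace

# Tail logs from a worker pod (replace the placeholder with a pod name from the list above)
kubectl logs -n judge-namespace <worker-pod-name> -f
```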
## Performance & Scaling
The hosted microservice is currently deployed on Render with 3 active workers:
| Metric            | Estimate                  |
| ----------------- | ------------------------- |
| Uptime            | ~98–99%                   |
| Avg Response Time | 0.8–1.5s per test case    |
| Executions/Day    | ~20,000–40,000 test cases |
JudgeLib scales horizontally; adding more workers reduces latency and increases throughput.
## Built With
- Node.js
- Redis
- Docker & Kubernetes
- Helm
- Kind (Kubernetes IN Docker)
