# chunked-promise (v0.1.0)

> Chunked async execution
## Install

```sh
npm install chunked-promise
```

## Usage
```js
import { q, chunk } from 'chunked-promise'

const tasks = urls.map(url => () => fetch(url))

await q(tasks)        // sequential: 1 → 2 → 3
await chunk(tasks, 3) // batched: [1,2,3] → [4,5,6]
```

## API
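Note that both functions take arrays of *functions that return promises* (thunks), not promises themselves: a promise starts running the moment it is created, so wrapping the call in a function is what lets the queue decide when each task actually starts. A self-contained illustration of the difference (no library code involved; `fetchLikeWork` is a stand-in for any async call):

```js
// "fetchLikeWork" stands in for fetch() or any other async call.
const log = []

function fetchLikeWork(id) {
  log.push(`started ${id}`) // records when the work actually begins
  return Promise.resolve(id)
}

// A bare promise runs eagerly -- work starts as soon as it is created:
const eager = fetchLikeWork('A') // log already contains 'started A'

// A thunk defers the work until something invokes it:
const task = () => fetchLikeWork('B') // nothing has run yet
log.push('queued B')

// Only now does the work for B begin:
task().then(() => {
  console.log(log) // ['started A', 'queued B', 'started B']
})
```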
### q(fns, opts?)

Queue execution: one task at a time.

```js
await q([
  () => fetch('/api/1'),
  () => fetch('/api/2'),
  () => fetch('/api/3')
])
```

### chunk(fns, n = 5, opts?)
Chunk execution: `n` tasks at a time.

```js
await chunk([
  () => fetch('/api/1'),
  () => fetch('/api/2'),
  () => fetch('/api/3'),
  () => fetch('/api/4')
], 2) // [1,2] → [3,4]
```

### createPool(opts?)
Create a pool whose `q` and `chunk` methods share a single rate limiter across calls.

```js
import { createPool } from 'chunked-promise'

const pool = createPool({ rateLimit: 10 })

pool.chunk(tasks, 5) // uses the pool's rate limit
pool.q(moreTasks)    // shares the same rate limit
```

## Options
Both `q` and `chunk` accept an options object:
| Option | Type | Description |
|--------|------|-------------|
| `onProgress` | `function` | Called after each task with `{ done, total, results }` |
| `signal` | `AbortSignal` | Cancel execution via an `AbortController` |
| `timeout` | `number` | Per-task timeout in milliseconds |
| `rateLimit` | `number` | Max tasks per second (`0` = unlimited) |
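As a mental model, `chunk`'s batching amounts to slicing the task list and awaiting each slice with `Promise.allSettled`. The sketch below illustrates only that core semantics; it is not the library's implementation and ignores all of the options above:

```js
// Minimal sketch: run thunks n at a time, collecting settled results.
async function chunkSketch(fns, n = 5) {
  const results = []
  for (let i = 0; i < fns.length; i += n) {
    const batch = fns.slice(i, i + n).map(fn => fn()) // start one batch
    results.push(...await Promise.allSettled(batch))  // wait for all of it
  }
  return results
}

// Usage: four tasks, two at a time; the third one fails.
const tasks = [1, 2, 3, 4].map(x => () =>
  x === 3 ? Promise.reject(new Error('boom')) : Promise.resolve(x)
)

chunkSketch(tasks, 2).then(results => {
  console.log(results.map(r => r.status))
  // ['fulfilled', 'fulfilled', 'rejected', 'fulfilled']
})
```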
## Results (Settled Mode)

Results use settled mode (like `Promise.allSettled`): every task produces an entry, whether it fulfilled or rejected.
```js
const results = await chunk(tasks, 4)
// [
//   { status: 'fulfilled', value: result },
//   { status: 'rejected', reason: error },
// ]

// Filter successes
const values = results
  .filter(r => r.status === 'fulfilled')
  .map(r => r.value)

// Filter failures
const errors = results
  .filter(r => r.status === 'rejected')
  .map(r => r.reason)
```

## Progress Callback
```js
await chunk(tasks, 4, {
  onProgress: ({ done, total, results }) => {
    console.log(`${done}/${total} complete`)
    const pct = Math.round((done / total) * 100)
    progressBar.style.width = `${pct}%`
  }
})
```

## Cancellation
```js
import { chunk, AbortError } from 'chunked-promise'

const controller = new AbortController()

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000)

try {
  await chunk(tasks, 4, { signal: controller.signal })
} catch (e) {
  if (e instanceof AbortError) {
    console.log('Cancelled!')
  }
}
```

## Timeout
```js
import { chunk, TimeoutError } from 'chunked-promise'

const results = await chunk(tasks, 4, { timeout: 3000 })

// Check for timeouts
results.forEach((r, i) => {
  if (r.status === 'rejected' && r.reason instanceof TimeoutError) {
    console.log(`Task ${i} timed out`)
  }
})
```

## Rate Limiting
```js
// Max 10 requests per second
await chunk(apiCalls, 4, { rateLimit: 10 })
```

## Pool (Shared Rate Limiting)
Use `createPool` when multiple `chunk`/`q` calls need to share a single rate limiter:

```js
import { createPool } from 'chunked-promise'

const api = createPool({ rateLimit: 10 })

// Both calls share the same 10/s rate limit
await Promise.all([
  api.chunk(userTasks, 5),
  api.q(adminTasks)
])
```

Without a pool, each call has its own rate limiter. With a pool, the combined throughput of all calls respects the shared limit.
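The effect of a shared limiter can be sketched with a single `acquire` function that spaces out task starts. The interval math below is an illustrative assumption about how rate limiting generally works, not the library's implementation:

```js
// Sketch: a shared limiter allowing at most `perSecond` task starts per
// second, by spacing consecutive starts 1000 / perSecond ms apart.
function makeLimiter(perSecond) {
  const interval = 1000 / perSecond
  let next = 0 // earliest allowed start time (ms epoch)
  return async function acquire() {
    const now = Date.now()
    const wait = Math.max(0, next - now)
    next = Math.max(now, next) + interval
    if (wait > 0) await new Promise(r => setTimeout(r, wait))
  }
}

// A sequential runner that asks the limiter before each task.
async function run(fns, acquire) {
  for (const fn of fns) {
    await acquire()
    await fn()
  }
}

// Two independent runners sharing ONE limiter: their combined start
// rate stays under the limit, which is what a pool provides.
const acquire = makeLimiter(50) // 50 starts/second, shared
const stamp = []
const mk = id => () => { stamp.push(id); return Promise.resolve() }

Promise.all([
  run([mk('a1'), mk('a2')], acquire),
  run([mk('b1'), mk('b2')], acquire),
]).then(() => console.log(stamp.length)) // 4 -- all tasks ran, starts were spaced
```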
## Combined Example

```js
import { chunk, AbortError, TimeoutError } from 'chunked-promise'

const controller = new AbortController()

const results = await chunk(tasks, 4, {
  signal: controller.signal,
  timeout: 5000,
  rateLimit: 10,
  onProgress: ({ done, total }) => {
    console.log(`Progress: ${done}/${total}`)
  }
})

const succeeded = results.filter(r => r.status === 'fulfilled').length
const failed = results.filter(r => r.status === 'rejected').length
console.log(`Done: ${succeeded} succeeded, ${failed} failed`)
```

## Exports
```js
import {
  q,           // Queue execution (sequential)
  chunk,       // Chunk execution (parallel batches)
  createPool,  // Shared rate-limiting pool
  AbortError,  // Thrown on cancellation
  TimeoutError // Thrown on timeout
} from 'chunked-promise'
```

## Demo
```sh
pnpm dev
```

## License
MIT
