# perfocode

v1.6.1 · Performance checker
perfocode is a visual performance testing tool for comparing code execution speed across runs.
- Visual — Color-coded performance deltas with progress bars
- Comparative — Shows min, max, average, and current values vs previous runs
- Configurable — Environment variables and options for customization
- Type-safe — Full TypeScript support
- Zero-dependency — Only requires `chalk` for terminal colors
- GC-aware — Optional garbage collection control for accurate results
- Interactive — Save results to JSON for future comparisons
- Flexible — Support for `useBefore` and `useAfter` hooks for setup/teardown operations
Use it to benchmark getters vs methods, optimize algorithms, or track performance regressions.
## Index
[ Install ]
[ Usage ] Basic example • Compare mode • Visual output
[ API ] perfocode • describe • test
[ Configuration ] Environment variables • Options • Limits • Columns
[ TypeScript ]
[ Issues ]
## Install
npm:

```shell
npm i perfocode -D
```

yarn:

```shell
yarn add perfocode -D
```

## Usage
Basic example • Compare mode • Visual output
### Basic example
Create a test file to compare performance of different approaches.
```js
const { perfocode, describe, test } = require('perfocode')

perfocode('output-file', () => {
  describe('getters vs methods', () => {
    class GetterVsMethod {
      constructor () {
        this._value = 0
      }

      get value () {
        return this._value
      }

      getValue () {
        return this._value
      }
    }

    const getterVsMethod = new GetterVsMethod()

    test('getter', () => getterVsMethod.value)
    test('method', () => getterVsMethod.getValue())
  })
})
```

Run the file:

```shell
node index.js
```

On first run, you'll see current results only:
```text
Compare with [output-file]:
╒ getters vs methods
│┌──────────┬────────────┬──────────────┬──────────────────────┬───────┬──────────────────────────────────────┬──────┐
││ ✔ getter │ 38806.0133 │ Σ 38806.0133 │ ▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰ │ 100%  │ 38806.0133 → 38806.0133 ← 38806.0133 │ Δ 0% │
││ ✔ method │ 36652.7467 │ Σ 36652.7467 │ ▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▱ │ 94.5% │ 36652.7467 → 36652.7467 ← 36652.7467 │ Δ 0% │
│└──────────┴────────────┴──────────────┴──────────────────────┴───────┴──────────────────────────────────────┴──────┘
╘ getters vs methods: getter [38806.0133]
Do you want to save results? [Y/n]:
```

Press enter to save results to `output-file.json`.
### Compare mode
Run the test again to see the difference between current and previous results:
```text
Compare with [output-file]:
╒ getters vs methods
│┌──────────┬───────────────────┬─────────────────────┬──────────────────────┬───────┬────────────────────────────────────────────┬─────────┐
││ ✔ method │ 42809.78 ↑ 16.8%  │ Σ 39731.2633 ↑ 8.4% │ ▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰ │ 100%  │ 36652.7467 → 39731.2633 ← 42809.78 ↑ 16.8% │ Δ 15.5% │
││ ✔ getter │ 38911.1767 ↑ 0.3% │ Σ 38858.595 ↑ 0.1%  │ ▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰ │ 97.8% │ 38806.0133 → 38858.595 ← 38911.1767 ↑ 0.3% │ Δ 0.3%  │
│└──────────┴───────────────────┴─────────────────────┴──────────────────────┴───────┴────────────────────────────────────────────┴─────────┘
╘ getters vs methods: method [39731.2633]
Do you want to save results? [Y/n]:
```

### Visual output
Modify the test to see performance changes:
```js
const { perfocode, describe, test } = require('perfocode')

perfocode('output-file', () => {
  describe('getters vs methods', () => {
    class GetterVsMethod {
      constructor () {
        this._value = 0
      }

      get value () {
        for (let i = 0; i < 1000; i++) {}
        return this._value
      }

      getValue () {
        return this._value
      }
    }

    const getterVsMethod = new GetterVsMethod()

    test('getter', () => getterVsMethod.value)
    test('method', () => getterVsMethod.getValue())
  })
})
```

Performance degradation:
```text
Compare with [output-file]:
╒ getters vs methods
│┌──────────┬───────────────────┬──────────────────────┬──────────────────────┬───────┬─────────────────────────────────────────────┬─────────────────┐
││ ✔ method │ 43488.9367 ↑ 9.5% │ Σ 41610.1 ↑ 4.7%     │ ▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰ │ 100%  │ 36652.7467 → 40070.8417 ← 43488.9367 ↑ 1.6% │ Δ 17.1% ↑ 10.1% │
││ ✔ getter │ 2598.4933 ↓ 93.3% │ Σ 20728.5442 ↓ 46.7% │ ▰▰▰▰▰▰▰▰▰▰▱▱▱▱▱▱▱▱▱▱ │ 49.8% │ 2598.4933 → 20754.835 ← 38911.1767 ↓ 93.3%  │ Δ 175% ↑ 64549% │
│└──────────┴───────────────────┴──────────────────────┴──────────────────────┴───────┴─────────────────────────────────────────────┴─────────────────┘
╘ getters vs methods: method [41610.1]
Do you want to save results? [Y/n]:
```

## API
### perfocode
Main function that wraps test suites. Accepts output filename, callback, and optional timeout/options.
```ts
perfocode(output: string, callback: Callback, timeout?: TimeoutOption): void
perfocode(params: PerfocodeParams): void
```

Arguments:

- `output` — Filename for saving results (without `.json` extension)
- `callback` — Function containing `describe` and `test` calls
- `timeout` — Timeout in ms (default: `300`) or options object
Alternatively, pass a single object with `output` and `call` as required fields, plus other general options.
Example:
```js
perfocode('benchmark', () => {
  describe('my tests', () => {
    test('fast', () => { /* ... */ })
    test('slow', () => { /* ... */ })
  })
}, 500)

// Or using params object
perfocode({
  output: 'benchmark',
  timeout: 500,
  preventGC: true,
  call () {
    describe({
      name: 'my tests',
      useAfter: true,
      call () {
        test('fast', () => { /* ... */ })
        test('slow', () => { /* ... */ })
      }
    })
  },
})
```

### describe
Groups related tests together. Displays as a section in output.
```ts
describe(name: string, callback: Callback, timeout?: TimeoutOption)
describe(params: DescribeParams)
```

Arguments:

- `name` — Section name for display
- `callback` — Function containing `test` calls
- `timeout` — Timeout in ms or options object
Alternatively, pass a single object with `name` and `call` as required fields, plus other general options.
Example:
```js
describe('array methods', () => {
  test('forEach', () => { /* ... */ })
  test('map', () => { /* ... */ })
})

// Or using params object
describe({
  name: 'array methods',
  timeout: 1000,
  logging: true,
  call () {
    test('forEach', () => { /* ... */ })
  },
})
```

### test
Single performance test. Executes callback and measures operations per millisecond.
```ts
test(name: string, callback: Callback, options?: TestOptions | number)
test(params: TestParams)
```

Arguments:

- `name` — Test name for display
- `callback` — Function to benchmark
- `options` — Timeout in ms or options object with `timeout`, `highlight`, `useBefore`, and `useAfter`
Alternatively, pass a single object with `name` and `call` as required fields, plus other general options.
Example:
```js
test('loop', () => {
  for (let i = 0; i < 1000; i++) {}
})

test('highlighted', () => {
  // Important test
}, { timeout: 1000, highlight: true })

// Or using params object
test({
  name: 'with setup',
  useBefore: true,
  timeout: 500,
  test () {
    const data = prepareData()
    return () => data.process()
  },
})
```

Using `useBefore` and `useAfter`:
These options allow pre- and post-execution hooks around the tested callback, useful for setup/teardown operations that shouldn't be included in the measurement.
```js
// useBefore: callback returns a function to execute in the benchmark loop
test('with setup', () => {
  const data = prepareData()    // Setup (not measured)
  return () => data.process()   // Benchmark this
}, { useBefore: true })

// useAfter: cleanup function called after each benchmark iteration
test('with cleanup', () => {
  const data = prepareData()
  return () => {
    data.process()
    return () => data.destroy() // Cleanup after each iteration
  }
}, { useBefore: true, useAfter: true })
```

## Configuration
Environment variables • Options • Limits • Columns
### Environment variables
Customize output and behavior via environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| PERFOCODE_PROGRESS_ICON | ▰ | Icon for progress bar filled part |
| PERFOCODE_PROGRESS_END_ICON | ▱ | Icon for progress bar empty part |
| PERFOCODE_SUCCESS_STATUS_ICON | ✔ | Icon for successful test |
| PERFOCODE_ERROR_STATUS_ICON | ✘ | Icon for failed test |
| PERFOCODE_WARNING_STATUS_ICON | ⚠ | Icon for warning |
| PERFOCODE_DELTA_ICON | Δ | Icon for delta values |
| PERFOCODE_TIMEOUT | 300 | Default timeout in ms |
| PERFOCODE_THROW_ERROR | false | Throw errors instead of catching |
| PERFOCODE_PREVENT_GC | false | Prevent garbage collection |
| PERFOCODE_NO_ASK | false | Skip interactive prompts |
| PERFOCODE_LOGGING | false | Enable detailed logging |
| PERFOCODE_LIMITS | {} | Custom limits as JSON |
| PERFOCODE_COLUMNS | (default) | Custom column layout |
Example:
```shell
PERFOCODE_TIMEOUT=1000 PERFOCODE_NO_ASK=true node index.js
```

### Options
Pass options as the third argument to `perfocode`, `describe`, or `test`, or use the params object syntax:
```ts
interface Options {
  timeout?: number      // Timeout in ms
  throwError?: boolean  // Throw errors instead of catching
  preventGC?: boolean   // Prevent garbage collection
  noAsk?: boolean       // Skip interactive prompts
  logging?: boolean     // Enable detailed logging
  limits?: Limits       // Custom limits
  columns?: string[]    // Custom column layout

  // Only for test()
  highlight?: boolean   // Highlight this test in output
  useBefore?: boolean   // Callback returns function to execute in benchmark loop
  useAfter?: boolean    // Call returned function after each benchmark iteration
}
```

Example:
```js
// Using options as third argument
test('critical', () => {
  // Important benchmark
}, {
  timeout: 1000,
  highlight: true
})

// Using params object (all functions support this syntax)
perfocode({
  output: 'results',
  call: () => {
    describe('tests', () => {
      test({
        name: 'with hooks',
        test: () => {
          const setup = prepare()
          return () => {
            run(setup)
            cleanup(setup)
          }
        },
        useBefore: true,
        useAfter: true
      })
    })
  },
  preventGC: true
})
```

### Limits
Define thresholds for color-coding performance:
```ts
interface Limits {
  delta: Limit        // Delta (variation) thresholds
  deltaDelta: Limit   // Delta change thresholds
  progress: Limit     // Progress bar thresholds
  valueDelta: Limit   // Value change thresholds
  currentDelta: Limit // Current vs previous thresholds
  minDelta: Limit     // Min change thresholds
  maxDelta: Limit     // Max change thresholds
}

interface Limit {
  invert?: boolean // Invert color logic
  awesome: number  // Best performance
  great: number
  good: number
  normal: number
  poor: number
  bad: number
  critical: number // Worst performance
}
```

Default limits:
```js
{
  delta: { awesome: 0, great: 0, good: 0, normal: 0, poor: -10, bad: -20, critical: -30 },
  deltaDelta: { invert: true, awesome: 0, great: 0, good: 0, normal: 0, poor: 5, bad: 50, critical: 75 },
  progress: { awesome: 90, great: 75, good: 50, normal: 50, poor: 50, bad: 25, critical: 10 },
  valueDelta: { awesome: 15, great: 10, good: 5, normal: 5, poor: -5, bad: -10, critical: -15 },
  currentDelta: { awesome: 15, great: 10, good: 5, normal: 5, poor: -5, bad: -10, critical: -15 },
  minDelta: { awesome: 15, great: 10, good: 5, normal: 5, poor: -5, bad: -10, critical: -15 },
  maxDelta: { awesome: 15, great: 10, good: 5, normal: 5, poor: -5, bad: -10, critical: -15 }
}
```
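Custom limits can be supplied through the `limits` option or serialized as JSON for `PERFOCODE_LIMITS`. A minimal sketch — the threshold values here are illustrative, and whether a partial override merges with the defaults is an assumption, not documented behavior:

```javascript
// Illustrative stricter `delta` thresholds: treat any variation beyond 5% as poor.
// Keys follow the Limits/Limit interfaces above; the values are made up.
const limits = {
  delta: { awesome: 0, great: 0, good: 0, normal: 0, poor: -5, bad: -10, critical: -20 }
}

// Programmatically, via the `limits` field of the params object:
// perfocode({ output: 'benchmark', limits, call () { /* describe/test calls */ } })

// Or serialized as JSON for the PERFOCODE_LIMITS environment variable:
const envValue = JSON.stringify(limits)
console.log(`PERFOCODE_LIMITS='${envValue}' node index.js`)
```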
### Columns
Customize output table columns using placeholders:
Default columns:
```js
[
  '{statusIcon} {name}',
  '{current}{currentDelta}',
  'Σ {value}{valueDelta}',
  '{progressStart}{progressEnd}',
  '{progressDelta}',
  '{min} → {average} ← {max}{minDelta}{maxDelta}',
  '{deltaIcon} {delta}{deltaDelta}',
]
```

Placeholders:
- `{name}` — Test name
- `{current}` — Current value
- `{value}` — Average value
- `{min}`, `{max}` — Min/max values
- `{average}` — Average value
- `{prev}`, `{prevMin}`, `{prevMax}`, `{prevAverage}` — Previous values
- `{delta}`, `{prevDelta}`, `{deltaDelta}` — Delta values
- `{valueDelta}`, `{currentDelta}`, `{minDelta}`, `{maxDelta}` — Value changes
- `{progressStart}`, `{progressEnd}` — Progress bar
- `{statusIcon}`, `{deltaIcon}`, `{prevDeltaIcon}` — Icons
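The layout can also be set programmatically through the `columns` option. A sketch that assembles a trimmed three-column layout from the placeholders above and sanity-checks it — the layout itself is illustrative, not a recommendation:

```javascript
// Illustrative three-column layout built from documented placeholders.
const columns = [
  '{statusIcon} {name}',
  'Σ {value}{valueDelta}',
  '{deltaIcon} {delta}',
]

// Documented placeholders, per the list above
// ({progressDelta} also appears in the default columns).
const known = [
  'name', 'current', 'value', 'min', 'max', 'average',
  'prev', 'prevMin', 'prevMax', 'prevAverage',
  'delta', 'prevDelta', 'deltaDelta',
  'valueDelta', 'currentDelta', 'minDelta', 'maxDelta',
  'progressStart', 'progressEnd', 'progressDelta',
  'statusIcon', 'deltaIcon', 'prevDeltaIcon',
]

// Fail fast if a column uses a placeholder that isn't documented.
for (const col of columns) {
  for (const [, name] of col.matchAll(/\{(\w+)\}/g)) {
    if (!known.includes(name)) throw new Error(`Unknown placeholder: ${name}`)
  }
}

// Passed via the params object, e.g.:
// perfocode({ output: 'benchmark', columns, call () { /* ... */ } })
```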
Example:
```shell
PERFOCODE_COLUMNS="{name} | {value} | {delta}" node index.js
```

Output:
```text
Compare with [output-file]:
╒ getters vs methods
│┌────────┬────────────┬──────────────────────┐
││ method │ 42604.9383 │ ▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰▰ │
││ getter │ 11676.1821 │ ▰▰▰▰▰▰               │
│└────────┴────────────┴──────────────────────┘
╘ getters vs methods: method [42604.9383]
Do you want to save results? [Y/n]:
```

## TypeScript
Full TypeScript support with type inference:
```ts
import { perfocode, describe, test, type Callback } from 'perfocode'

const benchmark: Callback = () => {
  describe('performance', () => {
    test('fast', () => {
      // Optimized code
    })

    test('slow', () => {
      // Unoptimized code
    }, 1000)
  })
}

perfocode('results', benchmark, { timeout: 500 })
```

## Issues
If you find a bug or have a suggestion, please file an issue on GitHub.
