Ink Visual Testing
Visual regression testing for Ink CLI applications with perfect emoji support.
Features
Visual regression testing helps detect unexpected changes in your UI:
- Prevent Layout Issues - Automatically detect border, alignment, and spacing problems
- Validate Dynamic Rendering - Ensure UI displays correctly with different data
- Catch Style Changes - Detect unexpected color, font, and style modifications
- Multi-State Testing - Test loading, error, empty states, and more
- CI/CD Integration - Automatically catch visual bugs before merging
Installation
npm install ink-visual-testing --save-dev

Note: This library has peer dependencies on ink and react. Make sure they're installed in your project:

# For Ink v4/v5 (React 18)
npm install ink@^4.0.0 react@^18.0.0

# For Ink v6 (React 19)
npm install ink@^6.0.0 react@^19.0.0

Supported versions:
- Ink: v4.x, v5.x, v6.x
- React: v18.x, v19.x
Usage
Quick Start
There are two ways to use ink-visual-testing:
- CLI Tool (Recommended) - Simple command-line testing
- Programmatic API - Full control within test files
CLI Tool
Initialize Configuration
npx ink-visual-testing init

This creates a .ink-visual.config.js file with default settings:
// .ink-visual.config.js
export default {
preset: 'standard', // Terminal size preset
maxDiffPixels: 100, // Allowed pixel differences
threshold: 0.1, // Pixel difference threshold
backgroundColor: '#000000', // Terminal background
updateBaseline: true // Auto-create baselines
};

Run Tests
# Test single file
npx ink-visual-testing test examples/my-app.tsx
# Test with preset
npx ink-visual-testing test --preset wide examples/my-app.tsx
# Batch test multiple files
npx ink-visual-testing test --batch "examples/*.tsx"
# Custom terminal size
npx ink-visual-testing test --cols 100 --rows 30 examples/my-app.tsx

Available Presets

npx ink-visual-testing presets

Standard sizes:
- tiny (40×15) - For minimal components
- narrow (60×20) - For constrained environments
- standard (80×24) - Classic default size
- wide (120×40) - For modern development
- ultra-wide (160×50) - For large displays

CI/Development:
- ci (100×30) - Optimized for automation
- ci-narrow (80×25) - For resource-constrained CI

Specialized:
- mobile (50×20) - For mobile terminal apps
- test-small (60×15) - For focused component testing
- test-large (140×45) - For comprehensive UI testing
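The same preset definitions are also available programmatically (see the API Reference below); for example:

import { listTerminalPresets, getTerminalPreset } from 'ink-visual-testing';

// List every preset the CLI `presets` command prints
const presets = listTerminalPresets();

// Look up a single preset's dimensions
const wide = getTerminalPreset('wide'); // { cols: 120, rows: 40, description: "..." }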
Programmatic API
1. Basic Usage
import { describe, it } from 'vitest';
import React from 'react';
import { Box, Text } from 'ink';
import { visualTest } from 'ink-visual-testing';
// Your Ink component
const Greeting = ({ name, message }) => (
<Box borderStyle="round" borderColor="cyan" padding={1}>
<Text>Hello, <Text bold color="green">{name}</Text>!</Text>
<Text dimColor>{message}</Text>
</Box>
);
describe('Greeting', () => {
it('should render correctly', async () => {
// Mock data for the component
const mockData = {
name: 'Alice',
message: 'Welcome to Ink Visual Testing'
};
// One line to create a visual test
await visualTest('greeting', <Greeting {...mockData} />);
});
});

First run: Automatically generates a baseline image at tests/__baselines__/greeting.png
Subsequent runs: Compares the current output with the baseline and fails if differences are detected
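If you prefer the first-run behaviour to be explicit rather than implicit, the documented updateBaseline option can be set per test; a minimal sketch:

// Explicitly allow baseline creation for this test (uses the documented
// updateBaseline option from VisualTestOptions; see the API Reference).
await visualTest('greeting', <Greeting {...mockData} />, {
  updateBaseline: true // create tests/__baselines__/greeting.png if it is missing
});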
2. Configuration Options
await visualTest(
'component-name', // Snapshot name
<MyComponent />, // React component
{
cols: 80, // Terminal width (default: 80)
rows: 24, // Terminal height (default: 24)
maxDiffPixels: 100, // Max allowed pixel difference (default: 100)
threshold: 0.1, // Pixel diff threshold 0-1 (default: 0.1)
backgroundColor: '#000000' // Background color (default: black)
}
);

3. Testing Different States
describe('Dashboard', () => {
it('loading state', async () => {
await visualTest('dashboard-loading', <Dashboard loading={true} />);
});
it('loaded state', async () => {
const mockData = { users: 100, sales: 5000 };
await visualTest('dashboard-loaded', <Dashboard data={mockData} />);
});
it('error state', async () => {
await visualTest('dashboard-error', <Dashboard error="Network Error" />);
});
});

4. Testing with Presets
it('different terminal sizes', async () => {
const mockData = { /* ... */ };
// Using presets
await visualTest('small', <MyApp data={mockData} />, {
preset: 'tiny'
});
await visualTest('large', <MyApp data={mockData} />, {
preset: 'wide'
});
// Override preset with custom size
await visualTest('custom', <MyApp data={mockData} />, {
preset: 'standard',
rows: 50 // Override preset's default rows
});
});

5. Batch Testing
import { batchVisualTest } from 'ink-visual-testing';
it('batch test multiple components', async () => {
const results = await batchVisualTest([
{
name: 'header',
componentOrPath: <Header title="Test" />,
options: { preset: 'standard' }
},
{
name: 'footer',
componentOrPath: <Footer />,
options: { preset: 'narrow' }
}
]);
// Check results
const failed = results.filter(r => !r.passed);
expect(failed).toHaveLength(0);
});

6. Updating Baselines
When you have intentional UI changes, update baselines:
# Run tests
npm test
# Review the new generated images
open tests/__output__/*.png
# If correct, update baselines
cp tests/__output__/*.png tests/__baselines__/
# Commit the updates
git add tests/__baselines__/
git commit -m "Update visual baselines"Or use an npm script:
{
"scripts": {
"test": "vitest",
"baseline:update": "cp tests/__output__/*.png tests/__baselines__/"
}
}

7. Understanding Test Failures
When visual tests fail, it means the UI output differs from the baseline. This catches both intentional and unintentional changes.
Example: Detecting Unintended Changes
Consider a SettingsDialog where someone accidentally modified settings values:
// Original fixture (created baseline)
const mockSettings = {
general: { vimMode: true }
};
// Modified fixture (someone changed vimMode to false and added settings)
const mockSettings = {
general: {
vimMode: false, // ← Changed!
disableAutoUpdate: true, // ← Added!
debugKeystrokeLogging: true // ← Added!
}
};

Test output:
$ npm test
FAIL tests/SettingsDialog.visual.test.tsx
× should render default settings dialog
→ PNG diff exceeded tolerance: 777 pixels differ (allowed 500).
Diff saved to tests/__diff__/settings-dialog-default.png
Test Files 1 failed (1)

What happens:
- Baseline image: Shows the original UI with vimMode: true
- Current output: Shows the modified UI with vimMode: false and extra settings
- Diff image: Highlights the differences in yellow/orange:
  - "Disable Auto Update (Modified in System)" - highlighted
  - "Debug Keystroke Logging (Modified in System)" - highlighted
The test fails with a clear message: 777 pixels differ (allowed 500), pinpointing exactly how many pixels changed.
How to resolve:
If change was unintentional (bug):
# Fix the code to match the baseline
git diff src/components/SettingsDialog.tsx
# Revert the changes

If change was intentional (feature):

# Review the diff image
open tests/__diff__/settings-dialog-default.png
# If correct, update baseline
cp tests/__output__/settings-dialog-default.png tests/__baselines__/
# Commit the new baseline
git add tests/__baselines__/
git commit -m "Update baseline after adding new settings"
Key insight: The diff image makes it obvious what changed, helping you decide if it's a bug or an intentional improvement!
Real Failure Examples
Let's examine two concrete scenarios where visual tests catch problems:
Example 1: Content Difference - Text Changed
Someone modified the file count in a success message:
// Baseline fixture
<Text dimColor>
Files processed: 10
</Text>
// Modified fixture (accidentally changed)
<Text dimColor>
Files processed: 25 (DIFFERENT!)
</Text>

Test output:
$ npm test
FAIL tests/VisualFailure.demo.test.tsx
× should detect content difference (extra text)
→ PNG diff exceeded tolerance: 299 pixels differ (allowed 100).
Diff saved to tests/__diff__/message-box-content.png

Visual comparison:
- Baseline: Shows "Files processed: 10"
- Output: Shows "Files processed: 25 (DIFFERENT!)"
- Diff: Highlights "25 (DIFFERENT!)" in red/yellow - clearly showing the text change
299 pixels changed because the characters "25 (DIFFERENT!)" replaced "10", affecting approximately that many pixels.
Example 2: Color Difference - Styling Changed
Someone changed the color of a status indicator from yellow to cyan:
// Baseline fixture
<Text bold color="yellow">
Active
</Text>
// Modified fixture (color accidentally changed)
<Text bold color="cyan">
Active
</Text>

Test output:
$ npm test
FAIL tests/VisualFailure.demo.test.tsx
× should detect color difference (cyan vs yellow)
→ PNG diff exceeded tolerance: 178 pixels differ (allowed 50).
Diff saved to tests/__diff__/status-box-color.png

Visual comparison:
- Baseline: "Active" rendered in yellow color
- Output: "Active" rendered in cyan color
- Diff: Highlights the entire word "Active" in red - showing every pixel of the word changed color
178 pixels changed because each character pixel in "Active" has a different RGB value (yellow vs cyan).
Understanding Diff Colors
The diff image uses these colors:
- Gray background: Identical pixels (darkened for contrast)
- Yellow/Orange: Small differences (anti-aliasing, slight shifts)
- Red: Significant differences (different text, different colors)
More red/yellow = more pixels changed = larger visual difference
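The library's internal diffing is not shown here, but the reported numbers follow the usual per-pixel comparison model used by tools such as pixelmatch. A standalone sketch (assuming pixelmatch and pngjs are installed, and using illustrative file paths) of how threshold and maxDiffPixels relate:

// Standalone illustration of the diff semantics, not ink-visual-testing internals.
// Assumes: npm install pixelmatch pngjs
import fs from 'node:fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

const baseline = PNG.sync.read(fs.readFileSync('tests/__baselines__/greeting.png'));
const current = PNG.sync.read(fs.readFileSync('tests/__output__/greeting.png'));
const diff = new PNG({ width: baseline.width, height: baseline.height });

// threshold (0-1) controls how different a single pixel must be to count;
// the return value is the total number of differing pixels, which plays the
// role of the count compared against maxDiffPixels.
const numDiffPixels = pixelmatch(
  baseline.data,
  current.data,
  diff.data,
  baseline.width,
  baseline.height,
  { threshold: 0.1 }
);

fs.writeFileSync('tests/__diff__/greeting.png', PNG.sync.write(diff));
console.log(`${numDiffPixels} pixels differ`);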
Best Practices
Key Points
Use Fixed Mock Data
// Good: Fixed data
const mockData = {
  timestamp: '2024-01-15 10:30:00',
  count: 42
};

// Bad: Dynamic data (different every time)
const mockData = {
  timestamp: new Date().toISOString(),
  count: Math.random()
};

Create Separate Tests for Each State

// Good: Separate tests
it('empty state', () => visualTest('empty', <List items={[]} />));
it('with data', () => visualTest('with-data', <List items={mock} />));

// Bad: Reusing names
it('list', () => {
  visualTest('list', <List items={[]} />);
  visualTest('list', <List items={mock} />); // Name conflict!
});

Set Appropriate Tolerance
- Static content (logos, icons): maxDiffPixels: 0 (strict)
- Simple layouts: maxDiffPixels: 100 (default)
- Complex layouts: maxDiffPixels: 500 (lenient)
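Tolerance can be tuned per test through the options object (component names here are placeholders):

// Strict: static content such as a logo box
await visualTest('logo', <Logo />, { maxDiffPixels: 0 });

// Lenient: complex layout with many dynamic regions
await visualTest('dashboard', <Dashboard data={mockData} />, {
  maxDiffPixels: 500,
  threshold: 0.1
});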
Ignore Generated Files
# .gitignore
tests/__output__/    # Test output
tests/__diff__/      # Diff images
tests/__temp__/      # Temporary files

# Baseline images should be committed
!tests/__baselines__/
CI/CD Configuration
# .github/workflows/test.yml
name: Visual Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install system dependencies
run: |
sudo apt-get update
sudo apt-get install -y libnss3 libatk1.0-0 libgbm1
- run: npm ci
- run: npm test
- name: Upload diff images on failure
if: failure()
uses: actions/upload-artifact@v4
with:
name: visual-diffs
path: tests/__diff__/*.png

Keep Snapshot Dimensions Stable
- Measure once, reuse everywhere – Run your render script manually and capture the PTY output to learn the exact character size of the UI. For example:

npx tsx render-settings-dialog.tsx > captured-pty-data.txt
node --input-type=module -e "import fs from 'fs'; import strip from 'strip-ansi'; const lines = strip(fs.readFileSync('captured-pty-data.txt','utf8')).split('\n'); console.log({ cols: Math.max(...lines.map(l => l.length)), rows: lines.length });"

Use the reported cols/rows as your minimum terminal size.
- Share a snapshot config – Define a single object that freezes every dimension-sensitive option (cols, rows, margin, fontFamily, emojiFontKey) and reuse it in all tests and in CI:

const sharedSnapshotConfig = {
  ...getCIOptimizedConfig({
    baseFont: 'bundled',
    emojiFontKey: 'system',
  }),
  cols: 80,
  rows: 24,
  margin: 12,
} satisfies Partial<NodePtySnapshotOptions>;

Then spread sharedSnapshotConfig into every createSnapshotFromPty call.
- Controlled baseline updates – Mirror Vitest's snapshot flow: respect the --update flag (or process.env.UPDATE_SNAPSHOT) to decide when to write into tests/__screenshots__/<test-file>/, and otherwise emit fresh renders into tests/__visual_output__ for diffing. Provide a manual override env (e.g. UPDATE_BASELINES=1) only as a fallback for local scripts; see the sketch after this list.
- Document overrides – If a test needs a different size (e.g. a small vs. large terminal), record the reasoning in code comments and keep those values constant between baseline generation and CI runs.
- Optional trims – If you want to remove surrounding padding, pass trimTransparentBorder: true; otherwise keep the same margin everywhere so that the bitmap remains identical across runs.
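A minimal sketch of the baseline-update gating described above, using a hypothetical helper in your own test utilities (the flag, env names, and directories mirror the list; adjust them to your setup):

// tests/utils/baselineMode.ts (hypothetical helper sketching the flow above)
import path from 'node:path';

export function shouldUpdateBaselines(): boolean {
  // Mirror Vitest's snapshot flow: --update / -u, or explicit env overrides.
  return (
    process.argv.includes('--update') ||
    process.argv.includes('-u') ||
    process.env.UPDATE_SNAPSHOT === '1' ||
    process.env.UPDATE_BASELINES === '1'
  );
}

export function resolveOutputDir(testFile: string): string {
  // Write into the committed baseline folder only when updating;
  // otherwise emit fresh renders for diffing.
  return shouldUpdateBaselines()
    ? path.join('tests', '__screenshots__', path.basename(testFile))
    : path.join('tests', '__visual_output__');
}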
API Reference
CLI Commands
# Initialize configuration
npx ink-visual-testing init
# Run tests
npx ink-visual-testing test [options] <testFiles...>
# List presets
npx ink-visual-testing presets
# Help
npx ink-visual-testing --help

Test Options:
- --preset <name> - Terminal preset (tiny, standard, wide, etc.)
- --cols <number> - Terminal columns
- --rows <number> - Terminal rows
- --config <file> - Configuration file path
- --update-baseline - Update baseline if missing
- --batch - Batch mode for better performance
Programmatic API
Core Functions
import {
visualTest,
batchVisualTest,
batchVisualTestFromFiles,
loadConfig,
getTerminalPreset
} from 'ink-visual-testing';
// Basic visual test
await visualTest(name, componentOrPath, options?);
// Batch testing
const results = await batchVisualTest(testCases);
// Batch from file patterns
const results = await batchVisualTestFromFiles(['tests/*.tsx']);
// Load configuration
const config = await loadConfig();
// Get preset details
const preset = getTerminalPreset('wide');

Types
interface VisualTestOptions {
preset?: string; // Terminal preset name
cols?: number; // Terminal columns
rows?: number; // Terminal rows
maxDiffPixels?: number; // Max allowed diff pixels
threshold?: number; // Pixel difference threshold 0-1
backgroundColor?: string; // Background color
updateBaseline?: boolean; // Auto-create baselines
}
interface BatchTestCase {
name: string;
componentOrPath: React.ReactElement | string;
options?: VisualTestOptions;
}
interface BatchTestResult {
name: string;
passed: boolean;
error?: string;
duration: number;
}
interface TerminalPreset {
cols: number;
rows: number;
description: string;
}

Configuration Files
Configuration is automatically loaded from:
- .ink-visual.config.js
- .ink-visual.config.mjs
- .lokirc
- .lokirc.json
// .ink-visual.config.js
export default {
preset: 'standard',
maxDiffPixels: 100,
threshold: 0.1,
backgroundColor: '#000000',
updateBaseline: true
};

Available Presets
import { TERMINAL_PRESETS, listTerminalPresets } from 'ink-visual-testing';
// Get all presets
const presets = listTerminalPresets();
// Access preset definitions
const standardPreset = TERMINAL_PRESETS.standard; // { cols: 80, rows: 24, description: "..." }

Project Structure
Recommended directory structure:
your-project/
├── src/
│ └── components/
│ └── MyComponent.tsx # Your Ink component
├── tests/
│ ├── simple-box-auto.test.ts # Visual test entry
│ ├── utils/ # Shared helpers (diffing, snapshots, etc.)
│ │ ├── imageDiff.ts
│ │ └── visualSnapshot.ts
│ ├── vitest.setup.ts # Test environment setup
│ ├── __baselines__/ # Baseline images (commit to Git)
│ │ ├── my-component.png
│ │ └── my-component-loading.png
│ ├── __output__/ # Test output (Git ignore)
│ └── __diff__/ # Diff images (Git ignore)
└── package.json

Maintenance & Long-term Benefits
Initial Setup Effort
The effort required depends on component complexity:
Simple Components (No Context Dependencies)
Example: Basic message box without Context Providers
// tests/fixtures/simple-message.tsx
import React from 'react';
import { render, Box, Text } from 'ink';
render(
<Box borderStyle="round" padding={1}>
<Text>Success!</Text>
</Box>
);

// tests/SimpleMessage.visual.test.tsx
import { visualTest } from 'ink-visual-testing';
it('renders simple message', async () => {
await visualTest('simple-message', './tests/fixtures/simple-message.tsx', {
cols: 80, rows: 20
});
});

Effort: ~33 lines of code
Complex Components (With Context Providers)
Example: Settings dialog with VimModeProvider and KeypressProvider
// tests/fixtures/settings-dialog-default.tsx
import { useEffect } from 'react';
import { render } from 'ink';
import { SettingsDialog } from '../../src/components/SettingsDialog.js';
import { VimModeProvider } from '../../src/contexts/VimModeContext.js';
import { KeypressProvider } from '../../src/contexts/KeypressContext.js';
// createMockSettings is a project-local test helper, not part of ink-visual-testing
const mockSettings = createMockSettings({
general: { vimMode: true }
});
const SettingsDialogWrapper = () => {
useEffect(() => {
const timer = setTimeout(() => process.exit(0), 1000);
return () => clearTimeout(timer);
}, []);
return <SettingsDialog settings={mockSettings} onSelect={() => {}} />;
};
const { unmount } = render(
<VimModeProvider settings={mockSettings}>
<KeypressProvider kittyProtocolEnabled={false}>
<SettingsDialogWrapper />
</KeypressProvider>
</VimModeProvider>
);
process.on('SIGINT', () => {
unmount();
process.exit(0);
});

Effort for 7 test scenarios: ~483 lines of code
But this is a one-time investment with continuous benefits!
Maintenance Cost When Components Change
| Change Type | What to Update | Frequency |
|------------|----------------|-----------|
| Add new option/feature | Only baseline images | Common (80%) |
| Style/layout changes | Only baseline images | Common |
| Add new test scenario | 1 new fixture file | Occasional |
| Modify component props | Batch find-replace in fixtures | Rare |
| Change Context structure | All fixture files | Very rare |
Key insight: Most changes (80%) only require updating baseline images—typically just a quick review before approving visual changes!
Why "One-time Setup, Continuous Benefits"?
1. Test Files Never Change
The test logic remains stable:
await visualTest('component-name', './fixtures/component.tsx', options);

You only add new tests when adding new scenarios.
2. Fixtures Rarely Change
Most UI changes don't require fixture modifications:
- Adding new UI elements: No fixture changes needed
- Changing colors/styles: No fixture changes needed
- Modifying layout: No fixture changes needed
- Changing component API: Quick batch find-replace
- Adding Context Providers: One-time update to all fixtures
3. Baseline Updates Are Fast
# Run tests to see what changed
npm test
# If changes look correct, update baselines
cp tests/__output__/*.png tests/__baselines__/
# Commit the updates
git add tests/__baselines__/
git commit -m "Update baselines after adding dark mode"This takes 1-2 minutes vs. 15+ minutes of manual testing!
4. Automated Regression Detection
Once set up, every PR automatically checks for visual regressions:
- Catches unintended UI changes
- Prevents layout bugs
- Ensures consistent rendering across environments
- No manual testing needed
Tips for Minimizing Maintenance
1. Reuse Mock Data
// tests/mocks/settings.ts
export const defaultSettings = createMockSettings({ ... });
export const vimEnabledSettings = createMockSettings({ general: { vimMode: true } });
// In fixtures, just import and use
import { defaultSettings } from '../mocks/settings';

2. Use a Fixture Template

Create a base fixture and copy/modify for new scenarios:
// tests/fixtures/_template.tsx
import { useEffect } from 'react';
import { render } from 'ink';
import { MyComponent } from '../../src/MyComponent';
import { AllNecessaryProviders } from './providers';
const FixtureWrapper = () => {
useEffect(() => {
const timer = setTimeout(() => process.exit(0), 1000);
return () => clearTimeout(timer);
}, []);
  return <MyComponent {...mockProps} />; // supply mockProps for each scenario
};
render(
<AllNecessaryProviders>
<FixtureWrapper />
</AllNecessaryProviders>
);

3. Automate Baseline Updates
{
"scripts": {
"test:visual": "vitest --run tests/*.visual.test.tsx",
"baseline:update": "cp tests/__output__/*.png tests/__baselines__/",
"baseline:review": "open tests/__output__/*.png"
}
}

Advanced Usage
Lower-Level API
If you need more control, use the lower-level API:
import { fixedPtyRender, getCIOptimizedConfig } from 'ink-visual-testing';
import path from 'node:path';
// Render CLI app to PNG
await fixedPtyRender(
path.resolve('examples/my-cli.tsx'),
'output.png',
{
...getCIOptimizedConfig(),
cols: 120,
rows: 60
}
);

Font Configuration
The library uses the bundled DejaVu Sans Mono by default so snapshots look the same locally and in CI. To adjust the emoji or base fonts:
import { getCIOptimizedConfig } from 'ink-visual-testing';
// Recommended: Use bundled fonts for cross-platform consistency
getCIOptimizedConfig(); // Default: bundled Noto emoji + DejaVu Sans Mono
getCIOptimizedConfig('noto'); // Explicit: color vector emoji (same as default)
getCIOptimizedConfig('color'); // Alternative: color bitmap emoji
// Use system fonts (may differ across Mac/Linux/Windows)
getCIOptimizedConfig({ emojiFontKey: 'system', baseFont: 'system' });

Important for Cross-Platform Consistency:
- Default ('noto'): Uses bundled NotoEmoji-Regular.ttf (color vector emoji) + DejaVu Sans Mono. Ensures identical rendering on Mac, Linux, Windows, and CI.
- System fonts: Mac uses Apple Color Emoji, Linux uses Noto Color Emoji, Windows uses Segoe UI Emoji. Will produce different baselines on different platforms.
- Recommendation: Always use bundled fonts (getCIOptimizedConfig() or getCIOptimizedConfig('noto')) to ensure your local baselines match CI output.
Bundled fonts
- DejaVu Sans Mono (Bitstream Vera License)
- NotoColorEmoji (SIL Open Font License)
- NotoEmoji-Regular (SIL Open Font License)
- TwemojiMozilla (Mozilla Public License)
- GNU Unifont (GNU GPLv2 with font embedding exception)
Examples
Quick Start Examples
See examples/ directory for basic examples:
- examples/dashboard.tsx - Complex dashboard layout with emoji and various layouts
- examples/dashboard-cli.tsx - CLI entry point
- examples/dashboard-snapshot.tsx - Snapshot generation script
Run examples:
# View live rendering
npx tsx examples/dashboard-cli.tsx
# Generate snapshot
npx tsx examples/dashboard-snapshot.tsx

Real-World Integration Example
For a complete, production-ready example showing how to integrate ink-visual-testing into an existing project with complex components, see the real-world integration example.
This example demonstrates:
- ✅ Testing a complex Settings Dialog component with 6 test cases
- ✅ Handling Context Providers (VimModeProvider, KeypressProvider)
- ✅ Creating reusable test helpers
- ✅ Proper mock data structure for stateful components
- ✅ Testing different terminal sizes and states
- ✅ WSL/CI compatibility best practices
- ✅ Complete troubleshooting guide
Perfect for understanding how to use ink-visual-testing in real projects!
License
MIT License. See LICENSE for details.
Contributing
Contributions welcome! Please check GitHub Issues.
