@embedthis/testme
v0.8.31
Test runner for system projects
TestMe - Test Runner for System Projects
TestMe is a specialized test runner for core infrastructure projects, such as those written in C/C++, that require compilation before execution. TestMe discovers, compiles, and executes tests with configurable patterns and parallel execution, making it ideal for low-level and performance-critical codebases.
Test files can be written in C, C++, shell scripts, Python, Go, or JavaScript/TypeScript.
Written by AI
This program was designed by humans and written almost entirely by AI, under strong human supervision.
🎯 Ideal Use Cases
TestMe is purpose-built for:
- Embedded systems - Cross-platform firmware and IoT device testing
- C/C++/Rust projects - Native compilation with GCC/Clang/MSVC, direct binary execution
- Make/CMake-based projects - Seamless integration with traditional build systems
- Core infrastructure - System-level components, libraries, and low-level tools
- Multi-language tests - Write tests in C, C++, shell scripts, Python, Go, or JavaScript/TypeScript
⚠️ When to Consider Alternatives
For cloud-native or application-level projects written in JavaScript, TypeScript, or Python, frameworks such as Jest or Vitest may be better choices. They provide:
- Superior TypeScript transpilation and module resolution
- Rich ecosystem of plugins, matchers, and integrations
- Advanced watch mode with hot module reload
- Integrated code coverage, snapshot testing, and mocking
- First-class framework support (React, Vue, Angular, Svelte)
TestMe focuses on simplicity and direct execution for system-level projects.
TestMe is under very active development at this time and may be a little unstable. Please report any issues you find and we will try to fix them quickly.
🚀 Features
- Multi-language Support: Shell (`.tst.sh`), PowerShell (`.tst.ps1`), Batch (`.tst.bat`, `.tst.cmd`), C (`.tst.c`), JavaScript (`.tst.js`), TypeScript (`.tst.ts`), Python (`.tst.py`), and Go (`.tst.go`)
- Jest/Vitest-Compatible API: Use familiar `expect()` syntax alongside traditional test functions for JavaScript/TypeScript tests
- Automatic Compilation: C programs are compiled automatically with platform-appropriate compilers (GCC/Clang/MSVC)
- Cross-platform: Full support for Windows, macOS, and Linux with native test types for each platform
- Recursive Discovery: Finds test files at any depth in directory trees
- Pattern Matching: Filter tests using glob patterns, file names, or directory names
- Parallel Execution: Run tests concurrently for better performance
- Artifact Management: Automatic cleanup of build artifacts after successful tests (preserves on failure)
- Hierarchical Configuration: `testme.json5` files with tree traversal lookup
- Environment Variables: Dynamic environment setup with glob expansion support
- Test Control: Skip scripts, depth requirements, and enable/disable flags
- Service Health Checks: Active monitoring of service readiness (HTTP, TCP, script, file-based)
- Multiple Output Formats: Simple, detailed, and JSON reporting
- Integrated Debugging: Multi-language debug support with platform-specific debuggers
- C: GDB, LLDB, Xcode, Visual Studio, VS Code, Cursor
- JavaScript/TypeScript: Bun inspector, VS Code, Cursor
- Python: pdb, VS Code, Cursor
- Go: Delve, VS Code, Cursor
📋 Table of Contents
- Installation
- Quick Start
- API Reference
- Usage
- Test File Types
- Configuration
- Common Use Cases
- Output Formats
- Development
- Tips and Best Practices
- Publishing
🔧 Installation
Prerequisites
TestMe requires Bun, a fast JavaScript runtime with built-in TypeScript support.
Visit bun.sh for installation instructions.
Ensure the Bun bin directory is in your path.
For Unix/Linux/macOS:

export PATH="$HOME/.bun/bin:$PATH"

On Windows with PowerShell:

setx PATH "$($env:PATH);$env:USERPROFILE\.bun\bin"

Quick TestMe Install
You can install TestMe from the npm registry using the following command:
Important: TestMe requires the --trust flag to run postinstall scripts.
# Using Bun (recommended)
bun install -g --trust @embedthis/testme

# Using npm
npm install -g @embedthis/testme

Verify installation:

tm --version

Troubleshooting: If the `tm` command is not found after installation with Bun:
- You may have forgotten the `--trust` flag
- Run the check script: `node node_modules/@embedthis/testme/bin/check-install.mjs`
- Or manually install: `cd node_modules/@embedthis/testme && bun bin/install.mjs`
Manual Installation from GitHub
Unix/Linux/macOS
Clone or download the TestMe project
Install dependencies:
bun install

Build the project:

bun run build

Install the project (macOS/Linux only):

sudo bun run install
Windows
1. Build and Install
# Install dependencies
bun install
# Build
bun run build

🚀 Quick Start
Initialize a New Project
# Create a testme.json5 configuration file
tm --init
# Create test files from templates
tm --new math.c # Creates math.tst.c
tm --new api.js # Creates api.tst.js
tm --new test.sh # Creates test.tst.sh
tm --new types.ts  # Creates types.tst.ts

Manual Setup
Create test files with the appropriate extensions:
# C test
echo '#include "testme.h"
int main() { teq(2 + 2, 4, "math works"); return 0; }' > math.tst.c

# JavaScript test
echo 'console.log("✓ Test passed"); process.exit(0);' > test.tst.js

Run tests:

# Run all tests
tm

# Run specific tests
tm "*.tst.c"

# Run tests in a directory
tm integration

# List available tests
tm --list

Clean up build artifacts:

tm --clean
📚 API Reference
TestMe provides comprehensive testing APIs for C, JavaScript, and TypeScript:
Testing API Documentation
- README-TESTS.md - General test requirements, exit codes, output streams, environment variables
- README-C.md - Complete C testing API reference (`testme.h` functions)
- README-JS.md - Complete JavaScript/TypeScript testing API reference
- doc/JEST_API.md - Jest/Vitest-compatible API examples and migration guide
Quick API Overview
C Tests:
#include "testme.h"
teqi(2 + 2, 4, "Addition test"); // Assert equality
ttrue(value > 0, "Value should be positive"); // Assert condition
tinfo("Test progress...");                    // Print info message

JavaScript/TypeScript Tests:
import {expect, describe, test} from 'testme'
// Jest/Vitest-compatible API
await describe('Math operations', () => {
test('addition', () => {
expect(2 + 2).toBe(4)
})
})
// Traditional API
teqi(2 + 2, 4, 'Addition test')
ttrue(value > 0, 'Value should be positive')

For complete API documentation including all functions, matchers, and behaviors, see the API reference documents above.
📝 Test File Types
TestMe supports multiple test file types across platforms. All tests should exit with code 0 for success, non-zero for failure.
Shell Tests (.tst.sh)
Shell script tests that are executed directly. Exit code 0 indicates success.
#!/bin/bash
# test_example.tst.sh
echo "Running shell test..."
result=$((2 + 2))
if [ $result -eq 4 ]; then
echo "✓ Math test passed"
exit 0
else
echo "✗ Math test failed"
exit 1
fi

PowerShell Tests (.tst.ps1) - Windows
PowerShell script tests for Windows environments.
# test_example.tst.ps1
Write-Host "Running PowerShell test..."
$result = 2 + 2
if ($result -eq 4) {
Write-Host "✓ Math test passed"
exit 0
} else {
Write-Host "✗ Math test failed"
exit 1
}

Batch Tests (.tst.bat, .tst.cmd) - Windows
Windows batch script tests.
@echo off
REM test_example.tst.bat
echo Running batch test...
set /a result=2+2
if %result% == 4 (
echo Test passed
exit /b 0
) else (
echo Test failed
exit /b 1
)

C Tests (.tst.c)
C programs that are compiled automatically before execution. Include testme.h for built-in testing utilities, or use standard assertions and exit codes.
// test_math.tst.c
#include "testme.h"
int add(int a, int b) {
return a + b;
}
int main() {
tinfo("Running C math tests...\n");
// Test basic arithmetic
teq(add(2, 3), 5, "Addition test");
tneq(add(2, 3), 6, "Addition inequality test");
ttrue(add(5, 0) == 5, "Identity test");
// Test environment variable access
const char *binPath = tget("BIN", "/default/bin");
ttrue(binPath != NULL, "BIN environment variable available");
// Check if running in verbose mode
if (thas("TESTME_VERBOSE")) {
tinfo("Verbose mode enabled\n");
}
return 0;
}

C Testing Functions (testme.h)
Equality Tests:
- `teqi(a, b, msg)` - Assert two int values are equal
- `teql(a, b, msg)` - Assert two long values are equal
- `teqll(a, b, msg)` - Assert two long long values are equal
- `teqz(a, b, msg)` - Assert two size_t/ssize values are equal
- `tequ(a, b, msg)` - Assert two unsigned int values are equal
- `teqp(a, b, msg)` - Assert two pointer values are equal
- `tmatch(str, pattern, msg)` - Assert string matches exactly

Inequality Tests:
- `tneqi(a, b, msg)` - Assert two int values are not equal
- `tneql(a, b, msg)` - Assert two long values are not equal
- `tneqll(a, b, msg)` - Assert two long long values are not equal
- `tneqz(a, b, msg)` - Assert two size_t/ssize values are not equal
- `tnequ(a, b, msg)` - Assert two unsigned int values are not equal
- `tneqp(a, b, msg)` - Assert two pointer values are not equal

Comparison Tests (Greater Than):
- `tgti(a, b, msg)` - Assert a > b (int)
- `tgtl(a, b, msg)` - Assert a > b (long)
- `tgtll(a, b, msg)` - Assert a > b (long long)
- `tgtz(a, b, msg)` - Assert a > b (size_t/ssize)
- `tgtei(a, b, msg)` - Assert a >= b (int)
- `tgtel(a, b, msg)` - Assert a >= b (long)
- `tgtell(a, b, msg)` - Assert a >= b (long long)
- `tgtez(a, b, msg)` - Assert a >= b (size_t/ssize)

Comparison Tests (Less Than):
- `tlti(a, b, msg)` - Assert a < b (int)
- `tltl(a, b, msg)` - Assert a < b (long)
- `tltll(a, b, msg)` - Assert a < b (long long)
- `tltz(a, b, msg)` - Assert a < b (size_t/ssize)
- `tltei(a, b, msg)` - Assert a <= b (int)
- `tltel(a, b, msg)` - Assert a <= b (long)
- `tltell(a, b, msg)` - Assert a <= b (long long)
- `tltez(a, b, msg)` - Assert a <= b (size_t/ssize)

Boolean and String Tests:
- `ttrue(expr, msg)` - Assert expression is true
- `tfalse(expr, msg)` - Assert expression is false
- `tcontains(str, substr, msg)` - Assert string contains substring
- `tnull(ptr, msg)` - Assert pointer is NULL
- `tnotnull(ptr, msg)` - Assert pointer is not NULL

Control Functions:
- `tfail(msg)` - Unconditionally fail test with message

Environment Functions:
- `tget(key, default)` - Get environment variable with default
- `tgeti(key, default)` - Get environment variable as integer
- `thas(key)` - Check if environment variable exists
- `tdepth()` - Get current test depth

Output Functions:
- `tinfo(fmt, ...)` - Print informational message (with auto-newline)
- `tdebug(fmt, ...)` - Print debug message (with auto-newline)
- `tskip(fmt, ...)` - Print skip message (with auto-newline)
- `twrite(fmt, ...)` - Print output message (with auto-newline)

Legacy Functions (deprecated):
- `teq(a, b, msg)` - Use `teqi()` instead
- `tneq(a, b, msg)` - Use `tneqi()` instead
- `tassert(expr, msg)` - Use `ttrue()` instead
All test macros support optional printf-style format strings and arguments for custom messages.
JavaScript Tests (.tst.js)
JavaScript tests are executed with the Bun runtime. Import the testme module for built-in testing utilities, or use standard assertions.
Note: TestMe automatically installs and links the testme module when running JS tests if not already linked.
// test_array.tst.js
import {teq, tneq, ttrue, tinfo, tget, thas} from 'testme'
tinfo('Running JavaScript tests...')
const arr = [1, 2, 3]
const sum = arr.reduce((a, b) => a + b, 0)
// Test using testme utilities
teq(sum, 6, 'Array sum test')
tneq(sum, 0, 'Array sum is not zero')
ttrue(arr.length === 3, 'Array has correct length')
// Test environment variable access
const binPath = tget('BIN', '/default/bin')
ttrue(binPath !== null, 'BIN environment variable available')
// Check if running in verbose mode
if (thas('TESTME_VERBOSE')) {
tinfo('Verbose mode enabled')
}

JavaScript Testing Functions (testme module)
Traditional API:
- `teq(received, expected, msg)` - Assert two values are equal
- `tneq(received, expected, msg)` - Assert two values are not equal
- `ttrue(expr, msg)` - Assert expression is true
- `tfalse(expr, msg)` - Assert expression is false
- `tmatch(str, pattern, msg)` - Assert string matches regex pattern
- `tcontains(str, substr, msg)` - Assert string contains substring
- `tfail(msg)` - Fail test with message
- `tget(key, default)` - Get environment variable with default
- `thas(key)` - Check if environment variable exists (as number)
- `tdepth()` - Get current test depth
- `tverbose()` - Check if verbose mode is enabled
- `tinfo(...)`, `tdebug(...)` - Print informational messages
- `tassert(expr, msg)` - Alias for `ttrue`
Jest/Vitest-Compatible API:
TestMe supports a Jest/Vitest-compatible expect() API alongside the traditional t* functions. This allows developers familiar with modern JavaScript testing frameworks to use their preferred syntax:
import {expect} from 'testme'
// Basic assertions
expect(1 + 1).toBe(2) // Strict equality (===)
expect({a: 1}).toEqual({a: 1}) // Deep equality
expect('hello').toContain('ell') // String/array contains
expect([1, 2, 3]).toHaveLength(3) // Length check
// Negation with .not
expect(5).not.toBe(10)
expect('test').not.toContain('xyz')
// Truthiness
expect(true).toBeTruthy()
expect(0).toBeFalsy()
expect(null).toBeNull()
expect(undefined).toBeUndefined()
expect('value').toBeDefined()
// Type checking
expect(new Date()).toBeInstanceOf(Date)
expect('hello').toBeTypeOf('string')
// Numeric comparisons
expect(10).toBeGreaterThan(5)
expect(10).toBeGreaterThanOrEqual(10)
expect(5).toBeLessThan(10)
expect(5).toBeLessThanOrEqual(5)
expect(0.1 + 0.2).toBeCloseTo(0.3) // Floating point comparison
// String/Regex matching
expect('hello world').toMatch(/world/)
expect('[email protected]').toMatch(/^[\w.]+@[\w.]+$/)
// Object matchers
expect({name: 'Alice', age: 30}).toHaveProperty('name', 'Alice')
expect({a: 1, b: 2, c: 3}).toMatchObject({a: 1, b: 2})
expect([{id: 1}, {id: 2}]).toContainEqual({id: 1})
// Error handling
expect(() => {
throw new Error('fail')
}).toThrow('fail')
expect(() => JSON.parse('invalid')).toThrowError(SyntaxError)
// Async/Promise support
await expect(Promise.resolve(42)).resolves.toBe(42)
await expect(Promise.reject(new Error('fail'))).rejects.toThrow()
await expect(fetchData()).resolves.toHaveProperty('status', 'ok')

Available Matchers:
- Equality: `toBe()`, `toEqual()`, `toStrictEqual()`
- Truthiness: `toBeTruthy()`, `toBeFalsy()`, `toBeNull()`, `toBeUndefined()`, `toBeDefined()`, `toBeNaN()`
- Type Checking: `toBeInstanceOf()`, `toBeTypeOf()`
- Numeric: `toBeGreaterThan()`, `toBeGreaterThanOrEqual()`, `toBeLessThan()`, `toBeLessThanOrEqual()`, `toBeCloseTo()`
- Strings/Collections: `toMatch()`, `toContain()`, `toContainEqual()`, `toHaveLength()`
- Objects: `toHaveProperty()`, `toMatchObject()`
- Errors: `toThrow()`, `toThrowError()`
- Modifiers: `.not` (negation), `.resolves` (promise resolution), `.rejects` (promise rejection)
Choosing Between APIs:
- Use the `expect()` API if you're familiar with Jest/Vitest or prefer expressive, chainable assertions
- Use the `t*` functions if you prefer traditional assertion functions or are writing C-style tests
Both APIs are fully supported and can be mixed in the same project. See doc/JEST_API.md for complete API documentation and migration guide.
TypeScript Tests (.tst.ts)
TypeScript tests are executed with the Bun runtime (includes automatic transpilation). Import the testme module for built-in testing utilities.
Note: TestMe automatically installs and links the testme module when running TS tests if not already linked.
Traditional API:
// test_types.tst.ts
import {teq, ttrue, tinfo, tget} from 'testme'
tinfo('Running TypeScript tests...')
interface User {
name: string
age: number
}
const user: User = {name: 'John', age: 30}
// Test using testme utilities with TypeScript types
teq(user.name, 'John', 'User name test')
teq(user.age, 30, 'User age test')
ttrue(typeof user.name === 'string', 'Name is string type')
ttrue(typeof user.age === 'number', 'Age is number type')
// Test environment variable access with types
const binPath: string | null = tget('BIN', '/default/bin')
ttrue(binPath !== null, 'BIN environment variable available')

Jest/Vitest API (with full TypeScript type inference):
// test_api.tst.ts
import {expect} from 'testme'
interface ApiResponse {
status: 'ok' | 'error'
data?: unknown
error?: string
}
const response: ApiResponse = {
status: 'ok',
data: {users: [{id: 1, name: 'Alice'}]},
}
// Type-safe assertions with IntelliSense support
expect(response.status).toBe('ok')
expect(response).toHaveProperty('data')
expect(response.data).toBeDefined()
expect(response).not.toHaveProperty('error')
// Works seamlessly with async/await
async function fetchUser(id: number): Promise<{name: string; age: number}> {
return {name: 'Alice', age: 30}
}
await expect(fetchUser(1)).resolves.toMatchObject({name: 'Alice'})

Test Organization with describe() and test():
TestMe supports organizing tests using describe() blocks and test() functions, compatible with Jest/Vitest workflows:
// test_calculator.tst.ts
import {describe, test, it, expect, beforeEach, afterEach} from 'testme'
await describe('Calculator operations', async () => {
let calculator
beforeEach(() => {
calculator = {value: 0}
})
afterEach(() => {
calculator = null
})
test('starts with zero', () => {
expect(calculator.value).toBe(0)
})
it('it() is an alias for test()', () => {
expect(true).toBeTruthy()
})
test('async operations work', async () => {
await new Promise((resolve) => setTimeout(resolve, 10))
expect(calculator.value).toBe(0)
})
await describe('addition', () => {
test('adds positive numbers', () => {
calculator.value = 2 + 3
expect(calculator.value).toBe(5)
})
test('adds negative numbers', () => {
calculator.value = -2 + -3
expect(calculator.value).toBe(-5)
})
})
})

Key Features:
- Top-level `describe()` blocks must be awaited
- Nested `describe()` blocks must be awaited within async describe functions
- `test()` functions execute sequentially within a describe block
- `beforeEach()` and `afterEach()` hooks run before/after each test in the current describe scope
- Hooks are scoped to their describe block and restored when the block exits
- When `expect()` is used inside `test()`, failures throw errors caught by the test runner
- When `expect()` is used outside `test()`, failures exit immediately (backward compatible)
Note: TypeScript tests support both the traditional t* functions and the Jest/Vitest expect() API with describe()/test() structure. Both run on the Bun runtime with full TypeScript type checking and IntelliSense support.
Python Tests (.tst.py)
Python tests are executed with the Python runtime. Exit code 0 indicates success.
#!/usr/bin/env python3
# test_example.tst.py
import sys

def test_math():
    result = 2 + 2
    assert result == 4, "Math test failed"
    print("✓ Math test passed")

if __name__ == "__main__":
    try:
        test_math()
        sys.exit(0)  # Success
    except AssertionError as e:
        print(f"✗ {e}")
        sys.exit(1)  # Failure

Go Tests (.tst.go)
Go programs that are compiled and executed automatically. Exit code 0 indicates success.
// test_math.tst.go
package main
import (
"fmt"
"os"
)
func add(a, b int) int {
return a + b
}
func main() {
// Test basic arithmetic
if add(2, 3) != 5 {
fmt.Println("✗ Addition test failed")
os.Exit(1)
}
fmt.Println("✓ Addition test passed")
// Test identity
if add(5, 0) != 5 {
fmt.Println("✗ Identity test failed")
os.Exit(1)
}
fmt.Println("✓ Identity test passed")
os.Exit(0)
}

🎯 Usage
Command Syntax
tm [OPTIONS] [PATTERNS...]

Pattern Matching
Filter tests using various pattern types:
- File patterns: `"*.tst.c"`, `"math.tst.js"`
- Base names: `"math"` (matches math.tst.c, math.tst.js, etc.)
- Directory names: `"integration"`, `"unit/api"` (runs all tests in the directory)
- Path patterns: `"**/math*"`, `"test/unit/*.tst.c"`
Command Line Options
All available options sorted alphabetically:
| Option | Description |
| ---------------------- | ---------------------------------------------------------------------------------------------------- |
| --chdir <DIR> | Change to directory before running tests |
| --clean | Remove all .testme artifact directories and exit |
| -c, --config <FILE> | Use specific configuration file |
| --continue | Continue running tests even if some fail; always exit with code 0 |
| -d, --debug | Launch debugger (GDB on Linux, Xcode/LLDB on macOS, VS on Windows) |
| --depth <N> | Run tests with depth requirement ≤ N (default: 0) |
| --duration <COUNT> | Set duration with optional suffix (secs/mins/hrs/hours/days). Exports TESTME_DURATION in seconds |
| -h, --help | Show help message |
| --init | Create testme.json5 configuration file in current directory |
| -i, --iterations <N> | Set iteration count (exports TESTME_ITERATIONS for tests to use internally, does not repeat tests) |
| -k, --keep | Keep .testme artifacts after successful tests (failed tests always keep artifacts) |
| -l, --list | List discovered tests without running them |
| --new <NAME> | Create new test file from template (e.g., --new math.c creates math.tst.c) |
| -n, --no-services | Skip all service commands (skip, prep, setup, cleanup) |
| -p, --profile <NAME> | Set build profile (overrides config and PROFILE environment variable) |
| -q, --quiet | Run silently with no output, only exit codes |
| -s, --show | Display test configuration and environment variables |
| --step | Run tests one at a time with prompts (forces serial mode) |
| -v, --verbose | Enable verbose mode with detailed output (sets TESTME_VERBOSE=1) |
| -V, --version | Show version information |
| -w, --workers <N> | Number of parallel workers (overrides config) |
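As the table notes, `--iterations` and `--duration` do not repeat tests themselves; they only export `TESTME_ITERATIONS` and `TESTME_DURATION`, and the test decides how to use them. A minimal sketch (the file name and loop body are hypothetical):

```shell
#!/bin/bash
# soak.tst.sh - honor TESTME_ITERATIONS if tm exported it, else run once
iterations=${TESTME_ITERATIONS:-1}
for ((i = 1; i <= iterations; i++)); do
    result=$((2 + 2))
    if [ "$result" -ne 4 ]; then
        echo "iteration $i failed"
        exit 1
    fi
done
echo "completed $iterations iteration(s)"
exit 0
```

Running `tm -i 100 soak` would then execute the loop body 100 times within a single test invocation.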
Usage Examples
# Basic usage
tm # Run all tests
tm --list # List tests without running
tm "*.tst.c" # Run only C tests
# Pattern filtering
tm integration # Run all tests in integration/ directory
tm test/unit # Run tests in test/unit/ directory
tm "math*" # Run tests starting with 'math'
tm "**/api*" # Run tests with 'api' in path
# Advanced options
tm -v integration # Verbose output for integration tests
tm --depth 2 # Run tests requiring depth ≤ 2
tm --debug math.tst.c # Debug specific C test
tm -s "*.tst.c" # Show test configuration and environment
tm --keep "*.tst.c" # Keep build artifacts
tm --no-services # Skip service commands (run services externally)
# Configuration
tm -c custom.json5 # Use custom config
tm --chdir /path/to/tests # Change directory first

Working Directory Behavior
All tests execute with their working directory set to the directory containing the test file, allowing reliable access to relative files and resources.
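For instance, a shell test can load fixtures with plain relative paths. This sketch creates its own fixture so it is self-contained; in a real suite the file (here the hypothetical `data/expected.txt`) would be checked in next to the test:

```shell
#!/bin/bash
# fixtures.tst.sh - relies on TestMe setting the working directory
# to the directory that contains this test file
set -e
mkdir -p data
echo "42" > data/expected.txt        # stand-in for a checked-in fixture
expected=$(cat data/expected.txt)    # relative path resolves next to the test
if [ "$expected" = "42" ]; then
    echo "fixture test passed"
    exit 0
else
    echo "fixture test failed"
    exit 1
fi
```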
⚙️ Configuration
Configuration File (testme.json5)
TestMe supports hierarchical configuration using nested testme.json5 files throughout your project structure. Each test file gets its own configuration by walking up from the test file's directory to find the nearest configuration file.
Configuration Discovery Priority (highest to lowest):
- CLI arguments
- Test-specific
testme.json5(nearest to test file) - Project
testme.json5(walking up directory tree) - Built-in defaults
This enables:
- Project-wide defaults at the repository root
- Module-specific overrides in subdirectories
- Test-specific configuration closest to individual tests
- Automatic merging with CLI arguments preserved
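A hypothetical layout illustrating the lookup, where each test uses the nearest testme.json5 found by walking up from its own directory:

```
project/
├── testme.json5            # project-wide defaults
└── modules/
    └── net/
        ├── testme.json5    # module-specific overrides
        ├── socket.tst.c    # uses modules/net/testme.json5
        └── deep/
            └── dns.tst.c   # walks up and also finds modules/net/testme.json5
```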
{
enable: true,
depth: 0,
compiler: {
c: {
// Compiler selection: 'default' (auto-detect), string (e.g., 'gcc'), or platform map
compiler: {
windows: 'msvc',
macosx: 'clang',
linux: 'gcc',
},
gcc: {
flags: ['-I${../include}'],
libraries: ['m', 'pthread'],
},
clang: {
flags: ['-I${../include}'],
libraries: ['m', 'pthread'],
},
msvc: {
flags: ['/I${../include}'],
libraries: [],
},
},
es: {
require: 'testme',
},
},
execution: {
timeout: 30,
parallel: true,
workers: 4,
},
output: {
verbose: false,
format: 'simple',
colors: true,
},
patterns: {
// Base patterns for all platforms
include: ['**/*.tst.c', '**/*.tst.js', '**/*.tst.ts'],
// Platform-specific additions (merged with base patterns)
windows: {
include: ['**/*.tst.ps1', '**/*.tst.bat', '**/*.tst.cmd'],
},
macosx: {
include: ['**/*.tst.sh'],
},
linux: {
include: ['**/*.tst.sh'],
},
},
services: {
skip: './check-requirements.sh',
prep: 'make build',
setup: 'docker-compose up -d',
cleanup: 'docker-compose down',
skipTimeout: 30,
prepTimeout: 30,
setupTimeout: 30,
cleanupTimeout: 10,
delay: 3,
},
env: {
// Common environment variables for all platforms
TEST_MODE: 'integration',
// Platform-specific environment variables (merged with base)
windows: {
PATH: '${../build/*/bin};%PATH%',
},
linux: {
LD_LIBRARY_PATH: '${../build/*/bin}:$LD_LIBRARY_PATH',
},
macosx: {
DYLD_LIBRARY_PATH: '${../build/*/bin}:$DYLD_LIBRARY_PATH',
},
},
}

Configuration Options
Test Control Settings
- `enable` - Enable or disable tests in this directory (default: true)
- `depth` - Minimum depth required to run tests (default: 0; requires `--depth N` to run)
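For example, a directory of slow integration tests might be gated behind a depth requirement (the values here are hypothetical):

```json5
{
    enable: true,   // set to false to disable all tests in this directory
    depth: 2,       // run these tests only with tm --depth 2 (or higher)
}
```

With this configuration, a plain `tm` run (depth 0) skips the directory.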
Compiler Settings
C Compiler Configuration
TestMe automatically detects and configures the appropriate C compiler for your platform:
- Windows: MSVC (Visual Studio), MinGW, or Clang
- macOS: Clang or GCC
- Linux: GCC or Clang
Default Flags (automatically applied):
- GCC/Clang: `-Wall -Wextra -Wno-unused-parameter -Wno-strict-prototypes -O0 -g -I. -I~/.local/include -L~/.local/lib` (plus `-I/opt/homebrew/include -L/opt/homebrew/lib` on macOS)
- MSVC: `/std:c11 /W4 /Od /Zi /nologo`
Note: No -std= flag is specified by default for GCC/Clang, allowing the compiler to use its default standard (typically gnu17 or gnu11) which includes POSIX extensions like strdup(). This makes test code more permissive and easier to write. You can specify a specific standard in your testme.json5 if needed (e.g., -std=c99, -std=c11).
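If a strict standard is required, it can be supplied through the compiler flag settings described below (a sketch; the flag choice is illustrative):

```json5
{
    compiler: {
        c: {
            gcc: {
                flags: ['-std=c11'],   // merged with the default GCC flags listed above
            },
        },
    },
}
```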
Configuration Options:
- `compiler.c.compiler` - C compiler selection (optional; use 'default' to auto-detect, or specify 'gcc', 'clang', or a full path)
- `compiler.c.gcc.flags` / `compiler.c.gcc.libraries` - GCC-specific flags (merged with GCC defaults) and libraries (e.g., `['m', 'pthread']`)
- `compiler.c.clang.flags` / `compiler.c.clang.libraries` - Clang-specific flags (merged with Clang defaults) and libraries
- `compiler.c.msvc.flags` / `compiler.c.msvc.libraries` - MSVC-specific flags (merged with MSVC defaults) and libraries
- `compiler.c.<compiler>.<platform>.flags` / `...libraries` - Additional platform-specific flags and libraries, where `<platform>` is `windows`, `macosx`, or `linux` (e.g., `compiler.c.gcc.linux.flags`, `compiler.c.clang.macosx.libraries`)
Note: Platform-specific settings (windows, macosx, linux) are additive - they are appended to the base compiler settings, allowing you to specify common settings once and add platform-specific flags/libraries only where needed.
Variable Expansion:
Environment variables in compiler flags and paths support ${...} expansion:
- `${PLATFORM}` - Current platform (e.g., macosx-arm64, linux-x64, win-x64)
- `${OS}` - Operating system (macosx, linux, windows)
- `${ARCH}` - CPU architecture (arm64, x64, x86)
- `${PROFILE}` - Build profile (debug, release, dev, prod, etc.)
- `${CC}` - Compiler name (gcc, clang, msvc)
- `${CONFIGDIR}` - Directory containing the testme.json5 file
- `${TESTDIR}` - Relative path from executable to test file directory
- `${pattern}` - Glob patterns (e.g., `${../build/*/bin}` expands to matching paths)
Example (Basic):
{
compiler: {
c: {
// Auto-detect best compiler per platform
compiler: {
windows: 'msvc',
macosx: 'clang',
linux: 'gcc',
},
gcc: {
flags: ['-I${../build/${PLATFORM}-${PROFILE}/inc}', '-L${../build/${PLATFORM}-${PROFILE}/bin}'],
libraries: ['m', 'pthread'],
},
clang: {
flags: ['-I${../build/${PLATFORM}-${PROFILE}/inc}', '-L${../build/${PLATFORM}-${PROFILE}/bin}'],
libraries: ['m', 'pthread'],
},
msvc: {
flags: ['/I${../build/${PLATFORM}-${PROFILE}/inc}', '/LIBPATH:${../build/${PLATFORM}-${PROFILE}/bin}'],
libraries: [],
},
},
},
}

Example (Platform-Specific Settings):
{
compiler: {
c: {
gcc: {
// Common flags for all platforms
flags: ['-I..'],
libraries: ['m', 'pthread'],
// Additional macOS-specific settings
macosx: {
flags: ['-framework', 'IOKit', '-framework', 'CoreFoundation'],
libraries: ['objc'],
},
// Additional Linux-specific settings
linux: {
flags: ['-D_GNU_SOURCE'],
libraries: ['rt', 'dl'],
},
},
},
},
}

Execution Settings
- `execution.timeout` - Test timeout in seconds (default: 30)
- `execution.parallel` - Enable parallel execution (default: true)
- `execution.workers` - Number of parallel workers (default: 4)
Output Settings
- `output.verbose` - Enable verbose output (default: false)
- `output.format` - Output format: "simple", "detailed", or "json" (default: "simple")
- `output.colors` - Enable colored output (default: true)
Pattern Settings
Pattern configuration supports platform-specific patterns that are deep blended with base patterns:
- `patterns.include` - Array of include patterns applied to all platforms
- `patterns.exclude` - Array of exclude patterns applied to all platforms
- `patterns.windows.include` - Additional include patterns for Windows (merged with base)
- `patterns.windows.exclude` - Additional exclude patterns for Windows
- `patterns.macosx.include` - Additional include patterns for macOS (merged with base)
- `patterns.macosx.exclude` - Additional exclude patterns for macOS
- `patterns.linux.include` - Additional include patterns for Linux (merged with base)
- `patterns.linux.exclude` - Additional exclude patterns for Linux
Pattern Merging Behavior:
Platform-specific patterns are added to base patterns, not replaced:
- Start with the base `include` and `exclude` patterns
- On the current platform, add the platform-specific patterns to the base
- The result is the union of the base and platform-specific patterns
Example:
{
patterns: {
// Base patterns for all platforms
include: ['**/*.tst.c', '**/*.tst.js', '**/*.tst.ts'],
exclude: ['**/node_modules/**'],
// Windows-specific additions
windows: {
include: ['**/*.tst.ps1', '**/*.tst.bat'], // Added on Windows only
exclude: ['**/wsl/**'], // Excluded on Windows only
},
// macOS-specific additions
macosx: {
include: ['**/*.tst.sh'], // Shell tests on macOS
},
// Linux-specific additions
linux: {
include: ['**/*.tst.sh'], // Shell tests on Linux
},
},
}

On Windows, the effective patterns would be:
- Include: `**/*.tst.c`, `**/*.tst.js`, `**/*.tst.ts`, `**/*.tst.ps1`, `**/*.tst.bat`
- Exclude: `**/node_modules/**`, `**/wsl/**`
On macOS/Linux, the effective patterns would be:
- Include: `**/*.tst.c`, `**/*.tst.js`, `**/*.tst.ts`, `**/*.tst.sh`
- Exclude: `**/node_modules/**`
Service Settings
Service scripts execute in a specific order to manage test environment lifecycle:
Global Services (run once before/after all test groups):
- `services.globalPrep` - Command run once before any test groups are processed (waits for completion)
  - Uses the shallowest (closest to filesystem root) testme.json5 found from all discovered test directories
  - Walks up from each test directory and selects the config with the fewest directory levels
  - Example: tests in `/project/xxxx/test/unit/` use `/project/xxxx/testme.json5` if it exists (shallowest)
  - Runs with the shallowest configuration's environment
  - Use for project-wide setup: building shared libraries, starting databases, etc.
  - If global prep fails, all test execution is aborted
- `services.globalPrepTimeout` - Global prep timeout in seconds (default: 30)
- `services.globalCleanup` - Command run once after all test groups complete (waits for completion)
  - Uses the same shallowest configuration as globalPrep
  - Runs with the shallowest configuration's environment
  - Use for project-wide teardown: cleaning shared resources, stopping databases, etc.
  - Receives `TESTME_SUCCESS` (1 if all tests passed, 0 otherwise)
  - Errors are logged but do not fail the test run
- `services.globalCleanupTimeout` - Global cleanup timeout in seconds (default: 10)
Per-Group Services (run for each configuration group):
- `services.skip` - Script to check whether tests should run (exit 0 = run, non-zero = skip)
- `services.environment` - Script that emits environment variables as key=value lines
- `services.prep` - Command run once before tests in this group begin (waits for completion)
  - Runs with the configuration group's environment
  - Use for group-specific setup: compiling test fixtures, etc.
- `services.setup` - Command that starts a background service during test execution
- `services.cleanup` - Command run after all tests in this group complete
  - Runs with the configuration group's environment
  - Use for group-specific teardown: cleaning test fixtures, etc.
  - Receives `TESTME_SUCCESS` (1 if all tests in the group passed, 0 otherwise)
- `services.skipTimeout` - Skip script timeout in seconds (default: 30)
- `services.environmentTimeout` - Environment script timeout in seconds (default: 30)
- `services.prepTimeout` - Prep command timeout in seconds (default: 30)
- `services.setupTimeout` - Setup command timeout in seconds (default: 30)
- `services.cleanupTimeout` - Cleanup command timeout in seconds (default: 10)
- `services.setupDelay` - Delay in seconds after setup starts before running tests (default: 1)
  - Replaces the deprecated `services.delay` field
  - Allows services time to initialize before tests begin
  - Note: if `healthCheck` is configured, setupDelay is ignored in favor of active health checking
services.healthCheck- Configuration for actively monitoring service readiness (optional)- When configured, TestMe polls the service instead of using a fixed setupDelay
- Provides faster test execution (starts tests as soon as service is ready)
- More reliable than arbitrary delays (won't start tests before service is ready)
- Supports four health check types:
- HTTP/HTTPS: Checks endpoint status and optional response body
healthCheck: { type: 'http', // Optional: defaults to 'http' url: 'http://localhost:3000/health', expectedStatus: 200, // Optional: defaults to 200 expectedBody: 'OK', // Optional: substring match interval: 100, // Optional: poll interval in ms (default: 100) timeout: 30 // Optional: max wait in seconds (default: 30) } - TCP: Verifies port is accepting connections
healthCheck: { type: 'tcp', host: 'localhost', port: 5432, timeout: 60 } - Script: Executes custom health check command
healthCheck: { type: 'script', command: 'redis-cli ping', expectedExit: 0, // Optional: defaults to 0 timeout: 10 } - File: Checks for existence of ready marker file
healthCheck: { type: 'file', path: '/tmp/daemon.ready', timeout: 30 }
- HTTP/HTTPS: Checks endpoint status and optional response body
- Common settings (all types):
interval- Poll interval in milliseconds (default: 100)timeout- Maximum wait time in seconds (default: 30)
- Example use cases:
- Web servers: HTTP check on
/healthendpoint - Databases: TCP port check (PostgreSQL, MySQL, Redis)
- Message queues: TCP or script-based check
- Custom services: File marker or script validation
- Web servers: HTTP check on
services.shutdownTimeout- Maximum wait time for graceful shutdown before SIGKILL in seconds (default: 5)- After sending SIGTERM (Unix) or graceful taskkill (Windows), polls every 100ms to check if process exited
- If process exits gracefully within shutdownTimeout, SIGKILL is skipped (no unnecessary force-kill)
- If process still running after shutdownTimeout, sends SIGKILL (Unix) or taskkill /F (Windows) to force termination
- Default of 5 seconds allows most services to clean up gracefully without unnecessary waiting
- Set to 0 to immediately force-kill without any wait time
- Useful for services that need time to clean up resources (databases, file handles, network connections)
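Combining these settings, a per-group service block for a local web server might look like the following sketch (the start/stop scripts, port, and endpoint are illustrative):

```json5
{
    services: {
        setup: './start-server.sh',   // hypothetical background service
        healthCheck: {
            type: 'http',
            url: 'http://localhost:3000/health',
            timeout: 30,              // give up if not ready within 30 seconds
        },
        cleanup: './stop-server.sh',
        shutdownTimeout: 5,           // SIGTERM first, SIGKILL after 5 seconds
    },
}
```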
Execution Order:

1. Global Prep (once)
2. For each configuration group:
    a. Skip
    b. Environment
    c. Prep
    d. Setup
    e. Tests execute
    f. Cleanup
3. Global Cleanup (once)

Environment Variables
- `env` - Object defining environment variables to set during test execution
    - Environment variable values support `${...}` expansion using glob patterns
    - Paths are resolved relative to the configuration file's directory
    - Supports platform-specific overrides via `windows`, `macosx`, and `linux` keys
    - Platform-specific variables are merged with base variables (platform values override base)
    - Useful for providing dynamic paths to build artifacts, libraries, and test data

Special Variables Automatically Exported:

TestMe automatically exports these special environment variables to all tests and service scripts (skip, prep, setup, cleanup):

- `TESTME_PLATFORM` - Combined OS and architecture (e.g., `macosx-arm64`, `linux-x64`, `windows-x64`)
- `TESTME_PROFILE` - Build profile from the `--profile` flag, the config `profile` setting, the `PROFILE` env var, or the default `dev`
- `TESTME_OS` - Operating system (`macosx`, `linux`, `windows`)
- `TESTME_ARCH` - Architecture (`x64`, `arm64`, `ia32`)
- `TESTME_CC` - C compiler detected or configured (`gcc`, `clang`, `msvc`)
- `TESTME_TESTDIR` - Relative path from the executable directory to the test file directory
- `TESTME_CONFIGDIR` - Relative path from the executable directory to the configuration file directory
- `TESTME_VERBOSE` - Set to `1` when the `--verbose` flag is used, `0` otherwise
- `TESTME_QUIET` - Set to `1` when the `--quiet` flag is used, `0` otherwise
- `TESTME_KEEP` - Set to `1` when the `--keep` flag is used, `0` otherwise
- `TESTME_DEPTH` - Current depth value from the `--depth` flag
- `TESTME_ITERATIONS` - Iteration count from the `--iterations` flag (defaults to `1`)
    - Note: TestMe does NOT automatically repeat test execution. This variable is provided so tests can implement their own iteration logic internally if needed.
- `TESTME_DURATION` - Duration in seconds from the `--duration` flag (only set if specified). Tests and service scripts can use this value for timing-related operations or test duration control.

These variables are available in all test and service script environments and can be used in shell scripts (e.g., `$TESTME_PLATFORM`), C code (via `getenv("TESTME_PLATFORM")`), or JavaScript/TypeScript (via `process.env.TESTME_PLATFORM`).
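As a sketch, a skip script can branch on these variables. TestMe normally provides `TESTME_OS` itself; the default value and messages below are illustrative so the script runs standalone:

```shell
#!/bin/bash
# Hypothetical skip script built on TestMe's exported variables.
# TESTME_OS is normally set by TestMe; default it for standalone runs.
: "${TESTME_OS:=linux}"

if [ "$TESTME_OS" = "windows" ]; then
    echo "skipping: POSIX-only test"
    status=1    # a non-zero exit tells TestMe to skip this group
else
    status=0    # a zero exit means run the tests
fi
echo "status=$status"
```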
Platform Configuration Formats:

- Simple string values - apply to all platforms
- Platform override sections - legacy format with platform-specific env sections
    - Supports `windows`, `macosx`, `linux`, and `default` sections
    - The `default` section provides fallback values for all platforms
    - Platform-specific sections override default values
- Default/platform pattern - per-variable platform overrides with a default fallback
Examples:

```json5
{
    env: {
        // Simple string values (all platforms)
        TEST_MODE: 'integration',
        BIN: '${../build/*/bin}',

        // Default/platform pattern (per-variable platform-specific values)
        LIB_EXT: {
            default: '.so', // Used if platform not specified
            windows: '.dll',
            macosx: '.dylib',
            linux: '.so',
        },

        // Legacy platform override sections
        // Processed in order: base vars → default → platform-specific
        default: {
            PATH: '${../build/${PLATFORM}-${PROFILE}/bin}:${PATH}',
        },
        windows: {
            PATH: '${../build/${PLATFORM}-${PROFILE}/bin};%PATH%',
        },
        macosx: {
            DYLD_LIBRARY_PATH: '${../build/${PLATFORM}-${PROFILE}/bin}:$DYLD_LIBRARY_PATH',
        },
        linux: {
            LD_LIBRARY_PATH: '${../build/${PLATFORM}-${PROFILE}/bin}:$LD_LIBRARY_PATH',
        },
    },
}
```

Priority Order (later overrides earlier):

1. Base environment variables (simple string values and the default/platform pattern)
2. The `env.default` section (legacy format)
3. The platform-specific section: `env.windows`, `env.macosx`, or `env.linux`

Accessing Environment Variables:

- C tests: `getenv("BIN")`, or use `tget("BIN", default)` from testme.h
- Shell tests: `$BIN` or `${BIN}`
- JavaScript/TypeScript: `process.env.BIN`, or use `tget("BIN", default)` from testme.js
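For instance, a shell test can resolve a build artifact through the `BIN` variable defined above. The fallback path and the `mytool` binary name below are hypothetical; in a real run, TestMe expands the glob from testme.json5 and exports the result:

```shell
#!/bin/bash
# Hypothetical shell test consuming the BIN variable from testme.json5.
# BIN is simulated here; TestMe would expand '${../build/*/bin}' and export it.
BIN="${BIN:-/tmp/build/bin}"

tool="$BIN/mytool"    # 'mytool' is an illustrative binary name
echo "would execute: $tool"
```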
📚 Common Use Cases
Testing a Multi-Platform C Project
```json5
{
    compiler: {
        c: {
            gcc: {
                flags: [
                    '-I${../include}',
                    '-L${../build/${PLATFORM}-${PROFILE}/lib}',
                    '-Wl,-rpath,@executable_path/${CONFIGDIR}/../build/${PLATFORM}-${PROFILE}/lib',
                ],
                libraries: ['mylib', 'm', 'pthread'],
            },
            msvc: {
                flags: ['/I${../include}', '/LIBPATH:${../build/${PLATFORM}-${PROFILE}/lib}'],
                libraries: ['mylib'],
            },
        },
    },
    env: {
        MY_LIB_PATH: '${../build/${PLATFORM}-${PROFILE}/lib}',
    },
}
```

Running Tests with Docker Services
```json5
{
    services: {
        prep: 'docker-compose build',
        setup: 'docker-compose up -d',
        cleanup: 'docker-compose down',
        setupDelay: 5, // Wait 5 seconds for services to start (replaces the deprecated `delay`)
    },
    env: {
        DATABASE_URL: 'postgresql://localhost:5432/testdb',
        REDIS_URL: 'redis://localhost:6379',
    },
}
```

Conditional Test Execution
```json5
{
    services: {
        skip: './check-requirements.sh', // Exit 0 to run, non-zero to skip
    },
}
```

check-requirements.sh:

```bash
#!/bin/bash
# Skip tests if required tools are missing
if ! command -v docker &> /dev/null; then
    echo "Docker not found - skipping integration tests"
    exit 1
fi
exit 0
```

Organizing Tests by Depth
Root testme.json5 (quick unit tests):

```json5
{
    depth: 0,
    execution: {timeout: 5},
}
```

integration/testme.json5 (slow integration tests):

```json5
{
    depth: 1,
    execution: {timeout: 60},
    services: {
        setup: './start-services.sh',
        cleanup: './stop-services.sh',
    },
}
```

Run with `tm --depth 1` to include integration tests, or just `tm` for unit tests only.
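The depth gate can be sketched as a comparison against the exported `TESTME_DEPTH` value, assuming the documented rule that a group runs when the runner's depth meets or exceeds the group's configured depth:

```shell
#!/bin/bash
# Sketch of the depth gate: a group whose config sets `depth: 1` runs only
# when the runner depth (TESTME_DEPTH, from --depth) is at least 1.
TESTME_DEPTH="${TESTME_DEPTH:-0}"   # TestMe exports this; defaulted for standalone runs
required_depth=1                    # from the integration config above

if [ "$TESTME_DEPTH" -ge "$required_depth" ]; then
    verdict="run"
else
    verdict="skip"
fi
echo "verdict=$verdict"
```

With the default depth of 0, the integration group is skipped, matching the `tm` vs `tm --depth 1` behavior described above.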
📁 Artifact Management
TestMe automatically creates `.testme` directories alongside test files for C compilation artifacts:
```
project/tests/
├── math.tst.c
└── .testme/
    ├── math        # Compiled binary
    └── compile.log # Compilation output
```

Automatic Cleanup:

- Artifacts are automatically removed after successful tests
- Failed tests preserve artifacts for debugging
- Empty `.testme` directories are removed automatically

Manual Control:

- `tm --keep` - Preserve artifacts from successful tests
- `tm --clean` - Remove all `.testme` directories and exit
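To illustrate the layout, the following sketch recreates the preserved artifacts of a failed C test in a temporary directory (the paths and log content are simulated, not produced by a real TestMe run):

```shell
#!/bin/bash
# Simulate the .testme artifact layout TestMe preserves for a failed C test.
# The directory is a temporary stand-in, not a real TestMe run.
dir=$(mktemp -d)
mkdir -p "$dir/.testme"
printf 'gcc -o math math.tst.c\n' > "$dir/.testme/compile.log"

# A debugging session would typically start by reading the compile log:
log_first_line=$(head -n 1 "$dir/.testme/compile.log")
echo "$log_first_line"

rm -rf "$dir"
```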
🐛 Debugging Tests
TestMe includes integrated debugging support for all test languages. Use the `--debug` flag to launch tests in debug mode.
C Test Debugging
```bash
# Debug a specific C test
tm --debug math.tst.c

# Configured debuggers (auto-selected by platform):
# - macOS: xcode (default), lldb, gdb, vscode
# - Linux: gdb (default), lldb, vscode
# - Windows: vs (default), vscode
```

Configuration:

```json5
{
    debug: {
        c: 'xcode', // Use a specific debugger
        // Or use a platform map instead:
        // c: {
        //     macosx: 'xcode',
        //     linux: 'gdb',
        //     windows: 'vs',
        // },
    },
}
```

JavaScript/TypeScript Debugging
```bash
# Debug a JavaScript test
tm --debug api.tst.js

# Debug a TypeScript test
tm --debug types.tst.ts
```

VSCode Workflow (default):

Prerequisites:

- Install the Bun VSCode extension

Steps:

1. Run `tm --debug test.tst.js` (or `.tst.ts`)
2. TestMe automatically creates `.vscode/launch.json` in the test directory
3. VSCode opens with the workspace
4. Open your test file and set breakpoints
5. Press F5 or Run > Start Debugging
6. Select the "Debug Bun Test" configuration

Configuration:

```json5
{
    debug: {
        js: 'vscode', // JavaScript debugger (default)
        // js: 'cursor', // Or use the Cursor editor
        ts: 'vscode', // TypeScript debugger (default)
        // ts: 'cursor', // Or use the Cursor editor
    },
}
```

Supported Debuggers:

- `vscode` - Visual Studio Code (default)
- `cursor` - Cursor editor (VSCode fork)
- A custom path to a debugger executable
Python Test Debugging

```bash
# Debug with pdb (default)
tm --debug test.tst.py

# Configure to use VSCode
# Edit testme.json5: debug: { py: 'vscode' }
```

pdb commands:

- `h` - help
- `b <line>` - set breakpoint
- `c` - continue
- `n` - next line
- `s` - step into
- `p <var>` - print variable
- `q` - quit

Configuration:

```json5
{
    debug: {
        py: 'pdb', // Use the Python debugger (default)
        // py: 'vscode', // Or use VSCode
    },
}
```

Go Test Debugging
```bash
# Debug with delve (default)
tm --debug calc.tst.go

# Configure to use VSCode
# Edit testme.json5: debug: { go: 'vscode' }
```

delve commands:

- `help` - show help
- `break <line>` - set breakpoint
- `continue` - continue execution
- `next` - step over
- `step` - step into
- `print <var>` - print variable
- `exit` - exit debugger

Configuration:

```json5
{
    debug: {
        go: 'delve', // Use the Delve debugger (default)
        // go: 'vscode', // Or use VSCode
    },
}
```

Custom Debugger

You can specify a custom debugger executable path:

```json5
{
    debug: {
        c: '/usr/local/bin/my-gdb',
        py: '/opt/debugger/pdb-enhanced',
    },
}
```

🔍 Output Formats
Simple Format (Default)
```
🧪 Test runner starting in: /path/to/project
Running tests...
✓ PASS string.tst.ts (11ms)
✓ PASS array.tst.js (10ms)
✓ PASS hello.tst.sh (4ms)
✓ PASS math.tst.c (204ms)
============================================================
TEST SUMMARY
============================================================
✓ Passed: 4
✗ Failed: 0
! Errors: 0
- Skipped: 0
Total: 4
Duration: 229ms
Result: PASSED
```

Detailed Format
Shows full output from each test including compilation details for C tests.
JSON Format
Machine-readable output for integration with other tools:
```json
{
    "summary": {
        "total": 4,
        "passed": 4,
        "failed": 0,
        "errors": 0,
        "skipped": 0,
        "totalDuration": 229
    },
    "tests": [
        {
            "file": "/path/to/test.tst.js",
            "type": "javascript",
            "status": "passed",
            "duration": 10,
            "exitCode": 0
        }
    ]
}
```

🧪 Development

Building

```bash
bun run build
```

Running Tests

```bash
bun test
```

Development Mode

```bash
bun --hot src/index.ts
```

Code Style
- 4-space indentation
- TypeScript with strict mode
- ESLint and Prettier configured
- Comprehensive JSDoc comments
💡 Tips and Best Practices
Writing Effective Tests
- Use descriptive names: `user_authentication.tst.ts`, not `test1.tst.ts`
- Keep tests focused: one concept per test file
- Exit with proper codes: 0 for success, non-zero for failure
- Document the test's purpose: add comments explaining what is being validated
Performance Optimization
- Use parallel execution for independent tests (the default behavior)
- Adjust the worker count to your system resources: `tm -w 8`
- Clean artifacts regularly: `tm --clean`
Troubleshooting
| Problem               | Solution                                                                                                                                                                      |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Tests not discovered  | Check file extensions (`.tst.sh`, `.tst.c`, `.tst.js`, `.tst.ts`)<br>Use `--list` to see what's found<br>Check for `enable: false` in the config<br>Verify depth requirements |
| C compilation failing | Check that GCC/Clang is in PATH<br>Review `.testme/compile.log`<br>Use `-s` to see compile commands                                                                            |
| Permission errors     | Make shell scripts executable: `chmod +x *.tst.sh`<br>Check directory permissions                                                                                              |
| Tests skipped         | Check the skip script's exit code<br>Verify `--depth` is sufficient<br>Run with `-v` for details                                                                               |
Windows-Specific Issues
| Problem                     | Solution                                                                                                              |
| --------------------------- | --------------------------------------------------------------------------------------------------------------------- |
| `tm` not recognized         | Add `%USERPROFILE%\.local\bin` to PATH<br>Or run from the installation directory                                      |
| PowerShell execution policy | TestMe uses `-ExecutionPolicy Bypass` automatically<br>Or set: `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser`  |
| Compiler not found          | Run from the "Developer Command Prompt for VS 2022"<br>Or add the compiler to PATH manually                           |
| MSVC vs GCC flags           | Check the testme.json5 compiler config<br>MSVC: `/W4 /std:c11`<br>GCC/MinGW: `-Wall -Wextra` (no `-std` by default)   |
| Shell tests fail on Windows | Install Git for Windows (includes bash)<br>Or convert to `.tst.ps1` or `.tst.bat`                                     |
📦 Publishing
If you're a package maintainer or want to contribute TestMe to additional repositories:
- Publishing Guide - Complete step-by-step instructions for publishing to all package repositories
- Quick Reference - One-page reference for package publishing
- Installation Packages - Package configurations and build instructions
For detailed documentation, see `man tm` or the Design Documentation.
