# YAMLTest
Declarative YAML-based test runner for HTTP endpoints, shell commands, and Kubernetes resources.
Define tests in YAML, run them from the CLI or import them programmatically. No test framework boilerplate required.
## Installation
From npm (once published):

```bash
npm install -g yamltest
```

From a local clone (before publishing):

```bash
# Install globally from the repo directory
npm install -g /path/to/YAMLTest

# Or run directly without installing
node /path/to/YAMLTest/src/cli.js -f your-tests.yaml

# Or link it for development (makes YAMLTest available system-wide, auto-updates)
cd /path/to/YAMLTest && npm link
```

As a dev dependency in another project:

```bash
npm install --save-dev /path/to/YAMLTest
```

## Quick start
```bash
YAMLTest -f - <<EOF
- name: httpbin returns 200
  http:
    url: "https://httpbin.org"
    method: GET
    path: "/get"
  source:
    type: local
  expect:
    statusCode: 200
    bodyContains: "httpbin.org"
EOF
```

Output:

```
✓ httpbin returns 200 312ms
1 passed | 1 total
```

## CLI
```
USAGE
  YAMLTest -f <file.yaml>
  YAMLTest -f -            # read from stdin (heredoc)

OPTIONS
  -f, --file <path|->      YAML file to run, or - for stdin
  -h, --help               Show this help

ENVIRONMENT
  DEBUG_MODE=true          Enable verbose debug logging
  NO_COLOR=1               Disable ANSI colour output
```

Exit codes: 0 = all passed, 1 = one or more failed.
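Because the exit code is the only signal a script needs, a run is easy to gate in pipelines. A minimal sketch, with the hypothetical `run_tests` function standing in for a real `YAMLTest -f smoke.yaml` invocation:

```bash
# Stand-in for a YAMLTest run: swap in `YAMLTest -f smoke.yaml`.
# Exit code 0 means every test passed; 1 means at least one failed.
run_tests() { return 0; }

if run_tests; then
  echo "tests passed - safe to deploy"
else
  echo "tests failed - aborting" >&2
  exit 1
fi
```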
## Input validation
All test definitions are validated against a strict schema before any test executes. Invalid input produces clear error messages pointing to the exact location of the problem:
```
Validation failed:
  Test #1 ("login") /http/method: must be one of: GET, POST, PUT, DELETE, PATCH
  Test #3: must define exactly one of: http, command, wait, httpBodyComparison
  Test #5 ("check pods") /source: missing required property "selector"
```

Validation checks include:
- Exactly one test type per definition (`http`, `command`, `wait`, or `httpBodyComparison`)
- Required fields per test type (e.g., `command.command`, `wait.target`)
- Value types and allowed enums (e.g., `source.type` must be `local` or `pod`)
- Conditional requirements (e.g., `source.type: pod` requires `selector`; `setVars` requires `expect` for HTTP/command tests)
- Unknown property detection on sub-objects (catches typos like `htpp` instead of `http`)
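For instance, a definition with a misspelled test-type key (hypothetical example) is rejected before anything runs:

```yaml
# "htpp" is a typo for "http" - validation flags the unknown
# property and no test executes
- name: broken test
  htpp:
    url: "http://example.com"
  source:
    type: local
  expect:
    statusCode: 200
```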
## Programmatic API
```js
const { runTests, executeTest, validateTestDefinitions } = require('yamltest');

// Run one or more tests from a YAML string (array or single object)
const result = await runTests(yamlString);
console.log(result.passed, result.failed, result.skipped, result.total);
// result.results → [{name, passed, error, durationMs, attempts}]

// Run a single test (low-level)
await executeTest(yamlString); // returns true or throws

// Validate test definitions without executing them
const yaml = require('js-yaml');
const definitions = Array.isArray(yaml.load(yamlString))
  ? yaml.load(yamlString)
  : [yaml.load(yamlString)];
validateTestDefinitions(definitions); // throws on validation errors
```

## Test format
Tests are defined as flat YAML objects (or an array of them). All fields except the test type key (`http`, `command`, `wait`, `httpBodyComparison`) are optional.
```yaml
- name: my-test        # optional display name
  retries: 3           # retry up to N times on failure (default: 0)
  http: ...            # ← test type
  source:
    type: local        # local | pod
  expect:
    statusCode: 200
```

## Test types
### HTTP test
Test any HTTP endpoint locally or from within a Kubernetes pod.
```yaml
- name: health check
  http:
    url: "https://api.example.com"  # required (or auto-discovered for Service selectors)
    method: GET                     # GET (default) | POST | PUT | DELETE | PATCH
    path: "/health"                 # default: /
    headers:
      Authorization: "Bearer ${API_TOKEN}"
      Content-Type: application/json
    params:
      key: value                    # query parameters
    body: '{"foo":"bar"}'           # request body (string or object)
    skipSslVerification: true       # disable TLS verification
    maxRedirects: 0                 # redirects to follow (default: 0)
    cert: /path/to/cert.pem         # mTLS client certificate
    key: /path/to/key.pem
    ca: /path/to/ca.pem
  source:
    type: local
  expect:
    statusCode: 200                 # or [200, 201, 202]
    body: "exact body match"
    bodyContains: "substring"       # or array, or {value, negate, matchword}
    bodyRegex: "pattern.*"          # or {value, negate}
    bodyJsonPath:
      - path: "$.user.id"
        comparator: equals
        value: 42
    headers:
      - name: content-type
        comparator: contains
        value: application/json
```

#### Environment variable substitution
Any `$VAR` or `${VAR}` in the `url` field is resolved from the environment:

```yaml
http:
  url: "${API_BASE_URL}"
```

#### Pod-based HTTP test
Execute the HTTP request from inside a Kubernetes pod (useful for internal service testing):
```yaml
source:
  type: pod
  selector:
    kind: Pod
    metadata:
      namespace: production
      labels:
        app: test-client
  context: my-cluster      # optional kubectl context
  container: my-container  # optional
  usePortForward: true     # use kubectl port-forward instead of debug pod
  usePodExec: true         # use kubectl exec + curl
```

#### Auto-discovery for Kubernetes Services
Omit `http.url` when `source.selector.kind` is `Service` and the IP/port are discovered automatically from the LoadBalancer status:
```yaml
- name: auto-discover service
  http:
    method: GET
    path: /health
    scheme: https    # optional, defaults to http
    port: 443        # optional port name/number/index
  source:
    type: local
    selector:
      kind: Service
      metadata:
        namespace: production
        name: my-service
  expect:
    statusCode: 200
```

### Command test
Run any shell command and validate its output.
```yaml
- name: check kubectl version
  command:
    command: "kubectl version --short"
    parseJson: false       # parse stdout as JSON (default: false)
    env:
      MY_VAR: value        # extra environment variables
    workingDir: /tmp       # working directory
  source:
    type: local            # or pod (uses kubectl exec)
  expect:
    exitCode: 0
    stdout:
      contains: "Client Version"   # or: equals, matches/regex, negate
    stderr:
      contains: ""
```

Multiple stdout expectations (all must pass):
```yaml
expect:
  exitCode: 0
  stdout:
    - contains: "Running"
    - matches: "\\d+ pods"
```

JSON output validation:
```yaml
- name: cluster info JSON
  command:
    command: "kubectl cluster-info --output=json"
    parseJson: true
  source:
    type: local
  expect:
    exitCode: 0
    jsonPath:
      - path: "$.Kubernetes"
        comparator: exists
```

### Wait test
Poll a Kubernetes resource until a condition is met (or timeout).
```yaml
- name: wait for deployment
  wait:
    target:
      kind: Deployment
      metadata:
        namespace: default
        name: my-app
    context: my-cluster    # optional
    jsonPath: "$.status.readyReplicas"
    jsonPathExpectation:
      comparator: greaterThan
      value: 0
    polling:
      timeoutSeconds: 120  # default: 60
      intervalSeconds: 5   # default: 2
      maxRetries: 24       # optional upper bound
  setVars:                 # optional: capture the extracted value
    READY_REPLICAS:
      value: true
```

To use the captured value in subsequent tests, see the `setVars` section below.
Selector by labels:
```yaml
wait:
  target:
    kind: Pod
    metadata:
      namespace: production
      labels:
        app: web-server
        version: v1.2.0
  jsonPath: "$.status.phase"
  jsonPathExpectation:
    comparator: equals
    value: Running
```

### HTTP body comparison test
Compare the response bodies of two HTTP calls and assert they are identical (useful for canary / shadow traffic validation).
```yaml
- name: compare two backends
  httpBodyComparison:
    request1:
      http:
        url: "http://service-v1"
        method: GET
        path: /api/data
      source:
        type: local
    request2:
      http:
        url: "http://service-v2"
        method: GET
        path: /api/data
      source:
        type: local
    parseAsJson: true    # parse bodies as JSON before comparing
    delaySeconds: 1      # wait between requests
    removeJsonPaths:     # ignore dynamic fields
      - "$.timestamp"
      - "$.requestId"
```

## Expectation operators
All comparators can be negated with `negate: true`.
| Comparator | Description | Types |
|---------------|--------------------------------------|----------------|
| equals | Deep equality | any |
| contains | Substring / JSON-stringified search | string, object |
| matches | Regular expression test | string |
| exists | Value is not null/undefined | any |
| greaterThan | Numeric > | number |
| lessThan | Numeric < | number |
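For example, several comparators can be combined in a single HTTP expectation (a hypothetical endpoint, using the `bodyJsonPath` syntax shown in the HTTP test section):

```yaml
expect:
  statusCode: 200
  bodyJsonPath:
    - path: "$.items"
      comparator: exists
    - path: "$.count"
      comparator: greaterThan
      value: 0
    - path: "$.status"
      comparator: matches
      value: "ok|ready"
```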
### Negation example
```yaml
expect:
  bodyContains:
    value: "error"
    negate: true     # assert the body does NOT contain "error"
```

### Word-boundary match
```yaml
expect:
  bodyContains:
    value: "ok"
    matchword: true  # uses \bok\b regex (whole word only)
```

## Advanced features
### Retry on failure
```yaml
- name: flaky service
  retries: 5       # retry up to 5 times, 500ms between attempts
  http:
    url: "http://flaky-service"
    method: GET
    path: /api
  source:
    type: local
  expect:
    statusCode: 200
```

### Multiple tests in one file
Tests run sequentially and stop at the first failure (fail-fast).
```yaml
- name: first test
  http: { url: "http://svc", method: GET, path: /ready }
  source: { type: local }
  expect: { statusCode: 200 }

- name: second test
  command: { command: "kubectl get pods -n default" }
  source: { type: local }
  expect: { exitCode: 0, stdout: { contains: "Running" } }
```

### Environment variables in URL
```yaml
http:
  url: "$API_BASE_URL"     # $VAR or ${VAR}
  headers:
    Authorization: "Bearer ${API_TOKEN}"
```

## setVars — variable passing between steps
Extract values from a test response and store them for use in subsequent steps via `${VAR_NAME}` syntax. `setVars` requires `expect` to be present on the test — variables are only captured after all assertions pass.
### HTTP extraction sources
```yaml
- name: login
  http:
    url: "http://api.example.com"
    method: POST
    path: /login
    body: '{"user":"admin","pass":"secret"}'
  source:
    type: local
  expect:
    statusCode: 200
  setVars:
    AUTH_TOKEN:
      jsonPath: "$.token"       # extract from JSON body via JSONPath
    SESSION_ID:
      header: "x-session-id"    # extract a response header (case-insensitive)
    STATUS_CODE:
      statusCode: true          # capture the HTTP status code
    RAW_BODY:
      body: true                # capture the full response body
    CSRF_TOKEN:
      regex:                    # extract via regex capture group from body
        pattern: 'name="csrf" value="([^"]+)"'
        group: 1                # capture group index (default: 1)
```

### Command extraction sources
```yaml
- name: read config
  command:
    command: "cat config.json"
    parseJson: true                # required for jsonPath extraction
  source:
    type: local
  expect:
    exitCode: 0
  setVars:
    DB_HOST:
      jsonPath: "$.database.host"  # extract from parsed JSON stdout
    FULL_OUTPUT:
      stdout: true                 # capture full stdout
    ERR_OUTPUT:
      stderr: true                 # capture full stderr
    EXIT:
      exitCode: true               # capture exit code
    PID:
      regex:                       # extract via regex from stdout or stderr
        source: stdout             # "stdout" (default) or "stderr"
        pattern: "PID (\\d+)"
        group: 1
```

### Wait extraction source
```yaml
- name: wait for deployment
  wait:
    target:
      kind: Deployment
      metadata:
        namespace: default
        name: my-app
    jsonPath: "$.status.readyReplicas"
    jsonPathExpectation:
      comparator: greaterThan
      value: 0
  setVars:
    READY_REPLICAS:
      value: true    # capture the jsonPath-extracted value
```

### Chaining example: login then access protected endpoint
```yaml
- name: login
  http:
    url: "http://localhost:3000"
    method: POST
    path: /login
    body: '{"user":"admin","pass":"secret"}'
  source:
    type: local
  expect:
    statusCode: 200
  setVars:
    AUTH_TOKEN:
      jsonPath: "$.token"

- name: access protected endpoint
  http:
    url: "http://localhost:3000"
    method: GET
    path: /protected
    headers:
      Authorization: "Bearer ${AUTH_TOKEN}"
  source:
    type: local
  expect:
    statusCode: 200
```

**Shell heredoc gotcha:** if you feed YAML via a heredoc (`<<EOF`), the shell expands `$AUTH_TOKEN` in the heredoc body to its value in the parent shell (usually empty) before YAMLTest ever sees the text. Use one of these patterns to prevent that:

```bash
# 1. Escape the dollar sign
YAMLTest -f - <<EOF
command: echo \$AUTH_TOKEN
EOF

# 2. Quote the heredoc delimiter (disables all expansion)
YAMLTest -f - <<'EOF'
command: echo $AUTH_TOKEN
EOF

# 3. Use a file — no shell expansion at all (recommended)
YAMLTest -f tests.yaml
```
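The expansion difference is easy to see with `cat` standing in for YAMLTest:

```bash
AUTH_TOKEN="parent-value"

# Unquoted delimiter: the parent shell substitutes $AUTH_TOKEN first
cat <<EOF
token: $AUTH_TOKEN
EOF
# prints: token: parent-value

# Quoted delimiter: the body reaches the consumer verbatim
cat <<'EOF'
token: $AUTH_TOKEN
EOF
# prints: token: $AUTH_TOKEN
```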
## Debug logging
```bash
DEBUG_MODE=true YAMLTest -f tests.yaml
```

Prints full request/response details, comparison results, and kubectl commands.
## Project structure
```
src/
  core.js      # Test execution engine (HTTP, command, wait, comparison)
  runner.js    # Multi-test orchestration (YAML parsing, validation, fail-fast, retry)
  validate.js  # JSON Schema validation (Ajv)
  index.js     # Public API
  cli.js       # YAMLTest binary entry point
test/
  unit/        # Pure function tests (compareValue, filterJson, parseCurl, ...)
  integration/ # Real HTTP server + real shell command tests
  e2e/         # CLI binary spawned end-to-end
```

## Running the test suite
```bash
npm test                  # all tests
npm run test:unit         # unit tests only
npm run test:integration  # integration tests only
npm run test:e2e          # end-to-end CLI tests only
npm run test:coverage     # with coverage report
```

## CI/CD
```yaml
# .github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install
      - run: npm test
```

## License
MIT
