anton-bakker-deploy-engine
A self-validating AWS deployment verification library with a 4-layer verification model and a config-driven pipeline engine.
Zero runtime dependencies. TypeScript. ESM. Node.js ≥ 20.
Table of Contents
- Overview
- Installation
- Architecture
- Verification Functions
- Pipeline Engine
- Promotion Gate
- Slack Notifier
- Types Reference
- Publishing
- Development
- Licence
Overview
Every AWS deployment can fail silently — tables missing, ECS tasks crashing, environment variables leaking into production builds. This library provides composable verification functions and a pipeline engine that catches these problems automatically.
The 4-layer verification model:
| Layer | Question | Examples |
|-------|----------|----------|
| Exists | Is the resource present? | amplify_outputs.json readable, index JS bundle exists |
| Configured | Is it set up correctly? | Version is 1.4, data.url is HTTPS, no localhost leaks |
| Functional | Is it working at runtime? | ECS service healthy, API responds, DynamoDB tables present |
| Project-specific | Does it match our source of truth? | Table count matches schema, Cognito groups match auth stack |
You can use the verification functions standalone (for scripts, CI checks, post-deploy validation) or wire them into the pipeline engine for a full gated deployment workflow.
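As a quick orientation before the detailed reference below, here is a minimal standalone sketch that combines a few of the verification functions; the file paths are placeholders for your own project:
import { verifyAmplifyOutputs, verifyBuildArtifact, summarize } from 'anton-bakker-deploy-engine';
// Layer 1–2 checks against local deployment artifacts (placeholder paths)
const checks = [
  ...verifyAmplifyOutputs('./amplify_outputs.json'),
  ...verifyBuildArtifact('./dist'),
];
const result = summarize(checks);
if (!result.passed) {
  result.errors.forEach(e => console.error(` - ${e}`));
  process.exit(1);
}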
Installation
npm install anton-bakker-deploy-engine
Requires Node.js ≥ 20. Zero runtime dependencies — only uses Node.js built-ins (fs, path).
Architecture
src/
├── types.ts # All type definitions (CheckResult, GateDefinition, etc.)
├── verify.ts # Standalone verification functions (Layer 1–4 checks)
├── pipeline.ts # Gate builder + default environment configs
├── engine.ts # PipelineEngine class (gate executor, state, resume)
├── slack.ts # Optional Slack notifier (threading, severity, configurable)
└── index.ts # Public API — all exports
Everything is exported from the package root:
import {
// Verification functions
verifyAmplifyOutputs, verifyBuildArtifact, parseTableCount,
assertTableCount, parseEcsStatus, assertCognitoGroups,
assertApiHealth, deriveExpectations, summarize, promotionCheck,
// Pipeline engine
PipelineEngine, buildGates, DEFAULT_ENVIRONMENTS,
// Slack notifier (optional)
SlackNotifier,
// Types
type CheckResult, type GateDefinition, type GateResult,
type EnvironmentConfig, type PipelineState, type PipelineReport,
type CheckExecutor, type ExecutionContext,
type SlackNotifierOptions, type SlackMessageConfig,
type SlackSeverity, type SlackEventConfig,
} from 'anton-bakker-deploy-engine';
Verification Functions
All verification functions return CheckResult or CheckResult[]. Every result has:
interface CheckResult {
name: string; // Human-readable check description
passed: boolean; // Did it pass?
expected?: string | number; // What was expected
actual?: string | number; // What was found
duration?: number; // Milliseconds (set by pipeline engine)
error?: string; // Error message when passed === false
}
verifyAmplifyOutputs
Validates that amplify_outputs.json has all required fields for a working Amplify deployment.
function verifyAmplifyOutputs(outputsPath: string): CheckResult[]
Checks performed:
- version field is present and equals "1.4"
- data.url is present and starts with https://
- auth.user_pool_id is present
- auth.user_pool_client_id is present
Example:
const checks = verifyAmplifyOutputs('./amplify_outputs.json');
// Returns 6 CheckResults
// All passed?
if (checks.every(c => c.passed)) {
console.log('amplify_outputs.json is valid');
}
// Find failures
const failures = checks.filter(c => !c.passed);
failures.forEach(f => console.error(`${f.name}: ${f.error}`));
Error handling: If the file doesn't exist or contains invalid JSON, returns a single failed CheckResult with the error message.
verifyBuildArtifact
Scans the built JavaScript bundle for environment leaks that should never reach production.
function verifyBuildArtifact(distDir: string): CheckResult[]
Parameters:
- distDir — path to the build output directory (expects dist/assets/index-*.js)
Checks performed:
- No localhost:4000 in the bundle (GraphQL sandbox leak)
- No localhost:5173 in the bundle (Vite dev server leak)
- No http://...elb.amazonaws.com URLs (insecure ALB URLs)
Example:
const checks = verifyBuildArtifact('./dist');
const leaks = checks.filter(c => !c.passed);
if (leaks.length > 0) {
console.error('Environment leaks detected in build:');
leaks.forEach(l => console.error(` ${l.name}: ${l.actual}`));
process.exit(1);
}
Error handling: Returns a failed result if dist/assets doesn't exist or no index-*.js file is found.
parseTableCount
Parses DynamoDB table count from AWS CLI output, filtering by a table name prefix.
function parseTableCount(
awsOutput: string | null,
prefix: string
): CheckResult & { count: number }
Parameters:
- awsOutput — raw output from aws dynamodb list-tables (JSON array of table names) or a plain number string
- prefix — table name prefix to filter by (e.g. "s20-dev")
Example:
import { execSync } from 'child_process';
const output = execSync('aws dynamodb list-tables --output json --query TableNames').toString();
const result = parseTableCount(output, 's20-dev');
console.log(`Found ${result.count} tables with prefix s20-dev`);
Accepts two input formats:
- JSON array: ["s20-dev-Employee", "s20-dev-Division", "other-table"] → filters by prefix, returns count
- Plain number: "124\n" → parses as integer
assertTableCount
Verifies the actual table count meets an expected minimum.
function assertTableCount(actual: number, expected: number): CheckResult
Example:
const { count } = parseTableCount(awsOutput, 's20-dev');
const check = assertTableCount(count, 124);
if (!check.passed) {
console.error(check.error); // "4 tables missing"
}
parseEcsStatus
Parses ECS service health from AWS CLI JSON output.
function parseEcsStatus(awsOutput: string | null): CheckResult
Parameters:
- awsOutput — JSON string with status, running/runningCount, and desired/desiredCount fields
Passes when: status === 'ACTIVE' AND running >= desired AND desired > 0
Example:
const output = execSync(`aws ecs describe-services \
--cluster my-cluster --services my-service \
--query 'services[0].{status:status,running:runningCount,desired:desiredCount}'`
).toString();
const check = parseEcsStatus(output);
if (!check.passed) {
console.error(check.error); // "2 tasks not running"
}
Accepts both field name styles: running/desired and runningCount/desiredCount.
assertCognitoGroups
Verifies that all required Cognito user pool groups exist.
function assertCognitoGroups(
actualGroups: string[],
requiredGroups: string[]
): CheckResult
Example:
const output = execSync(`aws cognito-idp list-groups \
--user-pool-id eu-central-1_ABC \
--query 'Groups[].GroupName' --output json`
).toString();
const actual = JSON.parse(output);
const check = assertCognitoGroups(actual, ['administrators', 'managers', 'root']);
if (!check.passed) {
console.error(check.error); // "Missing: managers, root"
}
assertApiHealth
Verifies an API health check response contains __typename (GraphQL introspection indicator).
function assertApiHealth(response: string | null): CheckResult
Example:
const response = execSync('curl -s https://api.example.com/graphql -d \'{"query":"{__typename}"}\'').toString();
const check = assertApiHealth(response);
deriveExpectations
Derives expected deployment values from project source-of-truth files. Reads a GraphQL schema to count model types (which become DynamoDB tables), parses an auth stack for Cognito group definitions, and reads infrastructure config for the API domain.
function deriveExpectations(opts: {
schemaPath: string; // Path to GraphQL schema file
authStackPath?: string; // Path to CDK auth stack (optional)
configPath: string; // Path to infrastructure config JSON
}): DeployExpectations
Returns:
interface DeployExpectations {
tableCount: number; // Number of model types (= expected DynamoDB tables)
tableNames: string[]; // Model type names
cognitoGroups: string[]; // Group names found in auth stack
apiDomain: string; // Constructed from config dns.apiSubdomain + dns.zoneName
}
Schema parsing rules:
- Counts all type X { definitions in the GraphQL schema
- Excludes: Query, Mutation, Subscription, types ending in Input, types ending in Connection
Auth stack parsing: Finds Cognito group names via two patterns (see the sketch below):
- Direct assignment: groupName: 'administrators'
- Loop pattern: for (... of ['administrators', 'managers', 'root'])
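For illustration, here is a hypothetical CDK auth stack that both patterns would match; this library does not depend on CDK, and the class and construct names are just an example:
import { Stack } from 'aws-cdk-lib';
import * as cognito from 'aws-cdk-lib/aws-cognito';
import type { Construct } from 'constructs';

export class AuthStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const userPool = new cognito.UserPool(this, 'UserPool');
    // Direct assignment pattern: the parser finds the literal groupName
    new cognito.CfnUserPoolGroup(this, 'AdministratorsGroup', {
      userPoolId: userPool.userPoolId,
      groupName: 'administrators',
    });
    // Loop pattern: the parser reads the array literal in the for...of head
    for (const name of ['managers', 'root']) {
      new cognito.CfnUserPoolGroup(this, `${name}Group`, {
        userPoolId: userPool.userPoolId,
        groupName: name,
      });
    }
  }
}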
Example:
const expectations = deriveExpectations({
schemaPath: 'server/src/schema/schema.graphql',
authStackPath: 'cdk/lib/auth-stack.ts',
configPath: 'cdk/config/infrastructure.json',
});
console.log(`Expecting ${expectations.tableCount} DynamoDB tables`);
console.log(`Required Cognito groups: ${expectations.cognitoGroups.join(', ')}`);
console.log(`API domain: ${expectations.apiDomain}`);
// Use with assertTableCount
const tableCheck = assertTableCount(actualCount, expectations.tableCount);
Config file format expected:
{
"dns": {
"apiSubdomain": "api",
"zoneName": "example.com"
}
If dns.apiSubdomain or dns.zoneName is missing, the API domain defaults to api.example.com.
summarize
Aggregates an array of check results into a single pass/fail summary.
function summarize(checks: CheckResult[]): {
passed: boolean; // true if all checks passed
total: number; // total number of checks
failed: number; // number of failed checks
errors: string[]; // error messages from failed checks
}
Example:
const allChecks = [
...verifyAmplifyOutputs('./amplify_outputs.json'),
...verifyBuildArtifact('./dist'),
assertTableCount(count, expected),
parseEcsStatus(ecsOutput),
];
const result = summarize(allChecks);
if (result.passed) {
console.log(`✅ All ${result.total} checks passed`);
} else {
console.error(`❌ ${result.failed}/${result.total} checks failed:`);
result.errors.forEach(e => console.error(` - ${e}`));
process.exit(1);
}
Pipeline Engine
The pipeline engine runs a sequence of gates, where each gate contains one or more checks. Gates execute in order. When a gate fails, the engine takes the configured action (abort, rollback, or alert) and stops.
Concepts
Gate — a named group of checks with a failure action. Example: "Post-Deploy Inventory" gate runs table-count, ecs-status, and cognito-groups checks. If any fail, the action is rollback.
Check executor — an async function that runs a single named check and returns a CheckResult. You register these with the engine.
Execution context — environment information passed to every check executor (environment name, region, project root, a shell exec helper, etc.).
State persistence — the engine saves its state to disk after each gate. If a run is interrupted, it resumes from the last passed gate.
Building Gates
Use buildGates() to generate a gate sequence from an EnvironmentConfig:
import { buildGates, DEFAULT_ENVIRONMENTS } from 'anton-bakker-deploy-engine';
const gates = buildGates(DEFAULT_ENVIRONMENTS.staging);
This produces 10 gates in order:
| # | Gate | Checks | On Fail |
|---|------|--------|---------|
| 1 | Preflight | aws-creds, deploy-lock | abort |
| 2 | Code Quality | From preDeployChecks config | abort |
| 3 | Infrastructure Validation | cdk-synth, derive-expectations | abort |
| 4 | Pre-Deploy Safety | backup-tables (if enabled), prepare-migrations (if enabled) | abort |
| 5 | Build | docker-build, frontend-build, post-build-verify | abort |
| 6 | Deploy | cdk-deploy, ecs-update, frontend-deploy, seed-data (if enabled) | rollback |
| 7 | Post-Deploy Inventory | table-count, ecs-status, cognito-groups | rollback |
| 8 | Post-Deploy Configuration | pitr-enabled, billing-mode, https-enforced, s3-not-public | alert |
| 9 | Post-Deploy Functional | From postDeployProbes config | rollback or alert |
| 10 | Project Assertions | manifest-assertions, e2e-tests (if enabled) | alert |
Gates 4 and 10 are conditional — they're skipped if the relevant config flags are off.
You can also build gates manually:
import type { GateDefinition } from 'anton-bakker-deploy-engine';
const gates: GateDefinition[] = [
{
id: 'preflight',
name: 'Preflight',
checks: ['aws-creds'],
onFail: 'abort',
},
{
id: 'deploy',
name: 'Deploy',
checks: ['cdk-deploy'],
onFail: 'rollback',
},
{
id: 'verify',
name: 'Post-Deploy Verification',
checks: ['api-health', 'table-count'],
onFail: 'rollback',
},
];
Failure actions:
- abort — stop the pipeline, do nothing else
- rollback — stop the pipeline, signal that a rollback is needed (the engine sets result: 'rolled-back')
- alert — stop the pipeline, signal that an alert should be sent
Conditional gates: Add a condition function that receives the EnvironmentConfig. The gate is skipped if it returns false:
{
id: 'backup',
name: 'Backup',
checks: ['backup-tables'],
onFail: 'abort',
condition: (env) => env.backupBefore, // Only run if backups are enabled
}
Writing Check Executors
A check executor is an async function that receives the ExecutionContext and returns a CheckResult:
import type { CheckExecutor } from 'anton-bakker-deploy-engine';
const typecheckExecutor: CheckExecutor = async (ctx) => {
const result = ctx.exec('npx tsc --noEmit');
return {
name: 'TypeScript type check',
passed: result !== null,
error: result === null ? 'tsc --noEmit failed' : undefined,
};
};
const tableCountExecutor: CheckExecutor = async (ctx) => {
const output = ctx.exec(`aws dynamodb list-tables --query TableNames --output json`);
const { count } = parseTableCount(output, ctx.prefix);
const expected = ctx.expectations?.tableCount as number ?? 0;
return assertTableCount(count, expected);
};
const apiHealthExecutor: CheckExecutor = async (ctx) => {
const response = ctx.exec(`curl -sf https://${ctx.prefix}.example.com/health`);
return assertApiHealth(response);
};
Register executors in a Map<string, CheckExecutor>:
const checks = new Map<string, CheckExecutor>();
checks.set('typecheck', typecheckExecutor);
checks.set('table-count', tableCountExecutor);
checks.set('api-health', apiHealthExecutor);
The check name in the map must match the check name referenced in gate definitions.
If a check executor throws an exception, the engine catches it and records a failed CheckResult with the error message.
If a gate references a check name that has no registered executor, the engine records a failed result with "No executor registered for 'name'".
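As a sketch, an executor can simply throw on an unexpected condition and rely on the engine's catch; the command below is a placeholder, not a check shipped with this library:
import type { CheckExecutor } from 'anton-bakker-deploy-engine';
// Throws on failure; the engine converts the exception into a failed CheckResult
const migrationStatusExecutor: CheckExecutor = async (ctx) => {
  const output = ctx.exec('npm run migrate:status'); // placeholder command
  if (output === null) {
    throw new Error('migrate:status command failed');
  }
  return { name: 'Migration status', passed: true };
};
A gate that lists a check name such as 'migration-status' then needs checks.set('migration-status', migrationStatusExecutor); otherwise the engine records the "No executor registered" failure described above.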
Creating the Execution Context
The ExecutionContext provides environment information to all check executors:
import type { ExecutionContext } from 'anton-bakker-deploy-engine';
import { execSync } from 'child_process';
const ctx: ExecutionContext = {
environment: 'staging', // Full environment name
envShort: 'stg', // Short code for prefixes
prefix: 's20-stg', // Resource name prefix
region: 'eu-central-1', // AWS region
account: '123456789012', // AWS account ID
projectRoot: process.cwd(), // Project root directory
dryRun: false, // If true, no state is persisted
envConfig: DEFAULT_ENVIRONMENTS.staging,
expectations: { // Optional — from deriveExpectations()
tableCount: 124,
cognitoGroups: ['administrators', 'managers'],
},
exec: (cmd) => { // Shell command executor
try {
return execSync(cmd, { encoding: 'utf-8', timeout: 60_000 });
} catch {
return null;
}
},
log: (msg) => process.stdout.write(msg + '\n'),
};
Fields:
| Field | Type | Description |
|-------|------|-------------|
| environment | string | Full environment name (development, staging, production) |
| envShort | string | Short code used in resource prefixes (dev, stg, prod) |
| prefix | string | Resource name prefix (e.g. s20-dev) |
| region | string | AWS region |
| account | string | AWS account ID |
| projectRoot | string | Absolute path to project root |
| dryRun | boolean | When true, no state files are written to disk |
| envConfig | EnvironmentConfig | The environment configuration driving this run |
| expectations | Record<string, unknown> | Optional key-value store for derived expectations |
| exec | (cmd: string) => string \| null | Runs a shell command, returns stdout or null on failure |
| log | (msg: string) => void | Logging function |
Running the Pipeline
import { PipelineEngine, buildGates, DEFAULT_ENVIRONMENTS } from 'anton-bakker-deploy-engine';
const envConfig = DEFAULT_ENVIRONMENTS.staging;
const gates = buildGates(envConfig);
const engine = new PipelineEngine({
gates,
checks, // Map<string, CheckExecutor>
ctx, // ExecutionContext
stateDir: './orchestrator', // Optional — defaults to <projectRoot>/orchestrator
});
const report = await engine.run();
Constructor options:
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| gates | GateDefinition[] | Yes | — | Ordered list of gates to execute |
| checks | Map<string, CheckExecutor> | Yes | — | Registered check executors |
| ctx | ExecutionContext | Yes | — | Execution context |
| stateDir | string | No | <projectRoot>/orchestrator | Directory for state and report files |
The run() method returns a PipelineReport:
interface PipelineReport {
state: PipelineState;
summary: {
totalGates: number;
passed: number;
failed: number;
skipped: number;
duration: number; // Total milliseconds
failedGate?: string; // ID of the gate that failed
action?: 'abort' | 'rollback' | 'alert'; // Failure action
};
}
Handling the result:
const report = await engine.run();
switch (report.state.result) {
case 'passed':
console.log(`✅ All ${report.summary.totalGates} gates passed in ${report.summary.duration}ms`);
break;
case 'failed':
console.error(`❌ Failed at gate: ${report.summary.failedGate} → ${report.summary.action}`);
break;
case 'rolled-back':
console.error(`🔄 Rolled back at gate: ${report.summary.failedGate}`);
// Trigger actual rollback logic here
break;
}
State Persistence and Resume
The engine saves its state to <stateDir>/deploy-state-<environment>.json after each gate completes. If a run is interrupted (process crash, network failure, timeout), the next run automatically resumes from the last passed gate.
State file location: orchestrator/deploy-state-staging.json
Resume behaviour:
- Gates that already passed are skipped with a log message
- The failed gate is re-executed
- New gates run normally
Reports: On successful completion, the engine also saves a report to <stateDir>/deploy-reports/<timestamp>-<envShort>.json.
Dry run mode: When ctx.dryRun is true, no state or report files are written. Useful for testing.
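If you want to inspect the persisted state yourself (for a dashboard or ad-hoc debugging), a minimal sketch like the following should work; only the PipelineState fields listed in the Types Reference are assumed here:
import { readFileSync } from 'fs';
import { join } from 'path';
import type { PipelineState } from 'anton-bakker-deploy-engine';

// './orchestrator' matches the stateDir passed to the engine; 'staging' is the environment name
const statePath = join('./orchestrator', 'deploy-state-staging.json');
const state = JSON.parse(readFileSync(statePath, 'utf-8')) as PipelineState;
console.log(`Run ${state.runId} (${state.environment} @ ${state.commit}): ${state.result}`);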
Clearing State
To force a fresh run (discard previous state), archive the state file:
PipelineEngine.clearState('./orchestrator', 'staging');
This moves deploy-state-staging.json to orchestrator/archive/state-<timestamp>.json. The next engine.run() starts from gate 1.
Default Environments
DEFAULT_ENVIRONMENTS provides pre-built configurations for three environments:
development:
{
approval: 'never',
preDeployChecks: ['typecheck', 'lint', 'server-tests', 'schema-parse', 'merge-markers'],
migrations: true,
seed: { enabled: true, skipCalendar: true },
e2eTests: false,
backupBefore: false,
cdkNagLevel: 'warning',
postDeployProbes: ['api-health', 'table-count', 'ecs-status'],
rollbackOnProbeFailure: false,
}
staging:
{
approval: 'never',
preDeployChecks: ['typecheck', 'lint', 'server-tests', 'web-tests', 'schema-parse', 'merge-markers'],
migrations: true,
seed: { enabled: true, skipCalendar: false },
e2eTests: true,
e2eTestSuite: 'web/e2e/migration/',
backupBefore: true,
cdkNagLevel: 'error',
postDeployProbes: ['api-health', 'table-count', 'ecs-status', 'cognito-groups', 'crud-probe'],
rollbackOnProbeFailure: true,
}
production:
{
approval: 'broadening',
preDeployChecks: ['typecheck', 'lint', 'server-tests', 'web-tests', 'schema-parse',
'merge-markers', 'schema-backward-compat'],
migrations: true,
seed: { enabled: false },
e2eTests: true,
e2eTestSuite: 'web/e2e/smoke/',
backupBefore: true,
cdkNagLevel: 'error',
postDeployProbes: ['api-health', 'table-count', 'ecs-status', 'cognito-groups',
'crud-probe', 'auth-probe', 's3-probe'],
rollbackOnProbeFailure: true,
}
EnvironmentConfig fields:
| Field | Type | Description |
|-------|------|-------------|
| approval | 'never' \| 'broadening' \| 'always' | When to require human approval |
| preDeployChecks | string[] | Check names to run in the Code Quality gate |
| migrations | boolean | Whether to run database migrations |
| seed | { enabled, skipCalendar? } | Whether to seed data after deploy |
| e2eTests | boolean | Whether to run E2E tests |
| e2eTestSuite | string | Path to E2E test suite |
| backupBefore | boolean | Whether to backup DynamoDB tables before deploy |
| cdkNagLevel | 'warning' \| 'error' | CDK Nag severity level |
| postDeployProbes | string[] | Check names to run in the Functional gate |
| rollbackOnProbeFailure | boolean | Whether functional probe failures trigger rollback |
| promotionRequirements | { requiresPassed: string[]; allowAncestor?: boolean } | Optional — environments that must have passed for this commit |
You can extend or override defaults:
const myConfig: EnvironmentConfig = {
...DEFAULT_ENVIRONMENTS.staging,
backupBefore: false, // Skip backups for faster deploys
postDeployProbes: [...DEFAULT_ENVIRONMENTS.staging.postDeployProbes, 'custom-probe'],
};
Promotion Gate
The promotion gate prevents deploying a commit to a higher environment unless it has passed all required lower environments. This is configured per environment via promotionRequirements.
How It Works
- Before deploying to staging, the engine reads deploy-state-development.json
- Verifies the state shows result: 'passed'
- Verifies the commit matches (exact, prefix, or git ancestor)
- If any requirement fails → pipeline aborts before any deployment starts
Configuration
Add promotionRequirements to your EnvironmentConfig:
const environments = {
// Development — entry point, no requirements
development: {
...baseConfig,
// No promotionRequirements — any commit can deploy here
},
// Staging — must pass development first
staging: {
...baseConfig,
promotionRequirements: {
requiresPassed: ['development'],
},
},
// Production — must pass staging first
production: {
...baseConfig,
promotionRequirements: {
requiresPassed: ['staging'],
},
},
};
Strict mode — require exact commit match (no ancestor commits):
production: {
...baseConfig,
promotionRequirements: {
requiresPassed: ['development', 'staging'],
allowAncestor: false, // exact commit only
},
}
Options
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| requiresPassed | string[] | — | Environments that must show result: 'passed' for this commit |
| allowAncestor | boolean | true | Accept ancestor commits via git merge-base --is-ancestor |
Commit Tracking
The engine stores the full git commit hash in PipelineState.commit and a short hash in commitShort. The promotion check compares commits using the following strategies (sketched after this list):
- Exact match — full hash equality
- Prefix match — backward compatible with old state files that stored short hashes
- Ancestor check — git merge-base --is-ancestor (when allowAncestor: true)
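A rough sketch of that comparison order (an illustration, not the engine's actual implementation):
import { execSync } from 'child_process';

function commitMatches(deployed: string, current: string, allowAncestor = true): boolean {
  if (deployed === current) return true;          // exact match
  if (current.startsWith(deployed)) return true;  // prefix match (legacy short hashes)
  if (!allowAncestor) return false;
  try {
    // exits 0 (no throw) when `deployed` is an ancestor of `current`
    execSync(`git merge-base --is-ancestor ${deployed} ${current}`);
    return true;
  } catch {
    return false;
  }
}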
Default Environments
The DEFAULT_ENVIRONMENTS include promotion requirements:
| Environment | Requires |
|-------------|----------|
| development | None (entry point) |
| staging | development passed |
| production | staging passed |
Using promotionCheck Standalone
import { promotionCheck } from 'anton-bakker-deploy-engine';
const result = promotionCheck({
commit: 'a1b2c3d4e5f6...', // full hash
requirements: { requiresPassed: ['staging'], allowAncestor: true },
stateDir: './orchestrator',
exec: (cmd) => execSync(cmd, { encoding: 'utf-8' }),
});
if (!result.passed) {
console.error(result.error);
// "staging: deployed commit f9e8d7c is not an ancestor of current a1b2c3d"
}
Slack Notifier
Optional threaded Slack notifications for pipeline events. Zero dependencies — uses Node.js built-in fetch.
When no SlackNotifier is wired up, the pipeline runs identically — Slack is purely additive.
Setup
import { PipelineEngine, SlackNotifier } from 'anton-bakker-deploy-engine';
const slack = new SlackNotifier({
token: process.env.SLACK_BOT_TOKEN!, // xoxb-...
channel: 'C0123DEPLOY', // Channel ID
});
const engine = new PipelineEngine({ gates, checks, ctx });
engine.on(slack.listener());
await engine.run(); // Threaded Slack messages sent automatically
Threading
All messages for a single pipeline run are grouped in one Slack thread:
- pipeline:start → creates the thread parent message
- Gate failures, bake alarms, rollback triggers → reply in thread
- pipeline:end → final summary in thread
Failure replies are broadcast to the channel (visible outside the thread) so critical issues aren't buried. Success replies stay in the thread to keep the channel clean.
Severity Levels
Each message has a severity that controls its colour and emoji:
| Severity | Colour | Emoji | Used for |
|----------|--------|-------|----------|
| info | 🟢 #28a745 | ℹ️ / 🚀 / ✅ | Pipeline start, pipeline success |
| warning | 🟡 #ffc107 | ⚠️ | Gate failed (abort/alert action), pipeline failed |
| critical | 🔴 #dc3545 | 🚨 | Rollback triggered, bake alarm, pipeline rolled back |
Severity auto-escalates based on the event outcome:
- pipeline:end with result: 'failed' → escalates to warning
- pipeline:end with result: 'rolled-back' → escalates to critical
Configurable Events
By default, five event types trigger Slack messages. Override per project:
const slack = new SlackNotifier({
token: process.env.SLACK_BOT_TOKEN!,
channel: 'C0123DEPLOY',
config: {
events: {
'pipeline:start': { severity: 'info', enabled: false }, // Silent start
'pipeline:end': { severity: 'info', enabled: true },
'gate:end': { severity: 'warning', enabled: true },
'bake:alarm': { severity: 'critical', enabled: true },
'rollback:trigger': { severity: 'critical', enabled: true },
},
},
});
Default configuration:
| Event | Default severity | Enabled | Sends on |
|-------|-----------------|---------|----------|
| pipeline:start | info | ✅ | Every run |
| pipeline:end | info (auto-escalates) | ✅ | Every run |
| gate:end | warning | ✅ | Failure only |
| bake:alarm | critical | ✅ | Alarm fires during bake |
| rollback:trigger | critical | ✅ | Auto-rollback initiated |
Constructor Options
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| token | string | Yes | — | Slack Bot token (xoxb-...) |
| channel | string | Yes | — | Slack channel ID |
| config | SlackMessageConfig | No | All 5 events enabled | Per-event enable/severity |
| broadcastFailures | boolean | No | true | Broadcast warning/critical replies to channel |
| username | string | No | — | Bot username override |
| iconEmoji | string | No | — | Bot icon emoji override |
Error Handling
The notifier is fire-and-forget. If Slack is unreachable, the token is invalid, or the API returns an error, the failure is silently caught. The pipeline is never affected by Slack failures.
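For context, the fire-and-forget pattern described here looks roughly like the following sketch (an illustration, not the notifier's internals); it posts via Slack's chat.postMessage endpoint and swallows any failure:
async function postQuietly(token: string, channel: string, text: string): Promise<void> {
  try {
    await fetch('https://slack.com/api/chat.postMessage', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json; charset=utf-8',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ channel, text }),
    });
    // The response body is intentionally ignored, so an API error never propagates.
  } catch {
    // Network failure: swallow it so the pipeline is never affected.
  }
}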
Types Reference
All types are exported from the package root.
| Type | Description |
|------|-------------|
| CheckResult | Single pass/fail check with name, expected, actual, error, duration |
| GateDefinition | Gate configuration: id, name, check names, failure action, optional condition |
| GateResult | Gate execution result: status, checks, timing |
| EnvironmentConfig | Environment-specific deployment configuration |
| PipelineState | Full pipeline run state: runId, environment, commit, gates, result |
| PipelineReport | Pipeline state + summary statistics |
| CheckExecutor | (ctx: ExecutionContext) => Promise<CheckResult> — async check function |
| ExecutionContext | Runtime context passed to every check executor |
| DeployExpectations | Output of deriveExpectations(): table count/names, Cognito groups, API domain |
| SlackSeverity | 'info' \| 'warning' \| 'critical' — message severity level |
| SlackEventConfig | Per-event config: { severity: SlackSeverity; enabled: boolean } |
| SlackMessageConfig | Map of pipeline event types to SlackEventConfig |
| SlackNotifierOptions | Constructor options for SlackNotifier |
Publishing
The package is published to npm via scripts/publish.sh.
# Publish (auto-detects tag, auto-bumps patch if version exists)
npm run release:publish
# Publish explicitly to latest tag
npm run release:publish:latest
# Publish to beta tag
NPM_TAG=beta npm run release:publish
What the publish script does:
- Commits and pushes any uncommitted changes
- Runs tests (npm test)
- Builds (npm run build)
- Fetches the npm automation token from AWS Secrets Manager (npm/automation-token/anton-bakker-deploy-engine)
- If the current version is already published, auto-bumps patch, commits, and pushes
- Publishes to npm with a temporary .npmrc (token never stored on disk)
Environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| AWS_PROFILE | BeyondAmbition | AWS profile for Secrets Manager access |
| AWS_REGION | eu-west-1 | AWS region for Secrets Manager |
| NPM_SECRET_NAME | npm/automation-token/anton-bakker-deploy-engine | Secrets Manager secret ID |
| NPM_TAG | latest | npm publish tag |
| REGISTRY | https://registry.npmjs.org/ | npm registry URL |
Development
# Install dependencies
npm install
# Run tests
npm test
# Type check (no output)
npx tsc --noEmit
# Build (compiles to dist/)
npm run build
# Lint (zero warnings tolerance)
npm run lint
Project structure:
├── src/
│ ├── types.ts # Type definitions
│ ├── verify.ts # Verification functions
│ ├── engine.ts # PipelineEngine class
│ ├── pipeline.ts # Gate builder + default configs
│ └── index.ts # Public exports
├── __tests__/
│ ├── verify.test.ts # 49 tests for verification functions
│ ├── engine.test.ts # 23 tests for engine + pipeline
│ ├── promotion.test.ts # 13 tests for promotion gate
│ ├── slack.test.ts # 17 tests for Slack notifier
│ └── fixtures/ # Test fixtures (schema, config, auth stack)
├── scripts/
│ └── publish.sh # npm publish script
├── eslint.config.js # ESLint flat config
├── tsconfig.json # TypeScript config (strict, ESM, NodeNext)
└── package.json
Test runner: Vitest. All 103 tests run in ~400ms.
TypeScript config: Strict mode, ESM ("type": "module"), NodeNext module resolution, source maps and declaration maps enabled.
ESLint rules: no-console: error, @typescript-eslint/no-unused-vars: error.
Licence
MIT
