# envctl

node.js CLI client for managing environments

- npm: @agilecustoms/envctl
- npm: agilecustoms/packages (admin view)
## usage

```shell
envctl init -backend-config=key=laxa1986
envctl apply -var-file=versions.tfvars -var="log_level=debug"
envctl delete
```

## setup/update

```shell
npm install -g @agilecustoms/envctl  # same command for update
envctl --version
npm outdated -g
npm view @agilecustoms/envctl version  # show latest version available (without installing)
```

## npmjs setup
- Log in to npmjs.com
- Create organization "agilecustoms"; this creates the scope `@agilecustoms` (one org => exactly one scope; a scope can also be created w/o an org)
- How to add a package the first time?
- Configure Trusted publishing for npm packages:
  - Navigate to package settings
  - Pick GitHub Actions
  - Organization or user: `agilecustoms`
  - Repository: `envctl`
  - Workflow filename: `build.yml`
  - Environment name: `release`
  - "Set up connection"
- In the GH workflow job use `permissions: id-token: write` and the release action with input `npm-publish: true`
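The workflow side of this setup can be sketched as below. Only the `permissions` block, the `release` environment name, and the `npm-publish: true` input come from the notes above; the job name and the release action reference are placeholders:

```yaml
# Sketch of a publish job using npm trusted publishing (OIDC)
jobs:
  release:
    runs-on: ubuntu-latest
    environment: release        # must match the environment name configured on npmjs
    permissions:
      id-token: write           # required for OIDC-based trusted publishing
    steps:
      - uses: actions/checkout@v4
      - uses: agilecustoms/release@v1   # placeholder reference for the release action
        with:
          npm-publish: true
```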
## Authorization

Main use cases:
- create an env from a dev machine
- env created automatically per feature branch
- create an ephemeral environment on CI

Originally I (Alex C) chose IAM authorization (`/ci/deployer` on the pipeline via OIDC, `/developer` on a dev machine via SSO).
Then (Feb 2026) I reworked it to use API keys.
## Distribution

Originally I planned to use bash scripts, but they quickly became bulky and hard to maintain.
Then I thought about Node.js: it is available on dev machines and in GitHub Actions (namely on Ubuntu runners).
How to distribute it? First I thought about using ncc to bundle everything into one big .js file
(as I do for publish-s3 and gha-healthcheck), but that would be hard to use on a dev machine...
So I ended up publishing this client as an npm package on npmjs:
- CI environments can install it via the GH action `agilecustoms/envctl`
- developers install it globally via `npm install -g @agilecustoms/envctl`
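The CI install path might look like the sketch below; the step layout and version tag are assumptions, only the action name `agilecustoms/envctl` comes from the notes above:

```yaml
# Sketch: installing and using envctl in a CI job
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: agilecustoms/envctl@v1   # version tag is a placeholder
      - run: envctl apply -var-file=versions.tfvars
```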
## terraform init

Terraform init, first time (given state is configured to be in S3):
- backend state metadata
  - terraform checks access to the remote state (double-checked: I tried to specify a wrong bucket, and it fails)
    - basically it tries `s3:ListObjectsV2`, but it looks like it does not check access to the state file itself
  - creates the file `.terraform/terraform.tfstate` with the backend config (S3 bucket and key); it does NOT download the state file itself!
    - so if you call `terraform init` two times with different keys, the second time you get the error "Backend configuration changed" and are asked to migrate state or reconfigure
- modules (can be refreshed individually via `terraform get -update`)
  - download modules, put them in `.terraform/modules`
  - modules are loaded before providers bcz modules can declare more providers!
- providers
  - search for the latest provider version satisfying the constraints in .tf files
  - download providers, put them in `.terraform/providers/*`
  - create the file `.terraform.lock.hcl` with provider versions and hashes
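For reference, a minimal S3 backend block that the steps above operate on might look like this; the bucket, key, and region values are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "my-state-bucket"             # placeholder bucket name
    key    = "envs/dev/terraform.tfstate"  # placeholder state key
    region = "us-east-1"                   # placeholder region
  }
}
```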
