fsai-atlas
v0.3.9
Simplified SDK for Temporal.io with automatic workflow deployment
Atlas Node.js Platform (fsai-atlas)
This package provides the Node.js runtime platform for Atlas, composed of:
- SDK (`fsai-atlas`) – workflow and activity APIs used by workflow projects.
- Server – HTTP API + Temporal worker orchestration.
- CLI (`atlas`) – packaging, deployment, execution and management of workflows.
- Infrastructure adapters – storage (filesystem/MinIO) and triggers (webhook/schedule).
It is designed to:
- Isolate workflows per deployment, environment and team (namespace per `useCaseTeamId`).
- Keep SDK versioning decoupled per deployment (each package ships its own SDK).
- Provide a simple & opinionated deployment model on top of Temporal.
High-Level Architecture
At a high level, the `nodejs` package is responsible for:
- SDK layer (`dist/sdk/*`)
  - Workflow-side utilities (`workflow`, `workflow-only`, `workflow-primitives`).
  - Activity-side utilities (`activity`, `activities`, `activities/ai/*`, `activities/db/*`).
  - Types and client helpers.
- Runtime server (`dist/server.js`)
  - Starts:
    - Express HTTP API (`WorkflowAPI`).
    - Dynamic Temporal workers (`DynamicWorker`).
    - Webhook and schedule trigger managers.
- Storage layer (`dist/storage/*`)
  - Metadata + package storage for workflow deployments.
  - Filesystem / MinIO backends.
- Triggers layer (`dist/triggers/*`)
  - Webhook trigger manager.
  - Schedule trigger manager.
- Notifications (`dist/notifications/*`)
  - Redis-based pub/sub for hot-reloading workers when new deployments are created.
- CLI (`dist/cli/*`)
  - `atlas package`, `atlas deploy`, `atlas execute`, `atlas list`, `atlas rollback`, `atlas clean`, etc.
SDK: fsai-atlas
Entry points
- Runtime root: `dist/sdk/index.js`
- Workflow-only bundle: `dist/sdk/workflow-only.js`
- Workflow helpers: `dist/sdk/workflow.js`, `dist/sdk/workflow-base.js`, `dist/sdk/workflow-primitives.js`
- Activity helpers: `dist/sdk/activity.js`, `dist/sdk/activities/index.js`
Typical usage in a workflow project (TypeScript/JavaScript):
```typescript
// Workflow side
import { ai } from 'fsai-atlas/workflow';

// Activity side
import { createOpenAIChatActivity } from 'fsai-atlas';
```

Internally, the SDK exports:
Workflow primitives
- Cloud-friendly wrapper around Temporal workflow APIs.
- Helpers for sessions, retries and structured logging inside workflows.
AI Activities (`dist/sdk/activities/ai/*`)

- `openai` activities (chat, completion, STT, TTS):
  - Handle prompt assembly, streaming, retries and error normalization.
  - Can be called directly from workflows through the SDK.
DB Activities (`dist/sdk/activities/db/*`)

- `mongodb`, `mysql`, `postgres` connectors.
- Centralized configuration and connection pooling.
Secrets utilities (`dist/utils/secrets.js`)

- Base64 encoding/decoding of `.env`-style secrets.
- Loading encoded secrets into `process.env`.
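As an illustration, the encode/load round-trip could look like the following sketch. The helper names `encodeSecrets` and `loadSecrets` are hypothetical, not the actual exports of `dist/utils/secrets.js`:

```typescript
// Hypothetical sketch of base64 .env-style secret handling; the real
// dist/utils/secrets.js API may differ.
function encodeSecrets(env: Record<string, string>): string {
  // Serialize KEY=VALUE lines, then base64-encode the whole blob.
  const text = Object.entries(env)
    .map(([key, value]) => `${key}=${value}`)
    .join('\n');
  return Buffer.from(text, 'utf8').toString('base64');
}

function loadSecrets(encoded: string): Record<string, string> {
  const text = Buffer.from(encoded, 'base64').toString('utf8');
  const parsed: Record<string, string> = {};
  for (const line of text.split('\n')) {
    const idx = line.indexOf('=');
    if (idx <= 0) continue; // skip blank or malformed lines
    const key = line.slice(0, idx).trim();
    const value = line.slice(idx + 1).trim();
    parsed[key] = value;
    process.env[key] = value; // make the secret visible to activities
  }
  return parsed;
}
```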
Critical design: workflows never import the full SDK directly. They use the `workflow-only` entrypoint (`fsai-atlas/workflow`), which is safe to bundle into the Temporal workflow isolate and avoids accidental inclusion of Node-only code.
CLI: atlas
The CLI is built on top of this package and provides commands to:
- Package workflows from a project directory.
- Deploy packages to the Atlas server.
- Execute workflows remotely or locally.
- List / rollback / clean / remove deployments.
- Authenticate against the Atlas control plane.
Authentication
CLI commands that talk to the server are wrapped with `protectedCommand` and use `AuthGuard`:

- Credentials are stored in `~/.atlas/credentials.json`.
- `atlas login` uses GitHub OAuth (`GitHubAuthManager`) to obtain a JWT.
- `atlas logout` removes local credentials.
- `atlas whoami` shows the current user and token expiry.
Core commands
Below is a summary of the main commands and what they do.
atlas package
- Location: `src/cli/commands/package.ts`.
- Purpose: Build and bundle a workflow project into a `.tgz` package + metadata.
Input:
- Runs in a workflow project directory (containing `atlas.config.json`).
- Uses `npm run build` or `tsc` (configured in the project) to compile to `dist/`.
Behavior:
- Reads `atlas.config.json`: `name`, `version`, `description`, `workflowName`, `entrypoint`, `triggers`, and `useCaseTeamId` (required, identifies the team/tenant).
- Fails fast if `useCaseTeamId` is missing or empty, guiding you to set a UUID per team before packaging.
- Produces in `.atlas/`:
  - `name-version.tgz`: tarball with `dist/` + `node_modules` (symlink-aware).
  - `name-version.json`: JSON metadata mirroring `atlas.config.json` plus `packagedAt`, `packageFile`, optional `secrets`, and `useCaseTeamId`.
- Prints Package Details (local manifest), explicitly stating that `Manifest Version` comes from `atlas.config.json`.
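A minimal `atlas.config.json` might look like the following. The field values are illustrative (in particular, `workflowName` and the exact trigger schema are assumptions, not taken from the real example project):

```json
{
  "name": "test-atlas-sdk",
  "version": "1.0.0",
  "description": "Example workflow project",
  "workflowName": "interviewWorkflow",
  "entrypoint": "dist/workflows/index.js",
  "useCaseTeamId": "00000000-0000-0000-0000-000000000001",
  "triggers": {
    "webhook": { "enabled": true, "path": "/test-atlas-sdk" },
    "schedule": { "enabled": false }
  }
}
```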
atlas deploy
- Location: `src/cli/commands/deploy.ts`.
- Purpose: Send a packaged workflow to the Atlas server and activate it for an environment.
Behavior:
- Resolves the API URL from environment or flag (`--api`).
- Loads `.atlas/<name>.tgz` and its metadata JSON (unless `--package` is passed).
- Reads the package file and encodes it to base64.
- Calls `POST /api/workflows/deploy` with body:

```jsonc
{
  "name": "...",
  "version": "...", // from metadata
  "description": "...",
  "code": "<base64-tgz>",
  "entrypoint": "dist/workflows/index.js",
  "secrets": "<base64-env>",
  "environment": "dev|sit|uat|prd",
  "useCaseTeamId": "00000000-0000-0000-0000-000000000001"
}
```

- The server uses `environment` + `useCaseTeamId` to derive a dedicated Temporal namespace for the deployment: `atlas-<environment>-<useCaseTeamId>`.
- On success, it:
  - Synchronizes versions:
    - Reads `atlas.config.json` in `cwd` and updates `version` to `result.deployment.version` if different.
    - Updates the `.atlas/<name>.json` metadata `version` accordingly.
  - Configures triggers by calling:
    - `POST /api/webhooks/register` if `triggers.webhook.enabled`.
    - `POST /api/schedules/register` if `triggers.schedule.enabled`.
- Prints Deployment Details, including:
  - `ID`, `Name`, `Version`, `Environment`, `Status`, `Deployed at`.
  - Webhook URL with `/env` prefix (e.g. `/dev/test-atlas-sdk`).
atlas execute
- Location: `src/cli/commands/execute.ts`.
- Purpose: Trigger a workflow execution remotely via the server’s `/api/workflows/execute`.
Behavior:
- Constructs a `WorkflowExecutionRequest`:

```typescript
interface WorkflowExecutionRequest {
  workflowId: string;                // workflow name
  version?: string;                  // default: latest
  input?: any;                       // JSON
  metadata?: Record<string, string>;
}
```

- Calls `POST /api/workflows/execute` with the request body.
- The server resolves the correct deployment and starts a Temporal workflow in the environment-specific task queue (see the DynamicWorker section).
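A client-side sketch of assembling such a request body follows. The `buildExecutionRequest` helper is hypothetical and not part of the CLI; it only shows how the fields above compose:

```typescript
// Hypothetical helper: assemble the JSON body that `atlas execute` would
// POST to /api/workflows/execute.
interface WorkflowExecutionRequest {
  workflowId: string;
  version?: string;
  input?: unknown;
  metadata?: Record<string, string>;
}

function buildExecutionRequest(
  workflowId: string,
  input: unknown = {},
  version?: string
): WorkflowExecutionRequest {
  const request: WorkflowExecutionRequest = { workflowId, input };
  if (version !== undefined) request.version = version; // omitted → latest
  return request;
}

const body = JSON.stringify(buildExecutionRequest('test-atlas-sdk', {}));
```

Sending it would then be a plain HTTP POST with `Content-Type: application/json` and the JWT from `~/.atlas/credentials.json` attached.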
atlas execute --local
- Location: `src/cli/commands/execute-local.ts`.
- Purpose: Run the workflow locally, bundling it with a transient Temporal worker and in-process activities.

Behavior:

- Loads `.env` from the project.
- Uses the same bundling flow as the SDK’s local worker, with an alias mapping `fsai-atlas/workflow` to the local `dist/sdk/workflow-only.js`.
- Executes the configured `workflowName` from `atlas.config.json` with the provided `--input` payload.
- Prints the workflow result (typically an object with status and application-specific payload).
Other management commands
atlas list

- `GET /api/workflows/deployments`.
- Prints deployments with `Name`, `Version`, `Env`, `Status`, `Namespace`, `Deployed At`, `ID`, where `Namespace` is typically `atlas-<env>-<useCaseTeamId>` (or `default` for legacy deployments).

atlas rollback <deploymentId>

- `POST /api/workflows/deployments/:id/rollback`.
- Marks the target as active, deactivates other active versions of the same workflow, and reloads workers.

atlas clean

- `GET /api/workflows/deployments`, then either:
  - Deletes deployments via `DELETE /api/workflows/deployments/:id`, or
  - Only cleans extracted files via `POST /api/workflows/deployments/:id/clean-files`.

atlas remove <deploymentId>

- Deletes a single deployment via `DELETE /api/workflows/deployments/:id`.

atlas login / logout / whoami

- Manage CLI authentication using GitHub OAuth.
CLI usage examples
This section shows real command flows using the jack.ai example located at `examples/atlas/jack.ai`.

All commands below assume the Atlas server is running from the `nodejs` package root:

```bash
cd /home/your-user/coding/atlas-temporal/nodejs
npm run build && npm run start:dev
```
Example: package and deploy to DEV
From the workflow project directory:
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

# Compile and package the workflow
atlas package

# Deploy to DEV environment
atlas deploy --env dev
```

What happens:
- `atlas package`:
  - Runs `npm run build` (compiling to `dist/`).
  - Creates `.atlas/test-atlas-sdk.tgz` and `.atlas/test-atlas-sdk.json`.
- `atlas deploy --env dev`:
  - Uploads the package to the Atlas server.
  - The server persists metadata and extracts the package under `/tmp/atlas-workflows/dev/<deploymentId>/package`.
  - The DEV worker task queue `test-atlas-sdk-dev` is started.
  - The CLI prints `Deployment Details` with the final server-assigned version (e.g. `1.1.0`).
  - `atlas.config.json` in the project directory is automatically updated so its `version` field matches the server version.
You can verify deployments with:
```bash
atlas list
```

Typical output:

```
Found 2 deployment(s):
┌────────────────┬─────────┬─────┬──────────┬────────────────────────────────────────┬────────────────────────┬───────────────────────┐
│ Name           │ Version │ Env │ Status   │ Namespace                              │ Deployed At            │ ID                    │
├────────────────┼─────────┼─────┼──────────┼────────────────────────────────────────┼────────────────────────┼───────────────────────┤
│ test-atlas-sdk │ 1.0.0   │ DEV │ inactive │ default                                │ ...                    │ DfVtSgpuGkXs0tX7K13gT │
├────────────────┼─────────┼─────┼──────────┼────────────────────────────────────────┼────────────────────────┼───────────────────────┤
│ test-atlas-sdk │ 1.1.0   │ DEV │ active   │ atlas-dev-00000000-0000-0000-0000-0001 │ ...                    │ 8CZI134Ph_gyZBGvEpLER │
└────────────────┴─────────┴─────┴──────────┴────────────────────────────────────────┴────────────────────────┴───────────────────────┘
```

Example: execute workflow remotely (DEV)
After a successful deploy to DEV, you can execute the workflow via CLI:
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas execute test-atlas-sdk --input '{}'
```

What happens:
- The CLI sends `POST /api/workflows/execute` with:
  - `workflowId = "test-atlas-sdk"`.
  - `input = {}`.
- The server resolves the active DEV deployment for `test-atlas-sdk`.
- A Temporal workflow is started in the `test-atlas-sdk-dev` task queue.
- The CLI prints an `Execution ID` and `Run ID`, and you can inspect progress via:
```bash
atlas logs <Execution ID>
```

You can also trigger the webhook directly (using the environment-prefixed route registered by the server):
```bash
curl --location 'http://localhost:3000/dev/test-atlas-sdk' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer your-secret-token' \
  --header 'X-Atlas-Use-Case-Team-Id: 00000000-0000-0000-0000-000000000001' \
  --data '{}'
```

This hits the Express handler registered for the DEV environment and delegates to the same Temporal workflow through `WebhookManager`.
Example: execute workflow locally (no server required)
Local execution is useful while developing workflows and activities, as it does not require Atlas server or MinIO to be running.
From the workflow project directory:
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas execute --local --input '{}'
```

What happens:
- The CLI loads `.env` from the project.
- A temporary Temporal worker is started in-process using the compiled workflow bundle and local activities.
- The workflow is executed once, and the result (for example, the first assistant message in a chat) is printed to stdout.
Example: promotion across environments with --env "uat to prd"
The `atlas deploy` command also supports promotion between environments using a pipeline: dev → sit → uat → prd.
Example: promote the currently active UAT deployment to PRD:
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas deploy --env "uat to prd"
```

What happens internally (`promoteDeployment` in `deploy.ts`):
- Parses `fromEnv = "uat"` and `toEnv = "prd"`.
- Validates that this follows the pipeline order.
- Reads `atlas.config.json` to get `name` and `useCaseTeamId`.
- Fetches deployments from `/api/workflows/deployments` and finds the active `test-atlas-sdk` deployment in UAT for that same `useCaseTeamId`.
- Downloads its package from `/api/workflows/deployments/:id/package` (code + secrets).
- Calls `POST /api/workflows/deploy` targeting `environment = "prd"` and forwards the same `useCaseTeamId`, so the new deployment runs in `atlas-prd-<useCaseTeamId>`.
- Prints Promotion Details indicating from/to environments and version.
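The pipeline-order check can be illustrated with a small sketch. The `isValidPromotion` helper is hypothetical (the real validation lives in `promoteDeployment`), and it assumes only single-stage promotions are allowed, which the source does not explicitly state:

```typescript
// Hypothetical sketch of the dev → sit → uat → prd promotion-order check.
// Assumption: the target must be the immediately next stage in the pipeline.
const PIPELINE: readonly string[] = ['dev', 'sit', 'uat', 'prd'];

function isValidPromotion(fromEnv: string, toEnv: string): boolean {
  const from = PIPELINE.indexOf(fromEnv);
  const to = PIPELINE.indexOf(toEnv);
  // Both environments must exist and follow the pipeline order.
  return from !== -1 && to !== -1 && to === from + 1;
}
```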
The end result is a new active deployment in PRD that is bit-for-bit identical to the UAT deployment and isolated in the team-specific namespace.
HTTP API Layer
Implemented in `src/api/workflow-api.ts` and compiled to `dist/api/workflow-api.js`. The main class is `WorkflowAPI`, instantiated by the server entrypoint.
Core routes
Health
`GET /health`
- No auth.
- Returns `{ status: 'ok', timestamp: ... }`.
Deployment management
`POST /api/workflows/deploy`
- Protected by `requireAuth` middleware.
- Validates the payload with `WorkflowDeploymentSchema`.
- Persists the deployment via `DeploymentStorage.saveDeployment`.
- Returns the new `WorkflowMetadata` object.
- Triggers:
  - Re-registration of webhook/schedule routes.
  - `DynamicWorker.reloadDeployments()` for immediate worker startup.

`GET /api/workflows/deployments`
- Lists all deployments (`getAllDeployments`).

`GET /api/workflows/deployments/:id`
- Returns a single deployment by ID or 404.

`GET /api/workflows/deployments/:id/package`
- Returns the base64-encoded package for a deployment (used by promotions).

`DELETE /api/workflows/deployments/:id`
- Deactivates or removes a deployment (depending on implementation detail).

`POST /api/workflows/deployments/:id/clean-files`
- Cleans extracted package files for inactive deployments only.

`POST /api/workflows/deployments/:id/rollback`
- Activates a specific deployment and deactivates other active versions of the same workflow.
- Triggers worker reload and re-registration of triggers.
Workflow discovery & execution
`GET /api/workflows/:name`
- Optionally filtered by `?version=...`.
- Returns the matching deployment metadata.

`POST /api/workflows/execute`
- Accepts a `WorkflowExecutionRequest`.
- Resolves the target deployment (by `workflowId` and optional `version`).
- Starts a Temporal workflow via `@temporalio/client` with:
  - `workflowId` generated from the logical workflowId + random suffix.
  - `taskQueue = "<deployment.name>-<deployment.environment>"`.
  - `args: [input]`.

`GET /api/executions/:executionId`
- Uses `TemporalClient.workflow.getHandle(executionId)` and `describe()` to report status.
Schedules API
Implemented in `setupScheduleRoutes()`:

`POST /api/schedules/register`
- Registers a new schedule with `ScheduleManager` and persists config.

`GET /api/schedules/list`
- Returns all known schedules and their metadata.

`GET /api/schedules/:scheduleId`
- Returns details for a single schedule.

`POST /api/schedules/:scheduleId/pause`
- Pauses a schedule, optionally with a note.

`POST /api/schedules/:scheduleId/unpause`
- Resumes a paused schedule.

`DELETE /api/schedules/:scheduleId`
- Unregisters a schedule and stops future runs.

`POST /api/schedules/validate-cron`
- Validates a cron expression and returns details (next run times, etc.).
Webhook routes
`WorkflowAPI.registerDeploymentTriggers()` inspects active deployments and, for each distinct webhook path, it:
- Builds an environment-specific path: `/<env><webhook.path>`, e.g. `/dev/test-atlas-sdk`, `/sit/test-atlas-sdk`.
- Registers one Express `POST` handler per path (per environment).
- On each request to that path, it:
  - Reloads the current set of active deployments for that path from storage (so new deploys/rollbacks take effect without server restarts).
  - Optionally enforces bearer/API-key auth using the configured secret.
  - Evaluates team configuration:
    - If any deployment for that path has `useCaseTeamId` set:
      - Requires the header `X-Atlas-Use-Case-Team-Id`.
      - If missing → `400` with a clear error message.
      - If present but no deployment matches that team → `404`.
    - If no deployment has `useCaseTeamId`:
      - The header is optional, but if provided and it does not match anything → `404`.
  - Selects the matching deployment and creates/uses a Temporal client bound to `deployment.namespace` (e.g. `atlas-dev-<team>`, `atlas-sit-<team>`).
  - Uses `session_id` from the incoming body (or generates one) to derive a `workflowId` for the Temporal workflow, enabling conversational sessions.
  - Reuses a running workflow for the same session via the signal `newMessage`, or starts a new workflow if none exists.
  - Waits for the workflow result and returns it as the HTTP response.
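The team-routing rules above can be sketched as a pure function. The types and the `selectDeployment` helper are hypothetical; the real logic lives inside the Express handler:

```typescript
// Hypothetical sketch of the X-Atlas-Use-Case-Team-Id routing rules.
interface Deployment {
  name: string;
  useCaseTeamId?: string;
}

type RoutingResult =
  | { status: 200; deployment: Deployment }
  | { status: 400 | 404; error: string };

function selectDeployment(
  deployments: Deployment[],
  teamHeader?: string
): RoutingResult {
  const anyTeamScoped = deployments.some((d) => d.useCaseTeamId);

  // Team-scoped deployments require the team header.
  if (anyTeamScoped && !teamHeader) {
    return { status: 400, error: 'X-Atlas-Use-Case-Team-Id header is required' };
  }
  // If a header is given, it must match a deployment's team.
  if (teamHeader) {
    const match = deployments.find((d) => d.useCaseTeamId === teamHeader);
    return match
      ? { status: 200, deployment: match }
      : { status: 404, error: 'No deployment for this team' };
  }
  // Legacy case: no team-scoped deployments and no header.
  const fallback = deployments[0];
  return fallback
    ? { status: 200, deployment: fallback }
    : { status: 404, error: 'No deployment for this path' };
}
```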
This mechanism powers chat-like bots such as the jack.ai example while properly isolating tenants by `useCaseTeamId` + Temporal namespace.
Storage Layer
Located under `src/storage/*` and compiled to `dist/storage/*`.
DeploymentStorage
Responsibilities:
- Persist and retrieve deployment metadata and package blobs.
- Generate version numbers for new deployments (auto-increment when necessary).
- Maintain `status` (`active`, `inactive`, `deprecated`).
Backends:
- Filesystem backend (for local/dev usage) – stores deployments under a local directory.
- MinIO backend – uses an S3-compatible bucket for storing packages, and local extracted paths under `/tmp/atlas-workflows/<env>/<id>/package`.
Critical behavior:
- On `saveDeployment`:
  - Requires `useCaseTeamId` and derives a Temporal namespace: `atlas-<environment>-<useCaseTeamId>`.
  - Assigns a new `version` if none is provided or if the target environment already has an active version for that same `useCaseTeamId`.
  - Persists metadata in a durable store (e.g., filesystem JSON/MinIO) including `namespace` and `useCaseTeamId`.
  - When listing or resolving deployments, prefers namespaced entries over legacy default-namespace deployments.
- `getDeploymentPackage(id)`:
  - Returns `{ code: <base64-tgz>, secrets?: <base64-env> }`.
  - Used by `DynamicWorker.ensurePackageExtracted` and by promotions.
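The version auto-increment could look roughly like this sketch. It is an assumption, not the real `DeploymentStorage` logic: the minor-bump choice merely matches the `1.0.0` → `1.1.0` output shown in the deploy example earlier in this README:

```typescript
// Hypothetical sketch of version auto-increment on deploy. Assumes a
// minor-version bump (1.0.0 -> 1.1.0), consistent with the example output
// earlier in this README; the real DeploymentStorage may differ.
function nextVersion(activeVersion?: string): string {
  if (!activeVersion) return '1.0.0'; // first deployment for this env/team
  const [major, minor] = activeVersion.split('.').map(Number);
  return `${major}.${minor + 1}.0`;
}
```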
WebhookConfigStorage and ScheduleConfigStorage
- Store trigger configurations per deployment.
- Allow `WebhookManager` and `ScheduleManager` to rebuild in-memory state on restart.
Triggers Layer
Located in `src/triggers/*` → `dist/triggers/*`.
WebhookManager
- Holds the mapping between HTTP routes and workflow deployments.
- Is tightly integrated with `WorkflowAPI.registerDeploymentTriggers()`.
- Delegates actual workflow execution to Temporal clients.
ScheduleManager
- Manages Temporal schedules for workflows.
- Provides methods to:
  - `registerSchedule(workflowId, version, config, input)`.
  - `getSchedules()`, `getScheduleInfo(id)`, `pauseSchedule`, `unpauseSchedule`, `unregisterSchedule`.
  - `validateCronExpression`.
Schedules trigger workflows on a cron-like basis, using the same deployment isolation semantics as webhooks.
Dynamic Worker and Isolation Model
File: `src/worker/dynamic-worker.ts` → `dist/worker/dynamic-worker.js`.

`DynamicWorker` is the core abstraction that:
- Discovers active deployments.
- Extracts their packages.
- Starts one Temporal `Worker` instance per deployment, per environment.
- Ensures task queue isolation (`<name>-<env>`).
- Handles hot-reload when new deployments are created.
Key fields
- `workers: Map<string, Worker>` – keyed by deployment ID.
- `connection: NativeConnection` – shared Temporal worker connection.
- `clientConnection: Connection` – shared client connection for namespace checks and management.
- `namespaceManager` – manages Temporal namespaces, ensuring per-deployment namespaces like `atlas-<env>-<useCaseTeamId>` exist before workers start.
- `storage: DeploymentStorage` – source of deployment metadata & packages.
- `notifier?: DeploymentNotifier` – optional Redis-based hot-reload channel.
Lifecycle
start()
- Connects to Temporal via `NativeConnection`.
- Connects a `Connection` for namespace operations.
- Calls `reloadDeployments()` to start workers for all active deployments.
- If `DeploymentNotifier` is configured:
  - Subscribes to the `atlas:deployments` Redis channel.
  - On event, fetches the deployment metadata and calls `startWorkerForDeployment()`.
- Otherwise, falls back to polling `reloadDeployments()` every 10 seconds.
reloadDeployments()
- Fetches all active deployments from `DeploymentStorage`.
- For each, calls `startWorkerForDeployment(deployment)`.
startWorkerForDeployment(deployment)
- Computes `workerId = deployment.id`.
- If a worker already exists for this `workerId`, it does nothing (prevents duplicates on repeated reloads).
- Stops any old workers for the same `deployment.name` and `deployment.environment`:
  - Ensures only one active version per workflow per environment.
- Ensures the package is extracted (`ensurePackageExtracted`):
  - If `deployment.packagePath` does not exist, decodes the base64 `.tgz` from `DeploymentStorage` and extracts it into that directory.
- Derives `workflowsPath` from `deployment.packagePath` + `deployment.entrypoint`.
- Loads secrets (`loadSecretsForDeployment`) before loading activities, so activities can read environment variables.
- Loads activities via `loadActivitiesForDeployment`.
- Constructs a deployment-specific task queue:

```typescript
const deploymentTaskQueue = `${deployment.name}-${deployment.environment}`;
```

- Creates a Temporal `Worker`:

```typescript
const worker = await Worker.create({
  connection: this.connection!,
  namespace: deployment.namespace ?? this.namespace,
  taskQueue: deploymentTaskQueue,
  workflowsPath,
  activities,
  identity: `atlas-worker-${deployment.name}-${deployment.version}-${deployment.environment}`,
  bundlerOptions: {
    webpackConfigHook: (config) => { /* ...alias setup... */ },
  },
});
```

- Stores the worker in `this.workers` and starts `worker.run()` in the background.
Bundler aliases for workflows
Inside `webpackConfigHook`, aliases ensure workflows import the correct, workflow-safe SDK:

```typescript
const sdkWorkflowPath = path.join(__dirname, '../sdk/workflow-only.js');
alias['fsai-atlas/workflow'] = sdkWorkflowPath;
// (Optional legacy)
alias['@atlas'] = sdkWorkflowPath;
alias['@temporalio/workflow'] = path.join(
  deployment.packagePath,
  'node_modules/@temporalio/workflow'
);
```
This guarantees that:
- Workflow code uses the embedded `workflow-only` APIs shipped with this runtime, ensuring consistent behavior across deployments.
- The worker uses the `@temporalio/workflow` version from the deployment’s own `node_modules`, preventing multiple versions of the Temporal workflow runtime from being bundled into the same isolate (which would break private fields).
Loading secrets
`loadSecretsForDeployment(deployment)`:

- If `deployment.secrets` (base64) is present:
  - Decodes it to text (`KEY=VALUE` per line).
  - Parses and sets `process.env[KEY] = VALUE` for each.
  - Allows both activities and workflows (via activities) to read configuration such as API keys and DB credentials.
Loading activities
`loadActivitiesForDeployment(deployment)`:

- Computes:

```typescript
const activitiesPath = path.join(deployment.packagePath, 'dist', 'activities', 'index.js');
```

- Installs a minimal module alias only for legacy `@atlas` imports:

```typescript
Module._resolveFilename = function (request, parent, isMain) {
  if (request === '@atlas') {
    return path.join(
      deployment.packagePath,
      'node_modules/fsai-atlas/dist/sdk/index.js'
    );
  }
  return originalResolveFilename.call(this, request, parent, isMain);
};
```

- Clears the `require` cache for `activitiesPath` and requires it.
- Restores the original resolver.
- Logs the exported activity names and returns them as the activities object used by the Temporal worker.

Important: `fsai-atlas` imports are not aliased here. They resolve naturally to `deployment.packagePath/node_modules/fsai-atlas`, ensuring each deployment uses its own SDK version as installed by `npm install` in the workflow project.
Isolation properties
The combination of `DynamicWorker` + `DeploymentStorage` yields:

- Per-deployment task queues:
  - `<workflowName>-<env>` task queues ensure that each environment (DEV, SIT, UAT, PRD) has separate workers and traffic.
- Per-team Temporal namespaces:
  - Each deployment has a namespace like `atlas-<env>-<useCaseTeamId>`, isolating execution and visibility between teams/tenants.
- Hot reload on deploy:
  - When a new deployment becomes active, old workers for the same workflow+env are shut down and their deployment files are kept for potential rollback.
- Per-deployment SDK:
  - Activities resolve `fsai-atlas` from the deployment’s own `node_modules`.
  - Workflow bundling uses the runtime’s `workflow-only` entrypoint but forces Temporal’s workflow runtime (`@temporalio/workflow`) to match the deployment.
- Safe secrets handling:
  - Secrets are loaded from deployment metadata into `process.env` before activities are required, so activities always see the correct configuration for that deployment.
End-to-end example: jack.ai interview flow
This section walks through the complete path of a request in the jack.ai
example, from CLI command or HTTP call all the way to Temporal workflows,
activities, and logs.
Preconditions
- The Atlas server is running from the `nodejs` package root:

```bash
cd /home/your-user/coding/atlas-temporal/nodejs
npm run build && npm run start:dev
```

- Temporal, Redis, MinIO and Postgres (if configured) are up via `docker compose`.
- You have authenticated via:

```bash
atlas login
```

- The `jack.ai` workflow project exists at `examples/atlas/jack.ai` with a valid `atlas.config.json` and `candidate.json`.
Step 1 – Package and deploy the workflow
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas package
atlas deploy --env dev
```

Key effects:
- `atlas package` compiles and bundles the project, producing:
  - `.atlas/test-atlas-sdk.tgz` – the workflow + node_modules.
  - `.atlas/test-atlas-sdk.json` – metadata including triggers and entrypoint.
- `atlas deploy --env dev` uploads the package and calls `DeploymentStorage.saveDeployment`, which:
  - Assigns or increments the deployment version (e.g. `1.1.0`).
  - Persists deployment metadata.
  - Stores the `.tgz` in MinIO or the filesystem.
  - Updates `atlas.config.json` and `.atlas/*.json` with the final version.
- On the server, `WorkflowAPI`:
  - Persists the deployment.
  - Calls `registerDeploymentTriggers()` to register `/dev/test-atlas-sdk`.
  - Triggers `DynamicWorker.reloadDeployments()`.
- `DynamicWorker`:
  - Extracts the package into `/tmp/atlas-workflows/dev/<id>/package`.
  - Loads secrets from deployment metadata into `process.env`.
  - Loads activities from `dist/activities/index.js` inside the package.
  - Starts a Temporal worker with task queue `test-atlas-sdk-dev`.
You can validate the active deployment with:
```bash
atlas list
```

Step 2 – Local dry-run with atlas execute --local
Before or after deploying, you can run the workflow locally without touching the server or MinIO:
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas execute --local --input '{}'
```

What happens:
- The CLI loads `.env`.
- It bundles the workflow using the local `dist/` build and aliases `fsai-atlas/workflow` to the local workflow-only SDK.
- It starts a temporary Temporal worker in-process, executes the workflow once, and prints a JSON result like:

```json
{
  "status": "in_progress",
  "assistant_response": "<fast_thinking> ...",
  "session_id": "session-...",
  "message_count": 1
}
```

This is useful for quickly iterating on prompts, activities and the overall conversation logic.
Step 3 – Remote execution via CLI
Once deployed to DEV, you can start a remote execution from the CLI:
```bash
cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas execute test-atlas-sdk --input '{}'
```

Data flow:
- The CLI sends `POST /api/workflows/execute` with `workflowId = "test-atlas-sdk"`.
- `WorkflowAPI`:
  - Resolves the active DEV deployment for `test-atlas-sdk`.
  - Chooses `taskQueue = "test-atlas-sdk-dev"`.
  - Starts a Temporal workflow via `@temporalio/client`.
- `DynamicWorker` already has a worker polling `test-atlas-sdk-dev`, so it picks up the workflow task and starts executing the workflow code.
In the server logs you will see entries such as:
Worker startup:

```
🚀 Starting worker for: test-atlas-sdk v1.1.0
✅ Package already extracted
   Workflow: /tmp/atlas-workflows/dev/<id>/package/dist/workflows/index.js
✅ Secrets loaded
   Activities: loadCandidateInfo, initializeAssistant, sendMessage, ...
```

Activity-level logs from the example workflow:

```
[INFO] [loadCandidateInfo] Loading candidate information { candidateFile: 'candidate.json' }
[INFO] [loadCandidateInfo] Candidate data loaded { name: 'Matheus Balbino', position: 'Senior Software Engineer' }
[INFO] [initializeAssistant] Initializing Jack with candidate context { ... }
[Activity] Starting: openai.chat.gpt-4.1 { workflowId: '...', attempt: 1 }
[Activity] Success: openai.chat.gpt-4.1 { duration: '...ms' }
```
You can inspect the execution in Temporal UI at http://localhost:8080 using
the Execution ID printed by the CLI.
Step 4 – Remote execution via webhook
The same workflow can be driven purely via HTTP using the webhook registered
for each environment (e.g. DEV, SIT, UAT), and routed by team via
`useCaseTeamId`:
```bash
curl --location 'http://localhost:3000/dev/test-atlas-sdk' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer your-secret-token' \
  --header 'X-Atlas-Use-Case-Team-Id: 00000000-0000-0000-0000-000000000001' \
  --data '{}'
```

Data flow:
- Express receives the request on `/dev/test-atlas-sdk`.
- The `WorkflowAPI.registerDeploymentTriggers()` handler:
  - Reloads the active deployments for that path from storage.
  - Validates authentication against the configured bearer secret.
  - If any deployment has `useCaseTeamId`, enforces the `X-Atlas-Use-Case-Team-Id` header and selects the matching deployment.
  - Uses `deployment.namespace` (e.g. `atlas-dev-<team>`, `atlas-sit-<team>`, `atlas-uat-<team>`) when creating the Temporal client.
- It then:
  - Derives a `session_id` from the body or generates one.
  - Builds a workflow ID like `"test-atlas-sdk-dev-<session_id>"`.
  - Tries to attach to a running workflow for that session via `getHandle(workflowId)` and the signal `newMessage`, or starts a new workflow if one does not exist.
- It waits for the workflow to produce a result and returns it as JSON.
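The session-to-workflow-ID mapping can be sketched as follows. The `deriveWorkflowId` helper is hypothetical; the `session-` prefix for generated IDs is an assumption based on the local-execution output shown earlier:

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical sketch: derive a stable, session-scoped workflow ID so that
// repeated webhook calls with the same session_id reach the same workflow.
function deriveWorkflowId(
  name: string,
  env: string,
  sessionId?: string
): { workflowId: string; sessionId: string } {
  const sid = sessionId ?? `session-${randomUUID()}`; // generate if absent
  return { workflowId: `${name}-${env}-${sid}`, sessionId: sid };
}
```

Because the ID is deterministic per session, the handler can first try `client.workflow.getHandle(workflowId)` and fall back to starting a new workflow when no run exists.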
This mechanism is what turns the jack.ai workflow into a stateful
conversational assistant reachable over HTTP, with tenant isolation via
Temporal namespaces and `useCaseTeamId`.
Summary
The `nodejs` package is the central runtime for Atlas:
- SDK (`fsai-atlas`): workflow & activity APIs, AI/DB integrations, workflow-side helpers.
- CLI (`atlas`): packaging, deployment, execution, authentication, environment management.
- Server / HTTP API: deployment lifecycle, execution, webhooks, schedules.
- Storage: filesystem/MinIO-backed deployment and trigger persistence.
- Triggers: webhook & schedule orchestration feeding into Temporal.
- Dynamic worker: strong isolation by deployment and environment with hot reload and per-deployment SDK resolution.
