fsai-atlas

v0.3.9

Simplified SDK for Temporal.io with automatic workflow deployment

Atlas Node.js Platform (fsai-atlas)

This package provides the Node.js runtime platform for Atlas, composed of:

  • SDK (fsai-atlas) – workflow and activity APIs used by workflow projects.
  • Server – HTTP API + Temporal worker orchestration.
  • CLI (atlas) – packaging, deployment, execution and management of workflows.
  • Infrastructure adapters – storage (filesystem/MinIO) and triggers (webhook/schedule).

It is designed to:

  • Isolate workflows per deployment, environment and team (namespace per useCaseTeamId).
  • Keep SDK versioning decoupled per deployment (each package ships its own SDK).
  • Provide a simple & opinionated deployment model on top of Temporal.

High-Level Architecture

At a high level, the nodejs package is responsible for:

  1. SDK layer (dist/sdk/*)

    • Workflow-side utilities (workflow, workflow-only, workflow-primitives).
    • Activity-side utilities (activity, activities, activities/ai/*, activities/db/*).
    • Types and client helpers.
  2. Runtime server (dist/server.js)

    • Starts:
      • Express HTTP API (WorkflowAPI).
      • Dynamic Temporal workers (DynamicWorker).
      • Webhook and schedule trigger managers.
  3. Storage layer (dist/storage/*)

    • Metadata + package storage for workflow deployments.
    • Filesystem / MinIO backends.
  4. Triggers layer (dist/triggers/*)

    • Webhook trigger manager.
    • Schedule trigger manager.
  5. Notifications (dist/notifications/*)

    • Redis-based pub/sub for hot-reloading workers when new deployments are created.
  6. CLI (dist/cli/*)

    • atlas package, atlas deploy, atlas execute, atlas list, atlas rollback, atlas clean, etc.

SDK: fsai-atlas

Entry points

  • Runtime root: dist/sdk/index.js
  • Workflow-only bundle: dist/sdk/workflow-only.js
  • Workflow helpers: dist/sdk/workflow.js, dist/sdk/workflow-base.js, dist/sdk/workflow-primitives.js
  • Activity helpers: dist/sdk/activity.js, dist/sdk/activities/index.js

Typical usage in a workflow project (TypeScript/JavaScript):

// Workflow side
import { ai } from 'fsai-atlas/workflow';

// Activity side
import { createOpenAIChatActivity } from 'fsai-atlas';

Internally, the SDK exports:

  • Workflow primitives

    • Cloud-friendly wrapper around Temporal workflow APIs.
    • Helpers for sessions, retries and structured logging inside workflows.
  • AI Activities (dist/sdk/activities/ai/*)

    • openai activities (chat, completion, STT, TTS):
      • Handle prompt assembly, streaming, retries and error normalization.
    • Can be called directly from workflows through the SDK.
  • DB Activities (dist/sdk/activities/db/*)

    • mongodb, mysql, postgres connectors.
    • Centralized configuration and connection pooling.
  • Secrets utilities (dist/utils/secrets.js)

    • Base64 encoding/decoding of .env-style secrets.
    • Loading encoded secrets into process.env.
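The base64 round-trip for .env-style secrets can be sketched as follows. This is a simplified illustration; the function names (encodeSecrets, decodeSecrets, loadSecrets) are hypothetical and may not match the actual exports of dist/utils/secrets.js:

```typescript
// Sketch of the .env-style secrets round-trip described above.
// Names are illustrative; the real dist/utils/secrets.js exports may differ.

/** Encode the raw text of a .env file to base64 for transport/storage. */
function encodeSecrets(envText: string): string {
  return Buffer.from(envText, 'utf8').toString('base64');
}

/** Decode base64 secrets back to KEY=VALUE text. */
function decodeSecrets(encoded: string): string {
  return Buffer.from(encoded, 'base64').toString('utf8');
}

/** Parse KEY=VALUE lines and load them into process.env. */
function loadSecrets(encoded: string): void {
  for (const line of decodeSecrets(encoded).split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;                           // skip malformed lines
    process.env[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
}
```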

Critical design: workflows never import the full SDK directly. They use the workflow-only entrypoint (fsai-atlas/workflow), which is safe to bundle into the Temporal workflow isolate and avoids accidental inclusion of Node-only code.


CLI: atlas

The CLI is built on top of this package and provides commands to:

  • Package workflows from a project directory.
  • Deploy packages to the Atlas server.
  • Execute workflows remotely or locally.
  • List / rollback / clean / remove deployments.
  • Authenticate against the Atlas control plane.

Authentication

CLI commands that talk to the server are wrapped with protectedCommand and use AuthGuard:

  • Credentials are stored in ~/.atlas/credentials.json.
  • atlas login uses GitHub OAuth (GitHubAuthManager) to obtain a JWT.
  • atlas logout removes local credentials.
  • atlas whoami shows current user and token expiry.

Core commands

Below is a summary of the main commands and what they do.

atlas package

  • Location: src/cli/commands/package.ts.
  • Purpose: Build and bundle a workflow project into a .tgz package + metadata.

Input:

  • Runs in a workflow project directory (containing atlas.config.json).
  • Uses npm run build or tsc (configured in the project) to compile to dist/.

Behavior:

  • Reads atlas.config.json:
    • name, version, description, workflowName, entrypoint, triggers, and useCaseTeamId (required, identifies the team/tenant).
  • Fails fast if useCaseTeamId is missing or empty, guiding you to set a UUID per team before packaging.
  • Produces in .atlas/:
    • name-version.tgz: tarball with dist/ + node_modules (symlink-aware).
    • name-version.json: JSON metadata mirroring atlas.config.json plus:
      • packagedAt, packageFile, optional secrets, and useCaseTeamId.
  • Prints Package Details (local manifest), explicitly stating that Manifest Version comes from atlas.config.json.
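The fail-fast check on useCaseTeamId amounts to a small validator, sketched below. The config shape and error wording are illustrative, not the exact logic of src/cli/commands/package.ts:

```typescript
// Minimal sketch of the atlas.config.json validation described above.
// Fields are reduced to those the check needs; wording is illustrative.
interface AtlasConfig {
  name: string;
  version: string;
  workflowName: string;
  entrypoint: string;
  useCaseTeamId?: string; // required in practice; optional here so we can validate
}

function validateConfig(config: AtlasConfig): void {
  if (!config.useCaseTeamId || config.useCaseTeamId.trim() === '') {
    throw new Error(
      'atlas.config.json is missing "useCaseTeamId". ' +
      'Set a UUID per team before running `atlas package`.'
    );
  }
}
```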

atlas deploy

  • Location: src/cli/commands/deploy.ts.
  • Purpose: Send a packaged workflow to the Atlas server and activate it for an environment.

Behavior:

  1. Resolves the API URL from environment or flag (--api).

  2. Loads .atlas/<name>.tgz and its metadata JSON (unless --package is passed).

  3. Reads the package file and encodes it to base64.

  4. Calls POST /api/workflows/deploy with body:

    {
      "name": "...",
      "version": "...",        // from metadata
      "description": "...",
      "code": "<base64-tgz>",
      "entrypoint": "dist/workflows/index.js",
      "secrets": "<base64-env>",
      "environment": "dev|sit|uat|prd",
      "useCaseTeamId": "00000000-0000-0000-0000-000000000001"
    }

    The server uses environment + useCaseTeamId to derive a dedicated Temporal namespace for the deployment:

    atlas-<environment>-<useCaseTeamId>
  5. On success, it:

    • Synchronizes versions:
      • Reads atlas.config.json in cwd and updates version to result.deployment.version if different.
      • Updates the .atlas/<name>.json metadata version accordingly.
    • Configures triggers by calling:
      • POST /api/webhooks/register if triggers.webhook.enabled.
      • POST /api/schedules/register if triggers.schedule.enabled.
  6. Prints Deployment Details, including:

    • ID, Name, Version, Environment, Status, Deployed at.
    • Webhook URL with /env prefix (e.g. /dev/test-atlas-sdk).
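The two naming rules involved here (namespace per environment + team, task queue per workflow + environment) are simple string derivations, sketched below for reference; the helper names are hypothetical:

```typescript
// Sketch of the naming rules used for isolation (helper names are illustrative).

/** Temporal namespace: one per environment and team/tenant. */
function deriveNamespace(environment: string, useCaseTeamId: string): string {
  return `atlas-${environment}-${useCaseTeamId}`;
}

/** Task queue: one per workflow name and environment. */
function deriveTaskQueue(workflowName: string, environment: string): string {
  return `${workflowName}-${environment}`;
}
```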

atlas execute

  • Location: src/cli/commands/execute.ts.
  • Purpose: Trigger a workflow execution remotely via the server’s /api/workflows/execute.

Behavior:

  • Constructs WorkflowExecutionRequest:

    interface WorkflowExecutionRequest {
      workflowId: string;  // workflow name
      version?: string;    // default: latest
      input?: any;         // JSON
      metadata?: Record<string, string>;
    }
  • Calls POST /api/workflows/execute with the request body.

  • Server resolves the correct deployment and starts a Temporal workflow in the environment-specific task queue (see DynamicWorker section).
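Building the request body before the POST can be sketched as below. The builder is illustrative (the actual CLI code in execute.ts may differ); only the WorkflowExecutionRequest shape comes from the interface above:

```typescript
// Sketch of building and validating a WorkflowExecutionRequest before
// POSTing it to /api/workflows/execute. Illustrative only.
interface WorkflowExecutionRequest {
  workflowId: string;  // workflow name
  version?: string;    // default: latest
  input?: any;         // JSON
  metadata?: Record<string, string>;
}

function buildExecutionRequest(
  workflowId: string,
  rawInput?: string,   // e.g. the value of the --input flag
  version?: string
): WorkflowExecutionRequest {
  if (!workflowId) throw new Error('workflowId is required');
  const req: WorkflowExecutionRequest = { workflowId };
  if (version) req.version = version;
  if (rawInput !== undefined) req.input = JSON.parse(rawInput); // fail early on bad JSON
  return req;
}
```

The resulting object would then be serialized as the JSON body of POST /api/workflows/execute.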

atlas execute --local

  • Location: src/cli/commands/execute-local.ts.
  • Purpose: Run the workflow locally, bundling it with a transient Temporal worker and in-process activities.

Behavior:

  • Loads .env from the project.
  • Uses the same bundling flow as the SDK’s local worker, with an alias mapping fsai-atlas/workflow to the local dist/sdk/workflow-only.js.
  • Executes the configured workflowName from atlas.config.json with the provided --input payload.
  • Prints the workflow result (typically an object with status and application-specific payload).

Other management commands

  • atlas list

    • GET /api/workflows/deployments.
    • Prints deployments with Name, Version, Env, Status, Namespace, Deployed At, ID, where Namespace is typically atlas-<env>-<useCaseTeamId> (or default for legacy deployments).
  • atlas rollback <deploymentId>

    • POST /api/workflows/deployments/:id/rollback.
    • Marks the target as active, deactivates other active versions of the same workflow, and reloads workers.
  • atlas clean

    • GET /api/workflows/deployments then either:
      • Deletes deployments via DELETE /api/workflows/deployments/:id, or
      • Only cleans extracted files via POST /api/workflows/deployments/:id/clean-files.
  • atlas remove <deploymentId>

    • Deletes a single deployment via DELETE /api/workflows/deployments/:id.
  • atlas login / logout / whoami

    • Manage CLI authentication using GitHub OAuth.

CLI usage examples

This section shows real command flows using the jack.ai example located at examples/atlas/jack.ai.

All commands below assume the Atlas server is running from the nodejs package root:

cd /home/your-user/coding/atlas-temporal/nodejs
npm run build && npm run start:dev

Example: package and deploy to DEV

From the workflow project directory:

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

# Compile and package the workflow
atlas package

# Deploy to DEV environment
atlas deploy --env dev

What happens:

  • atlas package:
    • Runs npm run build (compiling to dist/).
    • Creates .atlas/test-atlas-sdk.tgz and .atlas/test-atlas-sdk.json.
  • atlas deploy --env dev:
    • Uploads the package to the Atlas server.
    • Server persists metadata and extracts the package under /tmp/atlas-workflows/dev/<deploymentId>/package.
    • The DEV worker task queue test-atlas-sdk-dev is started.
    • CLI prints Deployment Details with the final server-assigned version (e.g. 1.1.0).
    • atlas.config.json in the project directory is automatically updated so its version field matches the server version.

You can verify deployments with:

atlas list

Typical output:

Found 2 deployment(s):

┌────────────────┬─────────┬─────┬──────────┬────────────────────────────────────────┬────────────────────────┬───────────────────────┐
│ Name           │ Version │ Env │ Status   │ Namespace                              │ Deployed At            │ ID                    │
├────────────────┼─────────┼─────┼──────────┼────────────────────────────────────────┼────────────────────────┼───────────────────────┤
│ test-atlas-sdk │ 1.0.0   │ DEV │ inactive │ default                                │ ...                    │ DfVtSgpuGkXs0tX7K13gT │
├────────────────┼─────────┼─────┼──────────┼────────────────────────────────────────┼────────────────────────┼───────────────────────┤
│ test-atlas-sdk │ 1.1.0   │ DEV │ active   │ atlas-dev-00000000-0000-0000-0000-0001 │ ...                    │ 8CZI134Ph_gyZBGvEpLER │
└────────────────┴─────────┴─────┴──────────┴────────────────────────────────────────┴────────────────────────┴───────────────────────┘

Example: execute workflow remotely (DEV)

After a successful deploy to DEV, you can execute the workflow via CLI:

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas execute test-atlas-sdk --input '{}'

What happens:

  • CLI sends POST /api/workflows/execute with:
    • workflowId = "test-atlas-sdk".
    • input = {}.
  • Server resolves the active DEV deployment for test-atlas-sdk.
  • A Temporal workflow is started in the test-atlas-sdk-dev task queue.
  • CLI prints an Execution ID and Run ID, and you can inspect progress via:

    atlas logs <Execution ID>


You can also trigger the webhook directly (using the environment-prefixed route registered by the server):

curl --location 'http://localhost:3000/dev/test-atlas-sdk' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer your-secret-token' \
  --header 'X-Atlas-Use-Case-Team-Id: 00000000-0000-0000-0000-000000000001' \
  --data '{}'

This hits the Express handler registered for the DEV environment and delegates to the same Temporal workflow through WebhookManager.

Example: execute workflow locally (no server required)

Local execution is useful while developing workflows and activities, as it does not require Atlas server or MinIO to be running.

From the workflow project directory:

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas execute --local --input '{}'

What happens:

  • CLI loads .env from the project.
  • A temporary Temporal worker is started in-process using the compiled workflow bundle and local activities.
  • The workflow is executed once, and the result (for example, the first assistant message in a chat) is printed to stdout.

Example: promotion across environments with --env "uat to prd"

The atlas deploy command also supports promotion between environments using a pipeline: dev → sit → uat → prd.

Example: promote the currently active UAT deployment to PRD:

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas deploy --env "uat to prd"

What happens internally (promoteDeployment in deploy.ts):

  1. Parses fromEnv = "uat" and toEnv = "prd".

  2. Validates that this follows the pipeline order.

  3. Reads atlas.config.json to get name and useCaseTeamId.

  4. Fetches deployments from /api/workflows/deployments and finds the active test-atlas-sdk deployment in UAT for that same useCaseTeamId.

  5. Downloads its package from /api/workflows/deployments/:id/package (code + secrets).

  6. Calls POST /api/workflows/deploy targeting environment = "prd" and forwards the same useCaseTeamId, so the new deployment runs in:

    atlas-prd-<useCaseTeamId>
  7. Prints Promotion Details indicating from/to environments and version.

The end result is a new active deployment in PRD that is bit-for-bit identical to the UAT deployment and isolated in the team-specific namespace.
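Steps 1 and 2 boil down to parsing the flag and checking it against the pipeline. A minimal sketch follows, assuming promotions move one stage at a time (the actual rule enforced by promoteDeployment in deploy.ts may be looser):

```typescript
// Sketch of parsing/validating a promotion such as --env "uat to prd"
// against the dev -> sit -> uat -> prd pipeline. Assumes single-stage
// promotion; the real validation in deploy.ts may differ.
const PIPELINE: string[] = ['dev', 'sit', 'uat', 'prd'];

function parsePromotion(envFlag: string): { fromEnv: string; toEnv: string } {
  const match = /^(\w+)\s+to\s+(\w+)$/.exec(envFlag.trim());
  if (!match) throw new Error(`Not a promotion: "${envFlag}"`);
  const [, fromEnv, toEnv] = match;
  const fromIdx = PIPELINE.indexOf(fromEnv);
  const toIdx = PIPELINE.indexOf(toEnv);
  if (fromIdx === -1 || toIdx === -1) throw new Error('Unknown environment');
  if (toIdx !== fromIdx + 1) {
    throw new Error(`Invalid promotion ${fromEnv} -> ${toEnv}: must follow ${PIPELINE.join(' -> ')}`);
  }
  return { fromEnv, toEnv };
}
```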


HTTP API Layer

Implemented in src/api/workflow-api.ts and compiled to dist/api/workflow-api.js. The main class is WorkflowAPI, instantiated by the server entrypoint.

Core routes

Health

  • GET /health
    • No auth.
    • Returns { status: 'ok', timestamp: ... }.

Deployment management

  • POST /api/workflows/deploy

    • Protected by requireAuth middleware.
    • Validates payload with WorkflowDeploymentSchema.
    • Persists the deployment via DeploymentStorage.saveDeployment.
    • Returns the new WorkflowMetadata object.
    • Triggers:
      • Re-registration of webhook/schedule routes.
      • DynamicWorker.reloadDeployments() for immediate worker startup.
  • GET /api/workflows/deployments

    • Lists all deployments (getAllDeployments).
  • GET /api/workflows/deployments/:id

    • Returns a single deployment by ID or 404.
  • GET /api/workflows/deployments/:id/package

    • Returns the base64-encoded package for a deployment (used by promotions).
  • DELETE /api/workflows/deployments/:id

    • Deactivates or removes a deployment (depending on the implementation).
  • POST /api/workflows/deployments/:id/clean-files

    • Cleans extracted package files for inactive deployments only.
  • POST /api/workflows/deployments/:id/rollback

    • Activates a specific deployment and deactivates other active versions of the same workflow.
    • Triggers worker reload and re-registration of triggers.

Workflow discovery & execution

  • GET /api/workflows/:name

    • Optionally filtered by ?version=....
    • Returns the matching deployment metadata.
  • POST /api/workflows/execute

    • Accepts a WorkflowExecutionRequest.
    • Resolves the target deployment (by workflowId and optional version).
    • Starts a Temporal workflow via the @temporalio/client with:
      • workflowId generated from the logical workflowId + random suffix.
      • taskQueue = "<deployment.name>-<deployment.environment>".
      • args: [input].
  • GET /api/executions/:executionId

    • Uses TemporalClient.workflow.getHandle(executionId) and describe() to report status.

Schedules API

Implemented in setupScheduleRoutes():

  • POST /api/schedules/register

    • Registers a new schedule with ScheduleManager and persists config.
  • GET /api/schedules/list

    • Returns all known schedules and their metadata.
  • GET /api/schedules/:scheduleId

    • Returns details for a single schedule.
  • POST /api/schedules/:scheduleId/pause

    • Pauses a schedule, optionally with a note.
  • POST /api/schedules/:scheduleId/unpause

    • Resumes a paused schedule.
  • DELETE /api/schedules/:scheduleId

    • Unregisters a schedule and stops future runs.
  • POST /api/schedules/validate-cron

    • Validates a cron expression and returns details (next run times, etc.).
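A minimal sanity check for a 5-field cron expression might look like the sketch below. This is deliberately simplistic; the server's validate-cron endpoint presumably does much more (ranges, lists, next-run computation):

```typescript
// Very small sketch of cron-expression validation (5 fields:
// minute hour day-of-month month day-of-week). Only '*', plain
// numbers and '*/n' steps are accepted; real validation is richer.
const CRON_RANGES: [number, number][] = [
  [0, 59], [0, 23], [1, 31], [1, 12], [0, 6],
];

function isValidCron(expr: string): boolean {
  const fields = expr.trim().split(/\s+/);
  if (fields.length !== 5) return false;
  return fields.every((field, i) => {
    if (field === '*') return true;
    const step = /^\*\/(\d+)$/.exec(field);       // e.g. */5
    if (step) return Number(step[1]) > 0;
    const n = Number(field);
    const [lo, hi] = CRON_RANGES[i];
    return Number.isInteger(n) && n >= lo && n <= hi;
  });
}
```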

Webhook routes

WorkflowAPI.registerDeploymentTriggers() inspects active deployments and, for each distinct webhook path, it:

  • Builds an environment-specific path:
    • /<env><webhook.path>, e.g. /dev/test-atlas-sdk, /sit/test-atlas-sdk.
  • Registers one Express POST handler per path (per environment).
  • On each request to that path, it:
    • Reloads the current set of active deployments for that path from storage (so new deploys/rollbacks take effect without server restarts).
    • Optionally enforces bearer/API-key auth using the configured secret.
    • Evaluates team configuration:
      • If any deployment for that path has useCaseTeamId set:
        • Requires header X-Atlas-Use-Case-Team-Id.
        • If missing → 400 with a clear error message.
        • If present but no deployment matches that team → 404.
      • If no deployment has useCaseTeamId:
        • Header is optional, but if provided and does not match anything → 404.
    • Selects the matching deployment and creates/uses a Temporal client bound to deployment.namespace (e.g. atlas-dev-<team>, atlas-sit-<team>).
    • Uses session_id from the incoming body (or generates one) to derive a workflowId for the Temporal workflow, enabling conversational sessions.
    • Reuses a running workflow for the same session via signal newMessage, or starts a new workflow if none exists.
    • Waits for workflow result and returns it as HTTP response.

This mechanism powers chat-like bots such as the jack.ai example while properly isolating tenants by useCaseTeamId + Temporal namespace.
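The team-routing rules above condense into a small decision table, sketched here as a pure function (not the actual handler code; Deployment is reduced to the fields the decision needs):

```typescript
// Sketch of the useCaseTeamId routing rules for one webhook path.
interface DeploymentLite {
  id: string;
  useCaseTeamId?: string;
}

type RouteResult =
  | { status: 200; deployment: DeploymentLite }
  | { status: 400 | 404; error: string };

function resolveTeamDeployment(
  deployments: DeploymentLite[],
  teamHeader?: string              // X-Atlas-Use-Case-Team-Id
): RouteResult {
  const anyTeamScoped = deployments.some(d => d.useCaseTeamId);
  // Any team-scoped deployment on the path makes the header mandatory.
  if (anyTeamScoped && !teamHeader) {
    return { status: 400, error: 'X-Atlas-Use-Case-Team-Id header is required' };
  }
  // If a header is present, it must match a deployment's team.
  if (teamHeader) {
    const match = deployments.find(d => d.useCaseTeamId === teamHeader);
    return match
      ? { status: 200, deployment: match }
      : { status: 404, error: 'No deployment for that team' };
  }
  // Legacy case: no team-scoped deployments and no header.
  const first = deployments[0];
  if (!first) return { status: 404, error: 'No deployments for this path' };
  return { status: 200, deployment: first };
}
```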


Storage Layer

Located under src/storage/* and compiled to dist/storage/*.

DeploymentStorage

Responsibilities:

  • Persist and retrieve deployment metadata and package blobs.
  • Generate version numbers for new deployments (auto-increment when necessary).
  • Maintain status (active, inactive, deprecated).

Backends:

  • Filesystem backend (for local/dev usage) – stores deployments under a local directory.
  • MinIO backend – uses an S3-compatible bucket for storing packages, and local extracted paths under /tmp/atlas-workflows/<env>/<id>/package.

Critical behavior:

  • On saveDeployment:

    • Requires useCaseTeamId and derives a Temporal namespace:

      atlas-<environment>-<useCaseTeamId>
    • Assigns a new version if none provided or if the target environment already has an active version for that same useCaseTeamId.

    • Persists metadata in a durable store (e.g., filesystem JSON/MinIO) including namespace and useCaseTeamId.

    • When listing or resolving deployments, prefers namespaced entries over legacy default-namespace deployments.

  • getDeploymentPackage(id):

    • Returns { code: <base64-tgz>, secrets?: <base64-env> }.
    • Used by DynamicWorker.ensurePackageExtracted and by promotions.
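The version-assignment rule can be sketched as below. The exact increment strategy is not specified here; a minor bump is assumed because the atlas list example elsewhere in this document shows 1.0.0 followed by 1.1.0:

```typescript
// Sketch of version assignment on saveDeployment. A minor bump is
// assumed (1.0.0 -> 1.1.0); the real DeploymentStorage strategy may differ.
function nextVersion(activeVersion: string | undefined, requested?: string): string {
  if (!activeVersion) return requested ?? '1.0.0';   // first deployment in this env/team
  const [major, minor] = activeVersion.split('.').map(Number);
  return `${major}.${minor + 1}.0`;                  // bump minor, reset patch
}
```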

WebhookConfigStorage and ScheduleConfigStorage

  • Store trigger configurations per deployment.
  • Allow WebhookManager and ScheduleManager to rebuild in-memory state on restart.

Triggers Layer

Located in src/triggers/* and compiled to dist/triggers/*.

WebhookManager

  • Holds the mapping between HTTP routes and workflow deployments.
  • Is integrated tightly with WorkflowAPI.registerDeploymentTriggers().
  • Delegates actual workflow execution to Temporal clients.

ScheduleManager

  • Manages Temporal schedules for workflows.
  • Provides methods to:
    • registerSchedule(workflowId, version, config, input).
    • getSchedules(), getScheduleInfo(id), pauseSchedule, unpauseSchedule, unregisterSchedule.
    • validateCronExpression.

Schedules trigger workflows on a cron-like basis, using the same deployment isolation semantics as webhooks.


Dynamic Worker and Isolation Model

File: src/worker/dynamic-worker.ts, compiled to dist/worker/dynamic-worker.js.

DynamicWorker is the core abstraction that:

  • Discovers active deployments.
  • Extracts their packages.
  • Starts one Temporal Worker instance per deployment, per environment.
  • Ensures task queue isolation (<name>-<env>).
  • Handles hot-reload when new deployments are created.

Key fields

  • workers: Map<string, Worker> – keyed by deployment ID.
  • connection: NativeConnection – shared Temporal worker connection.
  • clientConnection: Connection – shared client connection for namespace checks and management.
  • namespaceManager – manages Temporal namespaces, ensuring per-deployment namespaces like atlas-<env>-<useCaseTeamId> exist before workers start.
  • storage: DeploymentStorage – source of deployment metadata & packages.
  • notifier?: DeploymentNotifier – optional Redis-based hot-reload channel.

Lifecycle

start()

  1. Connects to Temporal via NativeConnection.
  2. Connects a Connection for namespace operations.
  3. Calls reloadDeployments() to start workers for all active deployments.
  4. If DeploymentNotifier is configured:
    • Subscribes to atlas:deployments Redis channel.
    • On event, fetches the deployment metadata and calls startWorkerForDeployment().
  5. Else, falls back to polling reloadDeployments() every 10 seconds.

reloadDeployments()

  • Fetches all active deployments from DeploymentStorage.
  • For each, calls startWorkerForDeployment(deployment).

startWorkerForDeployment(deployment)

  1. Computes workerId = deployment.id.

  2. If a worker already exists for this workerId, it does nothing (prevents duplicates on repeated reloads).

  3. Stops any old workers for the same deployment.name and deployment.environment:

    • Ensures only one active version per workflow per environment.
  4. Ensures the package is extracted (ensurePackageExtracted):

    • If deployment.packagePath does not exist, decodes the base64 .tgz from DeploymentStorage and extracts into that directory.
  5. Derives workflowsPath from deployment.packagePath + deployment.entrypoint.

  6. Loads secrets (loadSecretsForDeployment) before loading activities, so activities can read environment variables.

  7. Loads activities via loadActivitiesForDeployment.

  8. Constructs a deployment-specific task queue:

    const deploymentTaskQueue = `${deployment.name}-${deployment.environment}`;
  9. Creates a Temporal Worker:

    const worker = await Worker.create({
      connection: this.connection!,
      namespace: deployment.namespace ?? this.namespace,
      taskQueue: deploymentTaskQueue,
      workflowsPath,
      activities,
      identity: `atlas-worker-${deployment.name}-${deployment.version}-${deployment.environment}`,
      bundlerOptions: {
        webpackConfigHook: (config) => { ...alias setup... },
      },
    });
  10. Stores the worker in this.workers and starts worker.run() in the background.
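Steps 1–3 (dedupe by deployment ID, one active version per workflow + environment) can be sketched with a minimal in-memory registry. The worker objects are faked and the class name is hypothetical; the real code manages @temporalio/worker instances:

```typescript
// Minimal sketch of the worker-registry behavior in steps 1-3 above.
interface FakeWorker { name: string; environment: string; stopped: boolean }

class WorkerRegistry {
  private workers = new Map<string, FakeWorker>(); // keyed by deployment ID

  /** Returns false if a worker already exists for this deployment ID. */
  start(id: string, name: string, environment: string): boolean {
    if (this.workers.has(id)) return false;  // step 2: no duplicates on reload
    // Step 3: stop old workers for the same workflow + environment.
    for (const [oldId, w] of this.workers) {
      if (w.name === name && w.environment === environment) {
        w.stopped = true;
        this.workers.delete(oldId);
      }
    }
    this.workers.set(id, { name, environment, stopped: false });
    return true;
  }

  size(): number { return this.workers.size; }
}
```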

Bundler aliases for workflows

Inside webpackConfigHook:

  • Ensure workflows import the correct, workflow-safe SDK:

    const sdkWorkflowPath = path.join(__dirname, '../sdk/workflow-only.js');
    
    alias['fsai-atlas/workflow'] = sdkWorkflowPath;
    // (Optional legacy) alias['@atlas'] = sdkWorkflowPath;
    
    alias['@temporalio/workflow'] = path.join(
      deployment.packagePath,
      'node_modules/@temporalio/workflow'
    );

This guarantees that:

  • Workflow code uses the embedded workflow-only APIs shipped with this runtime, ensuring consistent behavior across deployments.
  • The worker uses the @temporalio/workflow version from the deployment’s own node_modules, preventing multiple versions of the Temporal workflow runtime from being bundled into the same isolate (which would break private fields).

Loading secrets

loadSecretsForDeployment(deployment):

  • If deployment.secrets (base64) is present:
    • Decodes to text (KEY=VALUE per line).
    • Parses and sets process.env[KEY] = VALUE for each.
    • Allows both activities and workflows (via activities) to read configuration such as API keys and DB credentials.

Loading activities

loadActivitiesForDeployment(deployment):

  • Computes:

    const activitiesPath = path.join(deployment.packagePath, 'dist', 'activities', 'index.js');
  • Installs a minimal module alias only for legacy @atlas imports:

    const Module = require('module');
    const originalResolveFilename = Module._resolveFilename;

    Module._resolveFilename = function (request, parent, isMain) {
      if (request === '@atlas') {
        return path.join(
          deployment.packagePath,
          'node_modules/fsai-atlas/dist/sdk/index.js'
        );
      }
      return originalResolveFilename.call(this, request, parent, isMain);
    };
  • Clears require cache for activitiesPath and requires it.

  • Restores the original resolver.

  • Logs the exported activity names and returns them as the activities object used by the Temporal worker.

Important: fsai-atlas imports are not aliased here. They resolve naturally to deployment.packagePath/node_modules/fsai-atlas, ensuring each deployment uses its own SDK version as installed by npm install in the workflow project.

Isolation properties

The combination of DynamicWorker + DeploymentStorage yields:

  • Per-deployment task queues:

    • "<workflowName>-<env>" task queues ensure that each environment (DEV, SIT, UAT, PRD) has separate workers and traffic.
  • Per-team Temporal namespaces:

    • Each deployment has a namespace like atlas-<env>-<useCaseTeamId>, isolating execution and visibility between teams/tenants.
  • Hot reload on deploy:

    • When a new deployment becomes active, old workers for the same workflow+env are shut down and their deployment files kept for potential rollback.
  • Per-deployment SDK:

    • Activities resolve fsai-atlas from the deployment’s own node_modules.
    • Workflow bundling uses the runtime’s workflow-only entrypoint but forces Temporal’s workflow runtime (@temporalio/workflow) to match the deployment.
  • Safe secrets handling:

    • Secrets are loaded from deployment metadata into process.env before activities are required, so activities always see the correct configuration for that deployment.

End-to-end example: jack.ai interview flow

This section walks through the complete path of a request in the jack.ai example, from CLI command or HTTP call all the way to Temporal workflows, activities, and logs.

Preconditions

  • Atlas server is running from the nodejs package root:

    cd /home/your-user/coding/atlas-temporal/nodejs
    npm run build && npm run start:dev
  • Temporal, Redis, MinIO and Postgres (if configured) are up via docker compose.

  • You have authenticated via:

    atlas login
  • The jack.ai workflow project exists at examples/atlas/jack.ai with a valid atlas.config.json and candidate.json.

Step 1 – Package and deploy the workflow

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai

atlas package
atlas deploy --env dev

Key effects:

  • atlas package compiles and bundles the project, producing:
    • .atlas/test-atlas-sdk.tgz – the workflow + node_modules.
    • .atlas/test-atlas-sdk.json – metadata including triggers and entrypoint.
  • atlas deploy --env dev uploads the package and calls DeploymentStorage.saveDeployment, which:
    • Assigns or increments the deployment version (e.g. 1.1.0).
    • Persists deployment metadata.
    • Stores the .tgz in MinIO or filesystem.
    • Updates atlas.config.json and .atlas/*.json with the final version.
  • On the server, WorkflowAPI:
    • Persists the deployment.
    • Calls registerDeploymentTriggers() to register /dev/test-atlas-sdk.
    • Triggers DynamicWorker.reloadDeployments().
  • DynamicWorker:
    • Extracts the package into /tmp/atlas-workflows/dev/<id>/package.
    • Loads secrets from deployment metadata into process.env.
    • Loads activities from dist/activities/index.js inside the package.
    • Starts a Temporal worker with task queue test-atlas-sdk-dev.

You can validate the active deployment with:

atlas list

Step 2 – Local dry-run with atlas execute --local

Before or after deploying, you can run the workflow locally without touching the server or MinIO:

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai
atlas execute --local --input '{}'

What happens:

  • CLI loads .env.
  • Bundles the workflow using the local dist/ build and aliases fsai-atlas/workflow to the local workflow-only SDK.
  • Starts a temporary Temporal worker in-process, executes the workflow once, and prints a JSON result like:
    {
      "status": "in_progress",
      "assistant_response": "<fast_thinking> ...",
      "session_id": "session-...",
      "message_count": 1
    }

This is useful for quickly iterating on prompts, activities and the overall conversation logic.

Step 3 – Remote execution via CLI

Once deployed to DEV, you can start a remote execution from the CLI:

cd /home/your-user/coding/atlas-temporal/examples/atlas/jack.ai
atlas execute test-atlas-sdk --input '{}'

Data flow:

  1. CLI sends POST /api/workflows/execute with workflowId = "test-atlas-sdk".
  2. WorkflowAPI:
    • Resolves the active DEV deployment for test-atlas-sdk.
    • Chooses taskQueue = "test-atlas-sdk-dev".
    • Starts a Temporal workflow via @temporalio/client.
  3. DynamicWorker already has a worker polling test-atlas-sdk-dev, so it picks up the workflow task and starts executing the workflow code.

In the server logs you will see entries such as:

  • Worker startup:

    🚀 Starting worker for: test-atlas-sdk v1.1.0
      ✅ Package already extracted
      Workflow: /tmp/atlas-workflows/dev/<id>/package/dist/workflows/index.js
      ✅ Secrets loaded
      Activities: loadCandidateInfo, initializeAssistant, sendMessage, ...
  • Activity-level logs from the example workflow:

    [INFO] [loadCandidateInfo] Loading candidate information { candidateFile: 'candidate.json' }
    [INFO] [loadCandidateInfo] Candidate data loaded { name: 'Matheus Balbino', position: 'Senior Software Engineer' }
    [INFO] [initializeAssistant] Initializing Jack with candidate context { ... }
    [Activity] Starting: openai.chat.gpt-4.1 { workflowId: '...', attempt: 1 }
    [Activity] Success: openai.chat.gpt-4.1 { duration: '...ms' }

You can inspect the execution in Temporal UI at http://localhost:8080 using the Execution ID printed by the CLI.

Step 4 – Remote execution via webhook

The same workflow can be driven purely via HTTP using the webhook registered for each environment (e.g. DEV, SIT, UAT), and routed by team via useCaseTeamId:

curl --location 'http://localhost:3000/dev/test-atlas-sdk' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer your-secret-token' \
  --header 'X-Atlas-Use-Case-Team-Id: 00000000-0000-0000-0000-000000000001' \
  --data '{}'

Data flow:

  1. Express receives the request on /dev/test-atlas-sdk.
  2. WorkflowAPI.registerDeploymentTriggers() handler:
    • Reloads the active deployments for that path from storage.
    • Validates authentication against the configured bearer secret.
    • If any deployment has useCaseTeamId, enforces the X-Atlas-Use-Case-Team-Id header and selects the matching deployment.
    • Uses deployment.namespace (e.g. atlas-dev-<team>, atlas-sit-<team>, atlas-uat-<team>) when creating the Temporal client.
  3. It then:
    • Derives a session_id from the body or generates one.
    • Builds a workflow ID like "test-atlas-sdk-dev-<session_id>".
    • Tries to attach to a running workflow for that session via getHandle(workflowId) and signal newMessage, or starts a new workflow if one does not exist.
  4. Waits for the workflow to produce a result and returns it as JSON.

This mechanism is what turns the jack.ai workflow into a stateful conversational assistant reachable over HTTP, with tenant isolation via Temporal namespaces and useCaseTeamId.
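The session-to-workflow mapping in step 3 is a deterministic ID derivation, sketched below. The ID format is taken from the example above; the generator for missing session IDs is illustrative:

```typescript
// Sketch of deriving a session-scoped Temporal workflow ID (step 3).
// randomUUID is used when the request body carries no session_id.
import { randomUUID } from 'node:crypto';

function deriveWorkflowId(
  workflowName: string,
  environment: string,
  body: { session_id?: string }
): { workflowId: string; sessionId: string } {
  const sessionId = body.session_id ?? `session-${randomUUID()}`;
  return { workflowId: `${workflowName}-${environment}-${sessionId}`, sessionId };
}
```

Because the same session_id always maps to the same workflow ID, a follow-up request can find the running workflow via getHandle and signal it instead of starting a new one.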


Summary

The nodejs package is the central runtime for Atlas:

  • SDK (fsai-atlas): workflow & activity APIs, AI/DB integrations, workflow-side helpers.
  • CLI (atlas): packaging, deployment, execution, authentication, environment management.
  • Server / HTTP API: deployment lifecycle, execution, webhooks, schedules.
  • Storage: filesystem/MinIO-backed deployment and trigger persistence.
  • Triggers: webhook & schedule orchestration feeding into Temporal.
  • Dynamic worker: strong isolation by deployment and environment with hot reload and per-deployment SDK resolution.