@flowcraft/kafka-adapter

v1.4.2

[![NPM Version](https://img.shields.io/npm/v/@flowcraft/kafka-adapter.svg)](https://www.npmjs.com/package/@flowcraft/kafka-adapter) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Flowcraft Adapter for Kafka & Cassandra

This package provides a distributed adapter for Flowcraft designed for high-throughput environments. It uses Apache Kafka for streaming job processing, Apache Cassandra for scalable and fault-tolerant state persistence, and Redis for high-performance coordination.

Features

  • High-Throughput Execution: Built for demanding workloads by leveraging the performance of Kafka and Cassandra.
  • Streaming Job Processing: Uses Apache Kafka to manage the flow of jobs as a continuous stream of events.
  • Fault-Tolerant State: Leverages Apache Cassandra's distributed architecture to ensure workflow context is highly available and durable.
  • High-Performance Coordination: Uses Redis for atomic operations required for complex patterns like fan-in joins.
  • Workflow Reconciliation: Includes a reconciler utility to detect and resume stalled workflows, ensuring fault tolerance in production environments.

Installation

You need to install the core flowcraft package along with this adapter and its peer dependencies.

npm install flowcraft @flowcraft/kafka-adapter kafkajs cassandra-driver ioredis

Prerequisites

To use this adapter, you must have the following infrastructure provisioned:

  • An Apache Kafka cluster with a topic for jobs (a provisioning sketch follows this list).
  • An Apache Cassandra cluster with a keyspace and two tables (one for context, one for status).
  • A Redis instance accessible by your workers (required for the coordination store to handle atomic operations like fan-in joins and distributed locking).
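
Kafka Topic Example:

The jobs topic can be provisioned ahead of time with kafkajs's admin client. This is only a sketch: the partition count and replication factor are assumptions to adapt to your cluster, and the topic name matches the one used in the Usage example below.

// one-off provisioning script (sketch)
import { Kafka } from 'kafkajs'

const kafka = new Kafka({ brokers: ['kafka-broker:9092'] })
const admin = kafka.admin()

await admin.connect()
await admin.createTopics({
  topics: [
    {
      topic: 'flowcraft-jobs',  // should match the adapter's topicName
      numPartitions: 3,         // assumption: tune for your expected throughput
      replicationFactor: 1,     // assumption: raise for production clusters
    },
  ],
})
await admin.disconnect()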

Cassandra Table Schema Example:

-- For context data
CREATE TABLE your_keyspace.flowcraft_contexts (
    run_id text PRIMARY KEY,
    context_data text
);

-- For final status
CREATE TABLE your_keyspace.flowcraft_statuses (
    run_id text PRIMARY KEY,
    status text,         -- required by the reconciler (see Prerequisites for Reconciliation)
    status_data text,
    updated_at timestamp
);

Usage

The following example shows how to configure and start a worker.

import { KafkaAdapter, RedisCoordinationStore } from '@flowcraft/kafka-adapter'
import { Client as CassandraClient } from 'cassandra-driver'
import { FlowRuntime } from 'flowcraft'
import Redis from 'ioredis'
import { Kafka } from 'kafkajs'

// 1. Define your workflow blueprints and registry
const blueprints = { /* your workflow blueprints */ }
const registry = { /* your node implementations */ }

// 2. Initialize service clients
const kafka = new Kafka({ brokers: ['kafka-broker:9092'] })
const cassandraClient = new CassandraClient({
	contactPoints: ['cassandra-node:9042'],
	localDataCenter: 'datacenter1',
})
const redisClient = new Redis('YOUR_REDIS_CONNECTION_STRING')

// 3. Create the runtime (its options are passed to the adapter below)
const runtime = new FlowRuntime({ blueprints, registry })

// 4. Set up the coordination store
const coordinationStore = new RedisCoordinationStore(redisClient)

// 5. Initialize the adapter
const adapter = new KafkaAdapter({
	runtimeOptions: runtime.options,
	coordinationStore,
	kafka,
	cassandraClient,
	keyspace: 'your_keyspace',
	contextTableName: 'flowcraft_contexts',
	statusTableName: 'flowcraft_statuses',
	topicName: 'flowcraft-jobs', // Optional
	groupId: 'flowcraft-workers', // Optional
})

// 6. Start the worker to connect to Kafka and begin consuming jobs
adapter.start()

console.log('Flowcraft worker with Kafka adapter is running...')
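
A shutdown method on the adapter is not documented in this README, but the service clients created above can still be closed cleanly when the process is asked to stop. A minimal sketch (any adapter-specific teardown is an assumption, left as a comment):

// graceful shutdown sketch: close the clients created above on termination signals
const shutdown = async () => {
	// If the adapter exposes a stop/disconnect method, call it here first
	// (not documented in this README, so treated as an assumption).
	await cassandraClient.shutdown() // cassandra-driver: closes all connections
	await redisClient.quit()         // ioredis: sends QUIT and closes the socket
	process.exit(0)
}

process.on('SIGTERM', shutdown)
process.on('SIGINT', shutdown)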

Components

  • KafkaAdapter: The main adapter class that connects to Kafka as a consumer and producer, processes jobs with the FlowRuntime, and sends new jobs to the topic.
  • CassandraContext: An IAsyncContext implementation that stores and retrieves workflow state as a JSON blob in a Cassandra table (see the sketch after this list).
  • RedisCoordinationStore: An ICoordinationStore implementation that uses Redis for atomic operations.
  • createKafkaReconciler: A utility function for creating a reconciler that queries Cassandra for stalled workflows and resumes them.
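
Because CassandraContext stores workflow state as a JSON blob, a run's context can also be inspected directly with cassandra-driver, which is handy for debugging. A sketch against the schema example above (the run id is a placeholder):

// debugging sketch: read a workflow run's context straight from Cassandra
const runId = 'some-run-id' // placeholder
const result = await cassandraClient.execute(
  'SELECT context_data FROM your_keyspace.flowcraft_contexts WHERE run_id = ?',
  [runId],
  { prepare: true },
)
const context = result.rows.length > 0 ? JSON.parse(result.rows[0].context_data) : null
console.log(context)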

Reconciliation

The Kafka adapter includes a reconciliation utility that helps detect and resume stalled workflows. This is particularly useful in production environments where workers might crash or be restarted.

Prerequisites for Reconciliation

To use reconciliation, your status table must include status and updated_at fields that track workflow state. The adapter automatically updates these fields during job processing.

Usage

import { createKafkaReconciler } from '@flowcraft/kafka-adapter'

// Create a reconciler instance
const reconciler = createKafkaReconciler({
  adapter: myKafkaAdapter,
  cassandraClient: myCassandraClient,
  keyspace: 'my_keyspace',
  statusTableName: 'flowcraft_statuses',
  stalledThresholdSeconds: 300, // 5 minutes
})

// Run reconciliation
const stats = await reconciler.run()
console.log(`Found ${stats.stalledRuns} stalled runs, reconciled ${stats.reconciledRuns} runs`)
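
A reconciliation pass is a single run, so in a long-running deployment you would typically schedule it. A minimal sketch with a plain timer (the interval is an assumption):

// run a reconciliation pass every 60 seconds (interval is an assumption; tune per workload)
const RECONCILE_INTERVAL_MS = 60_000

setInterval(async () => {
  try {
    const stats = await reconciler.run()
    if (stats.stalledRuns > 0) {
      console.log(`Reconciled ${stats.reconciledRuns}/${stats.stalledRuns} stalled runs (${stats.failedRuns} failed)`)
    }
  } catch (err) {
    console.error('Reconciliation pass failed', err)
  }
}, RECONCILE_INTERVAL_MS)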

Reconciliation Stats

The reconciler returns detailed statistics:

interface ReconciliationStats {
  stalledRuns: number    // Number of workflows identified as stalled
  reconciledRuns: number // Number of workflows successfully resumed
  failedRuns: number     // Number of reconciliation attempts that failed
}

How It Works

The reconciler queries the status table for workflows with status = 'running' that haven't been updated within the threshold period. For each stalled workflow, it:

  1. Loads the workflow's current state from the context table
  2. Determines which nodes are ready to execute based on completed predecessors
  3. Acquires appropriate locks to prevent race conditions
  4. Sends jobs for ready nodes to the Kafka topic

This ensures that workflows can be resumed even after worker failures or restarts.

Note: The query uses ALLOW FILTERING which may be inefficient on large datasets. For production use, consider adding a secondary index on the status column.
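
For reference, the stalled-run lookup described above corresponds roughly to a query of the following shape. This is an illustration based on the schema example earlier, not the adapter's exact implementation:

// rough shape of the stalled-run lookup (illustrative only)
const stalledThresholdSeconds = 300
const cutoff = new Date(Date.now() - stalledThresholdSeconds * 1000)

const stalled = await cassandraClient.execute(
  `SELECT run_id FROM your_keyspace.flowcraft_statuses
   WHERE status = 'running' AND updated_at < ? ALLOW FILTERING`,
  [cutoff],
  { prepare: true },
)
console.log(stalled.rows.map((row) => row.run_id))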

License

This package is licensed under the MIT License.