
franz-kafka v0.7.5

A Node client for Kafka (http://incubator.apache.org/kafka/)

Example

var Kafka = require('franz-kafka')

var kafka = new Kafka({
	zookeeper: 'localhost:2181',
	compression: 'gzip',
	queueTime: 2000,
	batchSize: 200,
	logger: console
})

kafka.on('connect', function () {

	// topics are Streams
	var foo = kafka.topic('foo')
	var bar = kafka.topic('bar')

	// consume with a pipe
	foo.pipe(process.stdout)

	// or with the 'data' event
	foo.on('data', function (data) { console.log(data) })

	// produce with a pipe
	process.stdin.pipe(bar)

	// or just write to it
	bar.write('this is a message')

	// resume your consumer to get it started
	foo.resume()

	// don't forget to handle errors
	foo.on('error', function (err) { console.error("STAY CALM") })

})

To test the example, first get Kafka running: follow steps 1 and 2 of the Kafka quick start guide.

Then run node example.js to see messages getting produced and consumed.


API

Kafka

new

var Kafka = require('franz-kafka')

var kafka = new Kafka({
	brokers: [{              // an array of broker connection info
		id: 0,                 // the server's broker id
		host: 'localhost',
		port: 9092
	}],

	// producer defaults
	compression: 'none',     // default compression for producing
	maxMessageSize: 1000000, // limits the size of a produced message
	queueTime: 5000,         // milliseconds to buffer batches of messages before producing
	batchSize: 200,          // number of messages to bundle before producing

	// consumer defaults
	minFetchDelay: 0,        // minimum milliseconds to wait between fetches
	maxFetchDelay: 10000,    // maximum milliseconds to wait between fetches
	maxFetchSize: 300*1024,  // limits the size of a fetched message

	logger: null             // a logger that implements global.console (for debugging)
})

brokers

An array of connection info for all the brokers this client can communicate with.

compression

The compression used when producing to Kafka. May be 'none', 'gzip', or 'snappy'.

maxMessageSize

The largest size of a message produced to Kafka. If a message exceeds this size, the Topic will emit an 'error'. Note that batchSize affects the effective message size, because a batch of messages is bundled and produced as a single message.

queueTime

The number of milliseconds to buffer messages for bundling before producing to Kafka. This option works together with batchSize; whichever limit is reached first triggers a produce.

batchSize

The number of messages to bundle before producing to Kafka. This option works together with queueTime; whichever limit is reached first triggers a produce.
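
To make the interplay concrete, here is a rough sketch of the batching policy described above. This is illustrative only, not franz-kafka's actual internals:

// Illustrative sketch of the batching policy, not franz-kafka's internals.
// A produce fires on whichever limit is hit first: the batch fills up
// (batchSize) or the buffer timer expires (queueTime).
function makeBatcher(batchSize, queueTime, produce) {
	var queue = []
	var timer = null

	function flush() {
		if (timer) {
			clearTimeout(timer)
			timer = null
		}
		if (queue.length > 0) produce(queue.splice(0))
	}

	return function add(message) {
		queue.push(message)
		if (queue.length >= batchSize) {
			flush()                              // batchSize hit first
		} else if (!timer) {
			timer = setTimeout(flush, queueTime) // queueTime hit first
		}
	}
}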

minFetchDelay

The minimum time in milliseconds to wait between fetch requests to Kafka. When a fetch returns zero messages, the client begins exponential backoff between requests, up to maxFetchDelay, until messages are available again.

maxFetchDelay

The maximum time in milliseconds to wait between fetch requests to Kafka once exponential backoff has begun.
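
In other words, the delay between empty fetches presumably grows roughly like the sketch below (an assumption based on the description above, not the library's exact code):

// Hypothetical sketch of the fetch backoff described above.
function nextFetchDelay(currentDelay, gotMessages, minFetchDelay, maxFetchDelay) {
	if (gotMessages) return minFetchDelay        // reset once data arrives
	var doubled = Math.max(currentDelay, 1) * 2  // exponential backoff
	return Math.min(doubled, maxFetchDelay)      // capped at maxFetchDelay
}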

maxFetchSize

The maximum size of a fetched message. If a fetched message is larger than this size the Topic will emit an 'error' event.

connect

Connects to the Kafka cluster and runs the callback once connected.

kafka.connect(function () {
	console.log('connected')
	//...
})

topic

Get a Topic for consuming or producing. The first argument is the topic name; the second is an options object.

var foo = kafka.topic('foo', {
	// default options
	minFetchDelay: 0,      // defaults to the kafka.minFetchDelay
	maxFetchDelay: 10000,  // defaults to the kafka.maxFetchDelay
	maxFetchSize: 1000000, // defaults to the kafka.maxFetchSize
	compression: 'none',   // defaults to the kafka.compression
	batchSize: 200,        // defaults to the kafka.batchSize
	queueTime: 5000,       // defaults to the kafka.queueTime
	partitions: {
		consume: ['0-0:0'],  // array of strings with the form 'brokerId-partitionId:startOffset'
		produce: ['0:1']     // array of strings with the form 'brokerId:partitionCount'
	}
})

partitions

This structure describes which brokers and partitions the client will connect to for producing and consuming.

consume

An array of partitions to consume from, and the offset to begin consuming at, in the form 'brokerId-partitionId:startOffset'. For example, broker 2, partition 3, offset 5 is '2-3:5'.

produce

An array of brokers to produce to, with the count of partitions, in the form 'brokerId:partitionCount'. For example, broker 3 with 8 partitions is '3:8'.
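
These descriptors are plain strings, so they are easy to build programmatically. The helper names below are made up for illustration:

// Hypothetical helpers for building the partition descriptor strings.
function consumeDescriptor(brokerId, partitionId, startOffset) {
	return brokerId + '-' + partitionId + ':' + startOffset
}

function produceDescriptor(brokerId, partitionCount) {
	return brokerId + ':' + partitionCount
}

consumeDescriptor(2, 3, 5) // => '2-3:5'
produceDescriptor(3, 8)    // => '3:8'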

events

connect

Fires when the client is connected to a broker.

Topic

A topic is a Stream that may be Readable for consuming and Writable for producing. Retrieve a topic from the kafka instance.

var topic = kafka.topic('a topic')

pause

Pause the consumer stream.

resume

Resume the consumer stream.

destroy

Destroy the consumer stream.

setEncoding

Sets the encoding of the data emitted by the 'data' event.
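
For example, to consume messages as utf8 strings rather than Buffers, using only the Topic API described here:

var topic = kafka.topic('foo')
topic.setEncoding('utf8')      // 'data' will now emit strings
topic.on('data', function (message) {
	console.log('got: ' + message)
})
topic.resume()                 // start the consumer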

write

Write a message to the topic. Returns false if the message buffer is full.
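
When write returns false, the idiomatic move is to stop writing and wait for the 'drain' event (documented below) before continuing. A small sketch, assuming the usual EventEmitter methods on a Stream:

// Sketch: produce an array of messages while respecting backpressure.
function writeAll(topic, messages) {
	var i = 0
	function writeSome() {
		while (i < messages.length) {
			if (!topic.write(messages[i++])) {
				topic.once('drain', writeSome) // buffer full; wait for 'drain'
				return
			}
		}
	}
	writeSome()
}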

end

Same as write.

pipe

Pipe the stream of messages to the next Writable Stream.

events

data

Fires for each message. Data is a Buffer by default, or a string if setEncoding was called.

drain

Fires when the producer stream can handle more messages.

error

Fires when there is a produce or consume error.

License

BSD