tokemon

A Node.js library for reading streamed JSON.


📍 Introduction

tokemon is an open source Node.js library written in TypeScript for extracting fields from streamed JSON.

When working with LLMs, a common use case is having them respond with JSON, which is then processed in a subsequent step. This is straightforward when the LLM returns its response all at once. However, streamed responses make things more challenging. Because streamed JSON arrives token by token, it remains incomplete and invalid until the stream finishes - so calling JSON.parse() isn't an option. You may still need a way to extract certain fields from the JSON stream, even before it is fully complete.

In scenarios where response times are a driving factor, such as chatbots, you want to deliver answers to the client as quickly as possible. That's where tokemon comes in. It lets you extract specific fields from tokenwise streamed JSON, enabling you to process data early and, for example, stream relevant values back to the client while the LLM is still generating its remaining output tokens.

Note: tokemon can only extract fields that exist at the top level of the streamed JSON. It cannot correctly extract fields nested inside child objects.
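For illustration, the shape below (with hypothetical field names) shows what this limitation means in practice:

// Hypothetical response shape, for illustration only.
const shape = {
  answer: '...', // top-level field: can be targeted, e.g. new StringExtractor('answer')
  meta: {
    topic: '...' // nested inside a child object: cannot be extracted correctly
  }
};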

📦 Installation

Install via npm:

npm i --save tokemon

🔨 Usage

tokemon can be easily integrated with common LLM libraries such as Ollama, OpenAI, and Gemini. All you have to do is pass the async iterator returned by the LLM to an extractor's extract() method and provide a mapper function that converts the LLM's response to a string.

The following example shows how to use tokemon with Ollama.

import ollama, { type ChatResponse } from 'ollama';
import { StringExtractor } from 'tokemon';

const response = await ollama.chat({
  model: 'llama3.1',
  messages: [
    {
      role: 'system',
      content: `
      You are a question / answer bot.
      Your task is to answer the given question in max. 3 sentences and identify an appropriate topic.
      Respond in JSON format like this:
      {
        "answer": "<answer>",
        "topic": "<topic>"
      }
      `
    },
    { role: 'user', content: 'How does an LLM work?' }
  ],
  stream: true, // Enable streaming
  format: {
    type: 'object',
    properties: {
      answer: { type: 'string' },
      topic: { type: 'string' }
    }
  }
});

const asyncIter = response[Symbol.asyncIterator]();

// Extract 'answer' field
const extractor = new StringExtractor('answer');

// Mapper to convert ChatResponse to string
const mapper = (res: ChatResponse): string => res.message.content;

for await (const token of extractor.extract(asyncIter, mapper)) {
  process.stdout.write(token);
}

The extract method yields every new token of the target field returned by the LLM. So in this example, the value of the answer field is written tokenwise to the console as soon as it appears in the LLM's response.

Of course, you're not limited to JSON generated by an LLM. All you need is an async iterator that yields JSON tokens.
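As a minimal sketch (not taken from the examples above), the snippet below feeds a hand-built async generator of JSON chunks into a StringExtractor, assuming an identity mapper is sufficient when the iterator already yields strings:

import { StringExtractor } from 'tokemon';

// Hypothetical token source: yields a JSON document in small string chunks.
async function* jsonTokens(): AsyncGenerator<string> {
  yield '{"answ';
  yield 'er": "Hello';
  yield ', world!", "topic": "greeting"}';
}

const extractor = new StringExtractor('answer');

// Identity mapper: the iterator already yields plain strings.
const mapper = (chunk: string): string => chunk;

for await (const token of extractor.extract(jsonTokens()[Symbol.asyncIterator](), mapper)) {
  process.stdout.write(token); // prints "Hello, world!" piece by piece
}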

You can find more examples here.

📄 API

The following provides an overview of tokemon's API. The core components are its extractors. All extractors extend the BaseExtractor class, which provides shared properties and methods.

You can find the TypeDoc documentation here.

BaseExtractor

Properties

| Name   | Description                                | Type   | Default |
| ------ | ------------------------------------------ | ------ | ------- |
| buffer | Property containing the streamed JSON code | string | ''      |

Methods

extractor.extract(iter, mapper);
  • Parameters:
    • iter
      • Type: AsyncIterator<T>
      • The async iterator of the LLM stream
    • mapper
      • Type: (value: T, index: number) => string
      • The mapper function that converts an LLM message to a string (see the sketch below)
  • Returns an AsyncIterable<string> that yields the extracted tokens
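For illustration, a mapper for an OpenAI-style chat completion stream could look like the sketch below; the chunk shape (choices[0].delta.content) comes from the openai SDK's streaming API and is an assumption here, not something tokemon prescribes:

import OpenAI from 'openai';
import { StringExtractor } from 'tokemon';

const openai = new OpenAI();

const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Answer in JSON with "answer" and "topic": How does an LLM work?' }
  ],
  stream: true
});

const extractor = new StringExtractor('answer');

// Assumed chunk shape of the openai SDK: new tokens arrive in choices[0].delta.content.
const mapper = (chunk: OpenAI.Chat.Completions.ChatCompletionChunk): string =>
  chunk.choices[0]?.delta?.content ?? '';

for await (const token of extractor.extract(stream[Symbol.asyncIterator](), mapper)) {
  process.stdout.write(token);
}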

Events

The BaseExtractor class, in turn, extends the EventEmitter class, allowing it to emit events that you can subscribe to.

| Name            | Description                                |
| --------------- | ------------------------------------------ |
| FIELD_COMPLETED | Emitted when the target field is completed |
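As a hedged sketch, subscribing could look like the following; the event name string is assumed to match the table above and may instead be exposed as an exported constant (see the TypeDoc documentation):

import { StringExtractor } from 'tokemon';

const extractor = new StringExtractor('answer');

// Assumption: the event is emitted under the name listed above.
extractor.on('FIELD_COMPLETED', () => {
  console.log('answer field fully received:', extractor.value);
});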

StringExtractor

Properties

| Name  | Description                                        | Type                | Default   |
| ----- | -------------------------------------------------- | ------------------- | --------- |
| value | Property containing the value of the target field  | string \| undefined | undefined |

NumberExtractor

Properties

| Name  | Description                                        | Type                | Default   |
| ----- | -------------------------------------------------- | ------------------- | --------- |
| value | Property containing the value of the target field  | number \| undefined | undefined |

BooleanExtractor

Properties

| Name  | Description                                        | Type                 | Default   |
| ----- | -------------------------------------------------- | -------------------- | --------- |
| value | Property containing the value of the target field  | boolean \| undefined | undefined |
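The value property appears to work the same way across the three extractors: it defaults to undefined and, presumably once the target field is complete, holds the typed value. A minimal sketch with NumberExtractor (the field name and stream contents are hypothetical):

import { NumberExtractor } from 'tokemon';

// Hypothetical stream containing a numeric 'confidence' field.
async function* tokens(): AsyncGenerator<string> {
  yield '{"answer": "Yes", "conf';
  yield 'idence": 0.8';
  yield '7}';
}

const extractor = new NumberExtractor('confidence');

for await (const token of extractor.extract(tokens()[Symbol.asyncIterator](), (chunk: string) => chunk)) {
  process.stdout.write(token); // raw tokens of the 'confidence' field, if needed early
}

// After the loop, the fully parsed value is available as a number.
console.log(extractor.value); // 0.87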

🧩 Contributing

Any contribution is appreciated! See CONTRIBUTING.md

🔑 License

tokemon is released under the MIT license.