torken

v1.0.1

Compact, binary-encoded and encrypted data containers with built-in validation for integrity and expiration.

Torkens are compact, binary-encoded, and encrypted data containers with built-in validation for integrity and expiration. The goal? Store as much data as possible, as securely as possible, in the shortest possible tokens, while keeping serialization fast. That's why the entire core is written in pure C++.

Primarily designed for web applications (e.g., authentication tokens, access links, verification links), Torken can also be used for other secure data applications, such as encrypting game saves or mapping file permissions.

Try it out!

Usage

Node.js

// ES6
import { Torken } from "torken";
// CommonJS
const { Torken } = require("torken");

// Module resolution, if supported
import Torken from "torken/torken";

Browser / WebAssembly

When using a bundler like Vite, the WASM variant can be imported and used:

import { Torken } from "torken/browser";

In dev mode, Vite requires dependency optimization to be disabled for this module to work properly. To do this, add the following snippet to your vite.config.ts:

optimizeDeps: {
    exclude: ['torken']
}
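
For reference, a minimal vite.config.ts with that exclusion in place might look like the following sketch (assuming the standard defineConfig helper; merge it into your existing config):

import { defineConfig } from "vite";

// Keep "torken" out of dev-time dependency pre-bundling so the
// WASM module is served as-is.
export default defineConfig({
    optimizeDeps: {
        exclude: ['torken']
    }
});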

How It Works

Instead of using common serialization formats like JSON, Torkens rely on a typed, binary serialization system. This drastically reduces token length while enabling faster and more stable serialization and validation. The general, un-scrambled binary structure can be seen in the example displayed above, which happens to be the exact representation of the playground torken.

Since the core is written in C++ for speed and reliability, this library relies on transport layers to be fully functional in JavaScript environments. While Node.js supports native modules, the browser version runs via WebAssembly.

One drawback is that the current WASM file is relatively large. This is because the currently used ChaCha20 or AES256-GCM encryption depends on OpenSSL, which must be bundled with the WASM build, whereas the native Node.js version can utilize the system’s OpenSSL library.

And yes - the IAT timestamp is stored as a 32-bit integer, limiting start times to the year 2106. But realistically, I don't plan to maintain this from the grave, nor will I ignore improvements for the next 80+ years—so we’re good for now.

Integrity & Security

Unlike JWTs, the payload in a Torken is fully encrypted. Only metadata remains readable (with restrictions). Currently, ChaCha20 and AES256-GCM are the only supported encryption algorithms.

Once the header is built and the payload is serialized, the final encryption process involves:

  1. A 64-bit checksum is generated from the metadata and the payload.
  2. The payload is encrypted using ChaCha20 / AES256-GCM.
  3. The byte stream is scrambled with a key/seed.
  4. The final stream is base-encoded using a flexible alphabet.

The last two steps might seem unnecessary, but because the scrambler operates on a seed and the base converter respects alphabet order, they add an extra layer of security on top of the encryption.
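
As a rough conceptual sketch (the real pipeline runs inside the C++ core; the helper names and exact byte layout below are hypothetical, shown only to illustrate the order of operations), the four steps map to something like:

// Hypothetical helpers -- not the library's internal API.
declare function checksum64(meta: Buffer, payload: Buffer): Buffer;
declare function encryptPayload(payload: Buffer, key: string): Buffer; // ChaCha20 / AES256-GCM
declare function scramble(bytes: Buffer, seed: string): Buffer;
declare function baseEncode(bytes: Buffer, alphabet: string): string;

function buildTorken(meta: Buffer, payload: Buffer, key: string, seed: string, alphabet: string): string {
    const checksum = checksum64(meta, payload);               // 1. integrity checksum over meta + payload
    const encrypted = encryptPayload(payload, key);           // 2. encrypt the payload
    const stream = Buffer.concat([meta, checksum, encrypted]);
    const scrambled = scramble(stream, seed);                 // 3. seed-based byte scrambling
    return baseEncode(scrambled, alphabet);                   // 4. base-encode with a flexible alphabet
}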

API Reference

The following reference shows usage examples for the Node.js version. Examples for the browser / WASM version can be found on the example page.

The Tokenizer

Creating a Torken is as simple as importing the library and generating one. You can either use the global singleton or create an independent tokenizer instance.

import Torken from "torken/torken";

const SECRET_KEY: string = "<ENCRYPTION_SECRET>";
const TOKEN_IDENTIFIER: Buffer = Buffer.alloc(12).fill(0);
const payload = { userId: 42, admin: true }; // example payload (any serializable value)

const myTorken = Torken.encrypt(payload, SECRET_KEY, TOKEN_IDENTIFIER);
const decrypted = Torken.decrypt(myTorken, SECRET_KEY);

// If you want to use a custom instance
const instance = Torken.newInstance();
// ...do similar as you would with "Torken"

By default, encryption uses expiresIn=0, meaning the generated Torken never expires.

Key Resolvers

In some cases, the encryption key may depend on the identifier (e.g., user-specific encryption, versioning). To accommodate this, you can provide a resolver function instead of a static string key.

By default, this process is synchronous. However, if resolving the key involves asynchronous operations (such as database queries, API requests, or file I/O), the async variant can be used instead.

/** SYNC resolver */
const result1 = Torken.decrypt(torken, (identifier: Buffer): string => {
    const key: string = KeyStorage.fetch(identifier) ?? FALLBACK;
    return key;
});


/** ASYNC resolver */
// "client" and ObjectId are assumed to come from the "mongodb" driver,
// with "client" already connected.
async function resolver(identifier: Buffer): Promise<string> {
    const db = client.db("my_service");
    const collection = db.collection("user_secrets");

    const userId = new ObjectId(identifier);
    const result = await collection.findOne({_id: userId});
    if (!result) return FALLBACK;

    return result.key;
}
const result2 = await Torken.decryptAsync(torken, resolver);

Customizing the Generator

If you want to use a different alphabet, modify the scrambler key, or define custom validity constraints, you can pass an optional options object when encrypting and/or decrypting.

Keep in mind that the alphabet and scrambler must match during decryption. If they do not, the decryption process will fail.

Torken.setAlphabet("ABCDEFGabcdefg1234567");                              // GLOBAL
Torken.setScrambler("SCRAMBLE_BRAMBLE_yeahyeahyeah");                     // GLOBAL

const myTorken = Torken.encrypt(payload, SECRET_KEY, TOKEN_IDENTIFIER, { 
    validFrom: new Date(),
    expiresIn: 3600,
    alphabet: "0123456789abcdef",                                         // LOCAL
    scrambler: "Different scrambler key"                                  // LOCAL
});

const decrypted = Torken.decrypt(myTorken, SECRET_KEY, { 
    alphabet: "0123456789abcdef",                                         // LOCAL
    scrambler: "Different scrambler key"                                  // LOCAL
});

The Serializer

The Torken tokenizer utilizes a custom-built binary serializer that converts various input types into a byte stream (and back). Even though the C++ core can only work with fixed data types, all transport target implementations follow the same serialization protocol, ensuring compatibility across different platforms.

The serializer is exposed as part of the module, allowing users to serialize any payload outside of torken usage if desired. While most of the type detection occurs at the module's transport layer, numeric type detection is handled directly by the C++ core.

Arrays and objects are serialized as fixed-length lists with optional keys, enabling the Torken serializer to handle virtually any data structure, including nested arrays, objects, and buffers without any noticeable increase in data size.

import Serializer from "torken/serializer";

/** SERIALIZE **/
const a1 = Serializer.serialize("Hello from a string");
const a2 = Serializer.serialize(120); // this will serialize to int8_t
const a3 = Serializer.serialize([1, 2, [3, 4, "5", 6n], {nested: "yeah"}])

/** DESERIALIZE **/
const b1 = Serializer.deserialize(a1); // "Hello from a string"
const b2 = Serializer.deserializeDetailed(a2); // { type: SerialType.Int8, data: 120 }
const b3 = Serializer.deserialize(a3); // [1, 2, [3, 4, "5", 6n], {nested: "yeah"}]

/** ANY BUFFER **/
const c1 = Buffer.from([4, 0, 0, 204, 65]);
const c2 = Serializer.deserialize(c1); // => 25.5

Custom Serializers

Sometimes, you may need to serialize a class instance and later restore it. Instead of storing raw JSON strings — which would unnecessarily inflate the token size — you can register custom types with the serializer.

Custom types allow the serializer to automatically invoke the corresponding serialization and deserialization functions whenever a matching object type is detected. This gives you full control over how objects are serialized, while still enabling the tokenizer to work seamlessly with your class instances, eliminating the need to manually parse raw data.

The example below utilizes the built-in torken serializer for serialization. However, any class-to-buffer conversion algorithm (and vice versa) would work just as well, since this is essentially what the internal serializer does.

import Serializer from "torken/serializer";

class Client {
    private sales: string[] = [];
    constructor(
        public name: string,
        public age: number
    ) {}

    public serialize(): Buffer {
        return Serializer.serialize([
            this.name,
            this.age
        ]);
    }

    public static fromBuffer(buf: Buffer): Client {
        const [name, age] = Serializer.deserialize(buf);
        return new Client(name, age);
    }
}

Serializer.registerCustomType(
    Client,
    (cli) => cli.serialize(),
    (buf) => Client.fromBuffer(buf)
)
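
With the custom type registered, a round trip might look like the following sketch (assuming, as described above, that registered types are detected and restored automatically):

const alice = new Client("Alice", 42);

// The registered handlers are invoked automatically for Client instances.
const bytes = Serializer.serialize(alice);
const restored = Serializer.deserialize(bytes);

console.log(restored instanceof Client, restored.name); // true "Alice"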

BaseX-Encoder

The BaseX encoder used by the tokenizer is also available for direct use. This encoder was inspired by the base-x module and has been reimplemented in C++ for improved performance and compatibility with the torken core. Check the module’s description for more details on the underlying algorithm.

import BaseX from "torken/basex";

const baseEncoder = new BaseX("0123456789abcdef");
const encoded: string = baseEncoder.encode(Buffer.from([10, 255, 225, 35]));
const decoded: Buffer = baseEncoder.decode("affe123");

// Update the alphabet
baseEncoder.setAlphabet("AXu")

Buffer Scrambler

The scrambler mechanism used by Torken is also exposed for direct use. It works by generating a SHA-256 hash-based byte address LUT from a seed (key) and remapping the bytes accordingly. Since this is a reversible process, using the same key allows previously scrambled data to be unscrambled.
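
As a conceptual sketch only (not Torken's actual algorithm), such a seed-derived, reversible permutation could be built like this, using Node's built-in crypto module:

import { createHash } from "node:crypto";

// A SHA-256 digest of the seed drives a Fisher-Yates-style address table;
// applying the inverse mapping restores the original byte order.
function permutationTable(length: number, seed: string): number[] {
    const digest = createHash("sha256").update(seed).digest();
    const table = Array.from({ length }, (_, i) => i);
    for (let i = length - 1; i > 0; i--) {
        const j = digest[i % digest.length] % (i + 1);
        [table[i], table[j]] = [table[j], table[i]];
    }
    return table;
}

function scramble(data: Buffer, seed: string): Buffer {
    const table = permutationTable(data.length, seed);
    const out = Buffer.alloc(data.length);
    table.forEach((src, dst) => { out[dst] = data[src]; });
    return out;
}

function unscramble(data: Buffer, seed: string): Buffer {
    const table = permutationTable(data.length, seed);
    const out = Buffer.alloc(data.length);
    table.forEach((src, dst) => { out[src] = data[dst]; });
    return out;
}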

NOTICE: Due to memory access limitations in WASM / emscripten, the browser version behaves differently from the Node.js version.
While the Node variant scrambles a Buffer in place and returns a reference to the same object, the WASM version instead returns a new copy, leaving the input buffer untouched. This is unfortunate, as it increases memory usage, but at the moment there is no workaround I know of.

import Scrambler from "torken/scrambler";

const buffer = Buffer.from([1, 2, 3, 4, 5, 6, 7]);

/*! IN-PLACE OPERATIONS !*/

Scrambler.scramble(buffer, "ANY_secret_KEY"); 
// "buffer" is now [1, 3, 6, 5, 2, 4, 7]

Scrambler.unscramble(buffer, "ANY_secret_KEY"); 
// "buffer" is now [1, 2, 3, 4, 5, 6, 7]

TODO

  • [ ] Ensure cross-platform compatibility (extensive testing on a variety of machines is required for that)
  • [ ] Implement a dedicated crypto library to drop the OpenSSL dependency (useful for the WASM build, because roughly 85% of the .wasm file consists of linked OpenSSL code...)

License

MIT License

Copyright (c) 2025, Lukas A. Schopf // thelaumix productions

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.