
@lapidist/dtif-parser

Canonical parser and runtime for the Design Token Interchange Format (DTIF). The package provides the reference pipeline for loading, validating, normalising, and resolving DTIF documents while emitting structured diagnostics for tooling and automation workflows.

Documentation: Using the DTIF parser

Installation

npm install @lapidist/dtif-parser

The package targets modern Node runtimes (v22+) and is published as a native ESM module.

Usage

import { parseDocument } from '@lapidist/dtif-parser';

const result = await parseDocument('tokens.json');

for (const diagnostic of result.diagnostics) {
  console.error(`${diagnostic.severity}: ${diagnostic.message}`);
}

const resolved = result.resolver?.resolve('#/color/brand/primary');
console.log(resolved?.value);

To flatten tokens, collect metadata, and normalise diagnostics in a single step, use the parseTokens helper. It loads the document, builds the dependency graph, and returns resolved token snapshots alongside a flattened view of the document.

import { parseTokens } from '@lapidist/dtif-parser';

const { flattened, metadataIndex, resolutionIndex, diagnostics } = await parseTokens('tokens.json');

for (const token of flattened) {
  console.log(token.pointer, token.value);
}

Pass onDiagnostic to observe parser diagnostics as they are produced, and warn to intercept non-fatal issues. Both callbacks receive domain DiagnosticEvent objects, allowing you to format or surface them immediately without waiting for the promise to resolve.

await parseTokens('tokens.json', {
  onDiagnostic: (diagnostic) => {
    console.error(diagnostic.message);
  },
  warn: (diagnostic) => {
    console.warn('[warn]', diagnostic.message);
  }
});

Provide a TokenCache implementation, such as the built-in InMemoryTokenCache, to reuse flattening and resolution results across runs, or to back synchronous parsing with parseTokensSync when your inputs are already available in memory.
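
A minimal sketch of reusing a cache between runs. It assumes InMemoryTokenCache is exported from the package root and that the cache is supplied to parseTokens via a cache option; check the documentation for the exact option name:

import { parseTokens, InMemoryTokenCache } from '@lapidist/dtif-parser';

const cache = new InMemoryTokenCache();

// The option name below is an assumption; consult the documentation for
// how your parser version expects the cache to be supplied.
const first = await parseTokens('tokens.json', { cache });
const second = await parseTokens('tokens.json', { cache }); // warm run can reuse cached artefacts

console.log(first.flattened.length === second.flattened.length);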

Create a session with createSession to reuse caches, install custom document loaders, register plugins, or parse multiple collections with shared state.

Document loader limits

The built-in DefaultDocumentLoader enforces a maximum document size to protect tooling from accidentally loading unbounded payloads. The default limit is 5 MiB. Provide the maxBytes option when constructing a loader or session to apply a stricter, positive byte threshold:

import { createSession, DefaultDocumentLoader } from '@lapidist/dtif-parser';

const session = createSession({
  loader: new DefaultDocumentLoader({ maxBytes: 256 * 1024 })
});

Only finite, positive numbers are accepted; any other value (such as 0, negative numbers, NaN, or Infinity) is rejected with a RangeError at loader construction. When a payload exceeds the active limit, loading fails with a DocumentLoaderError whose reason is MAX_BYTES_EXCEEDED and whose limit property reflects the enforced cap.
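
For example, an invalid limit fails up front rather than at load time; a small sketch of the validation described above:

import { DefaultDocumentLoader } from '@lapidist/dtif-parser';

try {
  // Rejected at construction: the limit must be a finite, positive number.
  new DefaultDocumentLoader({ maxBytes: 0 });
} catch (error) {
  console.error(error instanceof RangeError); // true
}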

When enabling HTTP(S) loading, configure httpAllowedHosts to restrict which hosts may be fetched and httpTimeoutMs to bound remote requests. Requests to non-allow-listed hosts fail with a DocumentLoaderError whose reason is HTTP_HOST_NOT_ALLOWED, and hung requests are aborted with a TimeoutError driven by the loader's timeout.
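
A sketch of a remote-loading configuration, assuming httpAllowedHosts and httpTimeoutMs are accepted alongside maxBytes as DefaultDocumentLoader options (the exact option surface may differ; the host name is illustrative):

import { createSession, DefaultDocumentLoader } from '@lapidist/dtif-parser';

const session = createSession({
  loader: new DefaultDocumentLoader({
    // Requests to hosts outside this list fail with reason HTTP_HOST_NOT_ALLOWED.
    httpAllowedHosts: ['tokens.example.com'],
    // Hung requests are aborted with a TimeoutError after this many milliseconds.
    httpTimeoutMs: 5000
  })
});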

Each pipeline stage emits domain DiagnosticEvent objects instead of throwing. Results aggregate every diagnostic (including cache hits) so tooling can stream warnings via onDiagnostic/warn hooks, persist them for later inspection, or format them with formatDiagnostic.
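
For instance, collected diagnostics can be rendered after the fact; this sketch assumes formatDiagnostic is exported from the package root and takes a single DiagnosticEvent, returning a printable string:

import { parseTokens, formatDiagnostic } from '@lapidist/dtif-parser';

const { diagnostics } = await parseTokens('tokens.json');

for (const diagnostic of diagnostics) {
  // Render each aggregated diagnostic for human-readable output.
  console.log(formatDiagnostic(diagnostic));
}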

Node adapter

For Node-based tooling, import the bundled adapter to read DTIF token files from disk with extension validation, formatted diagnostics, and ready-to-use token documents:

import { parseTokensFromFile, readTokensFile } from '@lapidist/dtif-parser/adapters/node';

try {
  const result = await parseTokensFromFile('tokens/base.tokens.json', {
    onWarn: (message) => console.warn(message)
  });
  console.log(result.flattened.length);
} catch (error) {
  // DtifTokenParseError exposes the normalised diagnostics for reporting
}

const document = await readTokensFile('tokens/base.tokens.json');

Architecture overview

  • createSession coordinates the loader, schema guard, normaliser, graph builder, and resolver for each request. Sessions keep caches and plugins in sync across parses.
  • Domain caches receive RawDocumentIdentity keys and ensure decoded bytes, AST snapshots, and flattened token artefacts can be reused safely between runs.
  • Diagnostic events surface from every stage and persist in token cache entries so warm parses provide the same visibility as cold runs.
  • Helper APIs (parseTokens, parseTokensSync, createMetadataSnapshot, and createResolutionSnapshot) layer on snapshot builders without bypassing the session lifecycle.

Development

Command line interface

The workspace publishes a dtif-parse binary for quick inspection and CI pipelines:

dtif-parse tokens/base.tokens.json --resolve "#/color/brand/primary"

Use dtif-parse --help for the full list of options and output formats.

License

MIT © Lapidist