

@hinw/vdf-parser

A simple parser for Valve's KeyValue text file format (VDF, https://developer.valvesoftware.com/wiki/KeyValues). Written in TypeScript.

Supports both parsing the input into an object and piping a readable stream through the parser, which streams out key-value pairs.
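
Each streamed pair carries the full key path plus the value, matching the stream examples further below. As a rough TypeScript sketch (the interface name here is illustrative, not necessarily a type exported by the library):

// Illustrative shape of each streamed pair; the name VdfKeyValuePair is an
// assumption, not the library's exported type.
interface VdfKeyValuePair {
    keyParts: string[]; // full key path, e.g. ['key', 'nested_key']
    value: string;      // VDF values are plain strings
}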

Installation

Pick your favorite package manager. For example, using npm:

npm install @hinw/vdf-parser
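
You can also install it with yarn or pnpm:

yarn add @hinw/vdf-parser
pnpm add @hinw/vdf-parser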

Interfaces

Parse as an object

Parse a file

import { VdfParser } from '@hinw/vdf-parser';

// file content: "key" { "nested_key" "value" }
const filePath = 'input/sample.vdf';

const parser = new VdfParser();
const result = await parser.parseFile(filePath);

// assert.deepStrictEqual(result, { key: { nested_key: 'value' } });

Parse a read stream

import stream from 'node:stream';
import { VdfParser } from '@hinw/vdf-parser';

const readStream = stream.Readable.from(`"key" { "nested_key" "value" }`);
const parser = new VdfParser();
const result = await parser.parseStream(readStream);

// assert.deepStrictEqual(result, { key: { nested_key: 'value' } });

Parse a string

import { VdfParser } from '@hinw/vdf-parser';

const input = `"key" { "nested_key" "value" }`;
const parser = new VdfParser();
const result = await parser.parseText(input);

// assert.deepStrictEqual(result, { key: { nested_key: 'value' } });

Stream interface

Pipe to the parser and build the object from the stream output

import fs from 'node:fs';
import { VdfParser } from '@hinw/vdf-parser';

// file content: "key" { "nested_key" "value" }
const filePath = 'input/sample.vdf';

const parser = new VdfParser();
const fileStream = fs.createReadStream(filePath);
const parserStream = fileStream.pipe(parser);

for await (const pair of parserStream) {
    // assert.deepStrictEqual(pair, { keyParts: ['key', 'nested_key'], value: 'value' });
}

Condense the pairs to an object using condensePairs or condensePairsAsync

// You can build the object from the stream using the async iterator interface:
const fileStream = fs.createReadStream(filePath);
const parserStream = fileStream.pipe(parser);
const result = await parser.condensePairsAsync(parserStream);

// Or you can collect the pairs first, and build the object afterwards with the normal iterator interface:
const fileStream = fs.createReadStream(filePath);
const parserStream = fileStream.pipe(parser);
const pairs = await Array.fromAsync(parserStream);
const result = parser.condensePairs(pairs);

Options

Escape sequence

By default, the parser handles escape sequences. To disable this behavior, set disableEscape to true.

import { VdfParser } from '@hinw/vdf-parser';

const input = `"\\"quoted key\\"" "value"`;
const parser = new VdfParser({ disableEscape: true });
const result = await parser.parseText(input);

// assert.deepStrictEqual(result, { '\\"quoted key\\"': 'value' });
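
For comparison, here is a sketch of the default behavior (escape handling enabled), where the \" sequences should be unescaped in the parsed key; the expected result shown in the comment is an assumption rather than output copied from the library's documentation:

import { VdfParser } from '@hinw/vdf-parser';

const input = `"\\"quoted key\\"" "value"`;
const parser = new VdfParser(); // escape handling is on by default

const result = await parser.parseText(input);

// Assumed result: escaped quotes are unescaped in the parsed key.
// assert.deepStrictEqual(result, { '"quoted key"': 'value' });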

Handling duplicated keys

Separated maps

If the values of the duplicated keys are all maps, they will be merged together:

import { VdfParser } from '@hinw/vdf-parser';

const input = `"key" { "nested_key" "value" } "key" { "nested_key_2" "value" }`;
const parser = new VdfParser({ useLatestValue: true });
const result = await parser.parseText(input);

// assert.deepStrictEqual(result, { key: { nested_key: 'value', nested_key_2: 'value' } });

Map vs normal value

By default, the parser will use the earliest seen value for the duplicated keys.

import { VdfParser } from '@hinw/vdf-parser';

const input = `"key" { "nested_key" "value" } "key" "value"`;
const parser = new VdfParser();
const result = await parser.parseText(input);

// assert.deepStrictEqual(result, { key: { nested_key: 'value' } });

However, you can set useLatestValue to true to use the latest seen value instead.

import { VdfParser } from '@hinw/vdf-parser';

const input = `"key" { "nested_key" "value" } "key" "value"`;
const parser = new VdfParser({ useLatestValue: true });
const result = await parser.parseText(input);

// assert.deepStrictEqual(result, { key: 'value' });

Misc

No type conversion

Since the VDF specification does not contain any type information, converting values to specific types would be guesswork. To keep this library simple, it does not provide an option for automatic type detection or conversion. Users consuming a VDF file should know its schema well, so converting values to the right types is their responsibility rather than this library's guesswork.
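
For example, here is a minimal sketch of converting values yourself after parsing; the keys max_players and dedicated are hypothetical and used only for illustration:

import { VdfParser } from '@hinw/vdf-parser';

// Hypothetical schema with a numeric and a boolean-like field.
const input = `"server" { "max_players" "16" "dedicated" "1" }`;
const parser = new VdfParser();
const result = await parser.parseText(input);

// Every parsed value is a string; convert based on your knowledge of the schema.
const server = result.server;
const maxPlayers = Number(server.max_players); // 16
const isDedicated = server.dedicated === '1';  // true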