

Jalex

Just Another Lexer is a Jison-compatible lexical analyzer with a streaming interface.

Basic Usage

Create an instance of Jalex and add rules to match against the input.

var Jalex = require("jalex");
var lex = new Jalex();

// match tokens
lex.addRule(/[A-Za-z_]\w+/, (match) => {
    return 'ID';
});

// ignore whitespace
lex.addRule(/\s+/, () => { });

// interact using Node Streaming interface
lex.pipe(process.stdout);
lex.write("  token");

Jalex will execute the handler for the longest matching rule.
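
For instance, in the sketch below (using the same API as above), both rules match at the start of "forever", but the identifier rule produces the longer match, so its handler runs and 'ID' is returned rather than 'FOR':

var Jalex = require("jalex");
var lex = new Jalex();

// a keyword rule and a general identifier rule that overlap
lex.addRule(/for/, () => {
    return 'FOR';
});
lex.addRule(/[A-Za-z_]\w+/, (match) => {
    return 'ID';
});

// ignore whitespace
lex.addRule(/\s+/, () => { });

lex.pipe(process.stdout);
// "forever" matches both rules; the identifier rule wins because its match is longer
lex.write("forever");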

addRule

The addRule function accepts a regular expression and a callback; both arguments are required.
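
Rules that should produce no token still need a callback; an empty function is the conventional way to discard input. Continuing the snippet above (the comment pattern here is only illustrative):

// skip line comments without emitting a token
lex.addRule(/\/\/[^\n]*/, () => { });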

As Jison Lexer

Jalex can be used as a custom lexer for Jison.

var Jalex = require('jalex');
var lex = new Jalex();

lex.addRule(/[A-Za-z_]\w+/, (match) => {
    return 'ID';
});
lex.addRule(/\s+/, () => { });

var Parser = require('jison').Parser;
var parser = new Parser({
    bnf: {
        S: [["ID", "return 'token';"]]
    }
});

// set Jalex as the lexer for Jison
parser.lexer = lex;
var result = parser.parse("   token");

Setting yytext

The this value inside an addRule callback is the current instance of the lexer (use a function expression rather than an arrow function so that this is bound).

The yytext property can be set on the lexer to forward a value to Jison.

lex.addRule(/\w+/, function(match) {
    this.yytext = match;
    return 'ID';
});

var parser = new Parser({
    bnf: {
        S: [["ID", "return yytext;"]]
    }
});
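
As a minimal continuation of the two snippets above (assuming the Parser and rules already defined), hooking the lexer to the parser lets the grammar action read the forwarded value:

parser.lexer = lex;

// the ID rule above stored the matched text in yytext,
// so the action's return value should be that text
var result = parser.parse("hello");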

Source mapping with yyloc

As the lexer consumes input, the yyloc object on the Jalex instance is updated to indicate the position of the current match. It has the following values to assist with source mapping:

  • first_index: the index in the input file where the current match starts
  • last_index: the index in the input file where the current match ends
  • first_line: the line on which the current match begins
  • last_line: the line on which the current match ends
  • first_column: the index on the first line where the current match begins
  • last_column: the index on the last line where the current match ends

The yyloc values should not be altered: they are updated incrementally as input is consumed rather than recomputed from scratch, so changing them would corrupt subsequent positions.
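
As a sketch of reading these values (the rule and input here are illustrative), a handler can inspect this.yyloc through the this binding described in the previous section:

var Jalex = require("jalex");
var lex = new Jalex();

lex.addRule(/\w+/, function(match) {
    // report where the current match starts; read the values, never write them
    var loc = this.yyloc;
    console.log(match + " starts at line " + loc.first_line + ", column " + loc.first_column);
    return 'ID';
});
lex.addRule(/\s+/, () => { });

lex.pipe(process.stdout);
lex.write("one two\nthree");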

Special tokens

Jalex handles two special tokens: EOF, which it returns at the end of input, and REJECT, which rule handlers can return to reject a match.

EOF

Jalex returns the EOF token when it reaches the end of the input.
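
A small sketch, assuming Jalex follows the standard Node writable-stream API implied by write and pipe above, so that ending the stream marks the end of input:

var Jalex = require("jalex");
var lex = new Jalex();

lex.addRule(/\w+/, () => {
    return 'ID';
});
lex.addRule(/\s+/, () => { });

lex.pipe(process.stdout);
lex.write("last token");

// no more input is coming, so Jalex can now produce the EOF token
lex.end();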

REJECT

If the handler for the longest matching rule returns a REJECT token, Jalex falls back to the next longest matching rule, and continues in that fashion until a handler returns a non-REJECT token, a handler returns nothing (an empty rule), or no more rules match the input. In the last case, lexing resumes at the next input character.

var Jalex = require("jalex");
var lex = new Jalex();

lex.addRule(/\s+/, () => { });
lex.addRule(/\w+/, function(match) {
    return 'ID';
});

// look for 'simple example' but reject to allow 'simple' and 'example'
// to match as IDs
lex.addRule(/simple example/, () => {
    console.log("\n'simple example' found.");
    return 'REJECT';
});

lex.pipe(process.stdout);
lex.write("a simple example word capture");