Ultra Tiny Compiler

This is an ultra tiny compiler that compiles any C-like expression into Lisp.
With all comments removed, the source is fewer than 90 lines of actual code. By the way, you are viewing the source code itself (yes, this readme file is also the source code). It's written in literate CoffeeScript, fully tested, and published on npm. You can install it via npm install ultra-tiny-compiler.

What can this compiler do?

Examples

| Input               | Output                 |
|---------------------|------------------------|
| 1 + 2 * 3           | (+ 1 (* 2 3))          |
| 1 + exp(i * pi)     | (+ 1 (exp (* i pi)))   |
| pow(1 + 1 / n, n)   | (pow (+ 1 (/ 1 n)) n)  |

Theory

We begin by describing the grammar of C-like expressions, more precisely a context-free grammar. A context-free grammar has four components:

  1. A set of terminal symbols (tokens).
  2. A set of nonterminals (syntactic variables).
  3. A set of productions, where each production consists of a nonterminal, called the head or left side of the production, an arrow, and a sequence of terminals and/or nonterminals, called the body or right side of the production.
  4. A designation of one of the nonterminals as the start symbol.

For notational convenience, productions with the same nonterminal as the head can have their bodies grouped, separated by the symbol |.

Here is our grammar for C-like expressions:

expr → expr + term | expr - term | term
term → term * factor | term / factor | factor
factor → ( expr ) | atom | call
call → atom ( list )
list → list , expr | expr
atom → [a-z0-9]+

Here expr, term, factor, ... are nonterminals; +, -, *, /, (, ), ... are terminals; and expr is the start symbol.

Usually a compiler consists of several parts: a lexer, a parser, and a code generator. The lexer reads the input stream, splits it into tokens, and passes them to the parser. The parser uses the grammar to build a parse tree and generate an abstract syntax tree (AST for short). The AST resembles the parse tree to an extent; however, in an AST the interior nodes represent programming constructs, while in a parse tree the interior nodes represent nonterminals. The code generator traverses the AST and generates code.

But we are going to simplify our compiler and combine the parse phase with the code generation phase. To accomplish this we will use syntax-directed definitions (semantic actions) to translate expressions into postfix notation.

expr → expr + term {puts("+")}
expr → expr - term {puts("-")}
atom → [a-z0-9]+ {puts(atom)}
...

The expression 9-5+2 will be translated into 95-2+ with these semantic actions.
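For example, 9-5+2 parses as (9-5)+2, and each action fires once its production body has been matched, so the actions run in this order (a hand trace):

puts("9") puts("5") puts("-") puts("2") puts("+")

printing 95-2+.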

For parsing we will use a simple form of recursive-descent parsing, called predictive parsing, in which the lookahead symbol unambiguously determines the flow of control through the procedure body for each nonterminal. It is possible for a recursive-descent parser to loop forever: a problem arises with left-recursive productions like expr → expr + term.

A left-recursive production can be eliminated by rewriting the offending production:

A → Aα | β

into right-recursive productions (where R is a new nonterminal):

A → βR
R → αR | ∅

For example, applying this to expr → expr + term | term gives α = + term and β = term, which is exactly how exprR arises below.

Using this technique we can transform our grammar into a new one:

expr → term exprR
exprR → + term exprR | - term exprR | ∅
term → factor termR
termR → * factor termR | / factor termR | ∅
factor → ( expr ) | atom | call
call → atom ( list )
list → expr listR
listR → , expr listR | ∅
atom → [a-z0-9]+
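For example, here is a hand derivation of 9-5+2 with the new grammar (collapsing term → factor → atom into a single step; at each point the lookahead picks the production):

expr
→ term exprR
→ 9 exprR
→ 9 - term exprR
→ 9 - 5 exprR
→ 9 - 5 + term exprR
→ 9 - 5 + 2 exprR
→ 9 - 5 + 2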

Semantic actions embedded in the production are simply carried along in the transformation, as if they were terminals.

Okay, let's write some code. We need only one function, compile(input: string), which will do all the work.

compile = (input) -> 

First we start with the lexer, which splits the input sequence of characters into tokens.
For example, the input string pow(1, 2 + 3) will be transformed into the array ['pow', '(', '1', ',', '2', '+', '3', ')'].

  tokens = []
  

Each of our tokens will be a single character; only atoms can be any number of letters or digits.

  is_atom = /[a-z0-9]/i
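A couple of illustrative checks (written as comments, so the compiled source is unchanged):

  # is_atom.test "p"    # true:  letters can appear in atoms
  # is_atom.test "42"   # true:  digits too
  # is_atom.test "+"    # false: operators are tokenized separately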

Iterate through the characters of the input. Note that inside this loop there may be another loop which also increments i.

  i = 0
  while i < input.length 
    switch char = input[i]

When we meet one of the following characters, push it as a token and continue to the next character.

      when "+", "-", "*", "/", "(", ")", ","
        tokens.push char
        i++

Skip whitespace.

      when " "
        i++

If the character is anything else,

      else

Loop through the following characters until we encounter one that is not part of an atom, collecting them into a single token.

        if is_atom.test char
          tokens.push do ->
            value = ''
            while char and is_atom.test char
              value += char
              char = input[++i]
            value

Throw an error on an unknown character.

        else
          throw "Unknown input char: #{char}"

We need a function which gives us the next token from the token stream. Let's call it next.

  next = -> tokens.shift()

The lookahead is a very important part of our compiler: it allows us to determine what kind of production we are hitting next.

  lookahead = next()

Another important function matches a given terminal against the lookahead terminal and moves the lookahead to the next token.

  match = (terminal) ->
    if lookahead == terminal then lookahead = next()
    else throw "Syntax error: Expected token #{terminal}, got #{lookahead}"
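For instance (an illustrative trace, written as comments): if the lookahead is "1" and the next token is "+":

  # match "1"   # ok: lookahead becomes "+"
  # match "*"   # throws "Syntax error: Expected token *, got 1"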

Sometimes we are going to hit a wrong production, and we need a function which allows us to return to the previous state.

  recover = (token) ->
    tokens.unshift lookahead
    lookahead = token

Next, we are going to write our production rules. Each nonterminal corresponds to a function call, and each terminal corresponds to a match call. Also, we omit the ∅ productions.
The expr function represents the following production rule:
expr → term exprR

  expr = ->
    term(); exprR()

We will use the lookahead to determine which production to apply. Here are also our first semantic actions, which put + or - onto the stack. Make sure the ordering of the semantic actions is preserved.
exprR → + term {puts("+")} exprR | - term {puts("-")} exprR | ∅

  exprR = ->
    if lookahead == "+"
      match("+"); term(); puts("+"); exprR()
    else if lookahead == "-"
      match("-"); term(); puts("-"); exprR()

term → factor termR

  term = ->
    factor(); termR()

termR → * factor {puts("*")} termR | / factor {puts("/")} termR | ∅

  termR = -> 
    if lookahead == "*"
      match("*"); factor(); puts("*"); termR()
    else if lookahead == "/"
      match("/"); factor(); puts("/"); termR()

Next comes a tricky production rule. First, we look ahead for a (, which means we are currently at an expression in brackets. Second, we try the production rule for a function call. Third, if the function call production fails, we treat the current token as an atom.
factor → ( expr ) | atom | call

  factor = ->
    if lookahead == "("
      match("("); expr(); match(")")
    else 
      atom() unless call() 

The call production should allow recovery in case we chose it incorrectly. Remember the current token, and move to the next token with match. If a ( comes next, we chose the call production correctly; otherwise we recover to the previous state. Before we hit the list of arguments, we put a special mark onto the stack, which lets us know how many arguments the list contains.
call → atom ( list )

  call = ->
    token = lookahead
    match(lookahead)
    if lookahead == "("
      puts(false, "mark"); match("("); list(); match(")"); puts(token, "call")
      true
    else
      recover(token)
      false
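Two illustrative traces (written as comments):

  # pow(1, 2): token = "pow"; after match the lookahead is "(",
  #   so the guess was right and the mark, the arguments, and the
  #   call expression all end up on the stack.
  # x + 1: token = "x"; after match the lookahead is "+", so
  #   recover(token) pushes "+" back into tokens, restores the
  #   lookahead to "x", and call returns false, letting factor
  #   fall back to atom().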

list → expr listR

  list = ->
    expr(); listR();

listR → , expr listR | ∅

  listR = ->
    if lookahead == ","
      match(","); expr(); listR();

At the bottom lies the atom production. If we went down to this production, we expect to get an atom.
Otherwise, it's a syntax error.
atom → [a-z0-9]+

  atom = ->
    if is_atom.test lookahead
      match(puts(lookahead, "atom"))
    else
      throw "Syntax error: Unexpected token #{lookahead}"
      

In the semantic rules we use the puts function, which records operators and atoms onto a stack in reverse Polish notation.
But instead of recording the entire program in RPN, we are going to do code generation on the fly. For that, we must understand how the postfix algorithm works.

For example, suppose we have a stream of tokens: 1, 2, 3, +, -

  1. Push 1 onto the stack.
  2. Push 2 onto the stack.
  3. Push 3 onto the stack.
  4. When + comes in, pop two values (3, 2) from the stack and push (+ 2 3) onto it.
  5. When - comes in, pop two values ((+ 2 3), 1) from the stack and push (- 1 (+ 2 3)) back onto it.

The generated code will be on top of the stack, and the stack size will be one if the stream was complete.

  stack = []
  puts = (token, type="op") ->
    switch type

When an operator comes in, pop two values from the stack, generate code for that operator, and push the generated code back onto the stack.

      when "op"
        op = token
        y = stack.pop()
        x = stack.pop()
        stack.push "(#{op} #{x} #{y})"

Do the same thing for a call, but instead of taking two values from the stack, take all values until false shows up. false is the special mark that tells us where the arguments end.

      when "call"
        func = token
        args = []
        while arg = stack.pop()
          break unless arg
          args.unshift arg
        stack.push "(#{func} #{args.join(' ')})"

Any other atoms or marks are pushed onto the stack as they appear.

      else 
        stack.push token

Return the token from the puts function so callers can reuse it.

    token

At this point we have described all the functions we need, and now we can start parsing and compilation at the same time.

  expr()

If parsing goes well, we end up with a stack containing only one item: our compiled code. Let's return it from the function.

  stack[0]

Expose it to the world.

module.exports = compile
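A quick usage sketch from another file (illustrative):

  compile = require 'ultra-tiny-compiler'

  console.log compile "1 + 2 * 3"          # (+ 1 (* 2 3))
  console.log compile "pow(1 + 1 / n, n)"  # (pow (+ 1 (/ 1 n)) n)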

You can try the compiler online here.