
wikiparser-node v1.36.0

A Node.js parser for MediaWiki markup with AST

WikiParser-Node



Introduction

WikiParser-Node is an offline Wikitext parser developed by Bhsd for the Node.js environment. It can parse almost all wiki syntax into an Abstract Syntax Tree (AST) (Try it online), allows easy querying and modification of the AST, and serializes the modified tree back to wikitext.

Although WikiParser-Node was not originally designed to convert Wikitext to HTML, it provides a limited capability to do so. Here is a list of example HTML pages rendered using this package.

Other Versions

Mini (also known as WikiLint)

This version provides a CLI but retains only the parsing and linting functionality; the parsed AST cannot be modified. It powers the Wikitext LSP, which provides multiple language services for editors such as VS Code, Sublime Text, and Helix.

A list of available linting rules can be found here.

Browser-compatible

This browser-compatible version can be used for code highlighting or as a linting plugin, in conjunction with editors such as CodeMirror and Monaco (Usage example). It has been integrated into the official MediaWiki CodeMirror extension since Release 1.45.

Installation

Node.js

Install whichever version you need (WikiParser-Node or WikiLint), for example:

npm i wikiparser-node

or

npm i wikilint

Browser

You can load the script from a CDN, for example:

<script src="//cdn.jsdelivr.net/npm/wikiparser-node"></script>

or

<script src="//unpkg.com/wikiparser-node/bundle/bundle-lsp.min.js"></script>

For more browser extensions, please refer to the corresponding documentation.

Usage

CLI usage

For MediaWiki sites with the CodeMirror extension installed, such as different language editions of Wikipedia and other Wikimedia Foundation-hosted sites, you can use the following command to obtain the parser configuration:

npx getParserConfig <site> <script path> [user] [force]
# For example:
npx getParserConfig jawiki https://ja.wikipedia.org/w

The generated configuration file will be saved in the config directory. You can then assign the site name to Parser.config:

// For example:
Parser.config = 'jawiki';

API usage

Please refer to the Wiki.

Performance

A full database dump (*.xml.bz2) scan of English Wikipedia's ~19 million articles (parsing and linting) on a personal MacBook Air takes about 5 hours.

Known issues

Parser

  1. Memory leaks may occur in rare cases.
  2. Invalid page names with Unicode characters are treated like valid ones (Example).
  3. Preformatted text with a leading space is only processed by Token.prototype.toHtml.
  4. BCP 47 language codes are not supported in language conversion (Example).

HTML conversion

Extension

  1. Many extensions are not supported, such as <indicator> and <ref>.
  2. & needs to be escaped in <syntaxhighlight> (Example).

Transclusion

  1. Some parser functions are not supported.
  2. New lines in {{localurl:}} are not handled correctly (Example).

Heading

  1. The table of contents (TOC) is not supported.

HTML tag

  1. Style sanitization is sometimes different (Example).
  2. Table fostered content from <table> HTML tags (Example).

Table

  1. <caption> elements are wrapped in <tbody> elements (Example).
  2. Unclosed HTML tags in the table fostered content (Example).
  3. <tr> elements should not be fostered (Example).

Link

  1. Link trail is not supported (Example).
  2. Block elements inside a link should break it into multiple links (Example).
  3. Invalid or missing images (Examples 1, 2).
  4. Link starting with ../ on a subpage (Example).

External link

  1. External images are not supported (Examples 1, 2).
  2. No percent-encoding in displayed free external links (Example).

Block element

  1. Incomplete <p> wrapping when there are block elements (e.g., <pre>, <div> or even closing tags).
  2. Mixed lists (Example).

Language conversion

  1. Automatic language conversion is not supported.
  2. Support for manual language conversion is minimal (Example).

Miscellaneous

  1. Illegal HTML entities (Example).