
text-to-mermaid

v0.0.7

Convert natural language text to Mermaid.js syntax using rule-based algorithms or Large Language Models

text-to-mermaid: Natural Language to Mermaid.js Conversion Engine

Philosophy: "What You See Is What You Code" (WYSIWYC)

Executive Summary

This project implements a Hybrid Architecture designed to act as a strict "Conversion Engine". It solves the core challenge of translating natural language (NL) into structured Mermaid.js diagramming code.

Unlike purely generative approaches that might hallucinate nodes to "complete the thought", this engine focuses on strict translation of the input text.

How It Works

The system offers two modes of operation, selectable via configuration:

1. Deterministic Core (Rule-Based)

Default Mode

For simple, linear flows, we use Compromise, a lightweight NLP library.

  • Logic: It identifies key entities (nouns) in the text and links them sequentially.
  • Mechanism: Input Text -> Extract Nouns -> Linear Graph (Node A -> Node B -> ...)
  • Pros:
    • Extremely fast (<15ms).
    • 100% Offline.
    • Zero hallucination (only visualizes what's explicitly named).
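The rule-based pipeline above can be sketched in a few lines. This is an illustration, not the library's implementation: the real engine uses Compromise for noun detection, while this sketch substitutes a naive stop-word filter (the `STOP_WORDS` set and `sanitize` helper are assumptions for the example):

```typescript
// Sketch of the deterministic pipeline: extract candidate nouns, then link
// them sequentially into a linear Mermaid graph. A crude stop-word filter
// stands in for the Compromise noun tagger used by the real library.
const STOP_WORDS = new Set([
  "the", "a", "an", "to", "of", "and", "or",
  "is", "are", "goes", "connects", // crude verb filtering for this sketch
]);

function sanitize(word: string): string {
  // Mermaid node ids: restrict to lowercase alphanumerics and underscores.
  return word.toLowerCase().replace(/[^a-z0-9]/g, "_");
}

function textToLinearMermaid(text: string): string {
  const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
  const nouns = words.filter((w) => !STOP_WORDS.has(w));
  const lines = ["graph TB"];
  for (let i = 0; i + 1 < nouns.length; i++) {
    lines.push(`  ${sanitize(nouns[i])} --> ${sanitize(nouns[i + 1])}`);
  }
  return lines.join("\n");
}

console.log(textToLinearMermaid("The server connects to the database."));
// -> graph TB
//      server --> database
```

Because the output is a pure function of the extracted entities, the same input always produces the same diagram, which is what makes the "zero hallucination" guarantee possible.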

2. Neural Mode (Generative AI)

Enable with useAI: true

For complex logic, decision trees, or non-linear relationships, the system utilizes Large Language Models (LLMs).

  • Supported Backends:
    • Google Gemini: Via @google/genai SDK.
    • Local LLMs: Via llama-server (or any OpenAI-compatible endpoint).
  • Constrained Decoding: Instead of free-text generation, we enforce a strict JSON Schema. The LLM MUST output a structured JSON object representing the node/edge constraints, which is then deterministically compiled to Mermaid syntax. This guarantees valid syntax.
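The constrained-decoding contract can be illustrated with a sketch (the `Graph` shape and `compileToMermaid` name are assumptions for this example, not the package's actual schema): the LLM is forced to emit JSON matching a fixed shape, and a deterministic compiler turns that JSON into Mermaid source, so syntax validity never depends on the model:

```typescript
// Assumed shape of the schema-constrained LLM output (illustrative only).
interface Graph {
  nodes: { id: string; label: string }[];
  edges: { from: string; to: string; label?: string }[];
}

// Deterministic compilation: every valid Graph maps to valid Mermaid syntax.
function compileToMermaid(g: Graph): string {
  const lines = ["graph TB"];
  for (const n of g.nodes) lines.push(`  ${n.id}["${n.label}"]`);
  for (const e of g.edges) {
    lines.push(
      e.label
        ? `  ${e.from} -->|${e.label}| ${e.to}`
        : `  ${e.from} --> ${e.to}`,
    );
  }
  return lines.join("\n");
}

// A branching graph like the one an LLM might emit for
// "If user is admin, go to dashboard, else go to login."
const branching: Graph = {
  nodes: [
    { id: "check", label: "user is admin?" },
    { id: "dash", label: "dashboard" },
    { id: "login", label: "login" },
  ],
  edges: [
    { from: "check", to: "dash", label: "yes" },
    { from: "check", to: "login", label: "no" },
  ],
};

console.log(compileToMermaid(branching));
```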

Usage

import { textToMermaid } from "text-to-mermaid";

// 1. Deterministic (Fast, Linear)
const diagram = await textToMermaid("The server connects to the database.");
// Output: graph TB ... server --> database

// 2. Neural (Complex, AI-Powered)
const complexDiagram = await textToMermaid(
  "If user is admin, go to dashboard, else go to login.",
  {
    useAI: true,
    aiConfig: {
      apiKey: process.env.GEMINI_API_KEY, // or via env var
    },
  },
);
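In both modes the return value is a plain Mermaid source string. One way to turn it into an image is to write it to a .mmd file and render with the Mermaid CLI (a separate tool, not part of this package):

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// `diagram` stands in for the string returned by textToMermaid above.
const diagram = "graph TB\n  server --> database";
writeFileSync("diagram.mmd", diagram);

// Then render to SVG with the Mermaid CLI (installed separately):
//   npx -p @mermaid-js/mermaid-cli mmdc -i diagram.mmd -o diagram.svg
```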

Running Local LLMs (llama.cpp)

You can use a local LLM instead of cloud APIs for privacy and zero cost.

  1. Install llama.cpp: https://github.com/ggml-org/llama.cpp
  2. Download a model (e.g., gemma-2-2b-it.Q4_K_M.gguf), or let llama-server fetch one from Hugging Face in the next step.
  3. Start the server (the -hf flag downloads the model from the given Hugging Face repo if it isn't cached locally):
    llama-server -hf ggml-org/gemma-3-1b-it-GGUF
  4. Use in your code:
    await textToMermaid("...", {
      useAI: true,
      aiConfig: {
        baseUrl: "http://localhost:8080", // Points to local server
      },
    });

Development

Install dependencies

npm install

Start development server

npm run dev

Run tests

npm test

Build library

npm run build