
CongraphDB

"SQLite for Graphs" — A high-performance, embedded graph database for Node.js built with Rust


CongraphDB is an embedded, serverless graph database designed for local-first applications. Built with Rust for memory safety and extreme performance, it provides native Node.js bindings via napi-rs.

⚠️ Early Stage Development: This project is in active development (v0.1.x). While the core architecture is in place, some features are still being implemented. See the Roadmap for planned features.

📚 New users: We recommend starting with congraphdb-sdk for examples and best practices.

Features

  • 🚀 Embedded & Serverless — No separate database process. Store data locally in a single .cgraph file.
  • High Performance — Rust-powered with memory-mapped I/O, columnar storage, and vectorized execution.
  • 🔍 Dual Query Interface — Cypher graph query language or a JavaScript-native API for flexible development.
  • 🤖 AI-Ready — Built-in HNSW index for vector similarity search on embeddings.
  • 📦 Easy Distribution — Prebuilt binaries for Windows, macOS, and Linux via npm.
  • 💾 ACID Transactions — Serializable transactions with write-ahead logging.
  • 🔒 Memory Safe — Built with Rust — no segfaults, no memory leaks.
  • 🛤️ Path Finding — BFS-based shortestPath() and allShortestPaths() functions for graph traversal.
  • 📄 Document API — createChunk(), createEntity(), createFact() methods for RAG workflows.
  • 🔐 Lock Manager — Deadlock prevention with timeout-based lock coordination.
  • 📊 SQL DDL Support — CREATE TABLE and INSERT INTO syntax alongside Cypher.
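The path-finding feature above is described as BFS-based. As a plain-JavaScript sketch of that idea only — an in-memory adjacency list, not CongraphDB's API or storage engine — breadth-first search finds a minimum-hop path like this:

```javascript
// Minimal BFS shortest-path sketch over an in-memory adjacency list.
// Illustrates the BFS idea behind shortestPath(); CongraphDB's engine
// operates on its own columnar storage, not plain objects like these.
function shortestPath(graph, start, goal) {
  const queue = [start]
  const parent = new Map([[start, null]])
  while (queue.length > 0) {
    const node = queue.shift()
    if (node === goal) {
      // Walk the parent pointers back to reconstruct the path
      const path = []
      for (let n = goal; n !== null; n = parent.get(n)) path.unshift(n)
      return path
    }
    for (const next of graph[node] || []) {
      if (!parent.has(next)) {
        parent.set(next, node)
        queue.push(next)
      }
    }
  }
  return null // goal unreachable
}

const graph = {
  alice: ['bob', 'carol'],
  bob: ['dave'],
  carol: ['dave'],
  dave: ['erin'],
}
console.log(shortestPath(graph, 'alice', 'erin')) // → [ 'alice', 'bob', 'dave', 'erin' ]
```

Because BFS expands the frontier one hop at a time, the first time it reaches the goal is guaranteed to be over a minimum number of edges.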

Performance

Last Run: April 10, 2026 | View congraphdb-benchmark for detailed results.

CongraphDB ranks #1 across all dataset sizes in comprehensive benchmarks:

Small Dataset (10K nodes, 50K edges)

| Rank | Engine | Score | Ingestion | Traversal | PageRank | Memory |
|:----:|:-------------|:------:|:---------:|:---------:|:--------:|:------:|
| 🥇 | CongraphDB | 84.8 | 114K/s | 🥇 0.7ms | 🥇 0.0s | 128MB |
| 🥈 | SQLite | 47.6 | 17K/s | 1.3ms | 1.6s | 142MB |
| 🥉 | LevelGraph | 43.2 | 4K/s | 1.6ms | 🥇 0.0s | 336MB |
| 4 | Graphology | 41.9 | 21K/s | 2.3ms | 🥇 0.1s | 354MB |

Medium Dataset (100K nodes, 500K edges)

| Rank | Engine | Score | Ingestion | Traversal | PageRank | Memory |
|:----:|:-------------|:------:|:---------:|:---------:|:--------:|:------:|
| 🥇 | CongraphDB | 100.0 | 🥇 118K/s | 🥇 0.5ms | 🥇 14.5s | 🥇 385MB |
| 🥈 | Kuzu | 70.0 | 85K/s | 🥇 0.7ms | 32.0s | 720MB |
| 🥉 | Neo4j | 68.4 | 92K/s | 🥇 0.7ms | 24.5s | 1850MB |
| 4 | Graphology | 57.8 | 72K/s | 🥇 0.6ms | 78.0s | 980MB |
| 5 | SQLite | 30.3 | 42K/s | 6.1ms | 68.0s | 680MB |

Large Dataset (1M nodes, 5M edges)

| Rank | Engine | Score | Ingestion | Traversal | PageRank | Memory |
|:----:|:-------------|:------:|:---------:|:---------:|:--------:|:------:|
| 🥇 | CongraphDB | 100.0 | 🥇 110K/s | 🥇 0.6ms | 🥇 168.0s | 🥇 3250MB |
| 🥈 | Neo4j | 72.7 | 88K/s | 🥇 0.8ms | 285.0s | 8200MB |
| 🥉 | Kuzu | 70.9 | 82K/s | 🥇 0.8ms | 380.0s | 5800MB |
| 4 | Graphology | 57.7 | 68K/s | 🥇 0.8ms | 920.0s | 8500MB |
| 5 | SQLite | 28.6 | 38K/s | 13.0ms | 850.0s | 5200MB |

CongraphDB is optimized for:

  • Local-first applications — No network overhead
  • Multi-core systems — Parallel query execution using Rayon
  • Analytical workloads — Columnar storage for fast aggregations
  • Vector similarity — HNSW algorithm for ANN search with O(log n) complexity
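For intuition on the vector-similarity point above: an HNSW index approximates, in roughly O(log n) time, the exact O(n) brute-force scan sketched below. This is an illustrative plain-JavaScript baseline, not CongraphDB's API:

```javascript
// Brute-force nearest-neighbor search by cosine similarity: the O(n)
// baseline that an HNSW index approximates in roughly O(log n) time.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

function nearest(vectors, query) {
  let best = null
  let bestScore = -Infinity
  for (const [id, vec] of Object.entries(vectors)) {
    const score = cosine(vec, query)
    if (score > bestScore) { bestScore = score; best = id }
  }
  return best
}

// Toy 3-dimensional "embeddings" — real embeddings have hundreds of dims.
const embeddings = {
  cat: [0.9, 0.1, 0.0],
  dog: [0.8, 0.2, 0.1],
  car: [0.0, 0.1, 0.9],
}
console.log(nearest(embeddings, [0.9, 0.1, 0.0])) // → 'cat'
```

HNSW trades a small amount of recall for avoiding this full scan, which is what makes similarity search practical over large embedding sets.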

Running Benchmarks

To reproduce or update benchmarks, use the congraphdb-benchmark package:

```bash
# Clone benchmark repo
git clone https://github.com/congraph-ai/congraphdb-benchmark.git
cd congraphdb-benchmark
npm install
```

Workflow 1: Just Run Benchmarks

```bash
npm run benchmark          # or benchmark:small, benchmark:medium, etc.
```

This runs the benchmarks and saves the results to results/benchmark-*.json.

Workflow 2: Update Website Data (All-in-One)

```bash
npm run update              # or update:small, update:medium, update:large
```

This script runs the benchmarks internally, converts the results to the website format, and saves them to data/latest.json.

Workflow 3: Update Documentation

```bash
npm run docs:build         # or docs:serve for live preview
```

This generates Markdown from the data/ files, copies it to docs/data/, and builds the static site.

Simplest Full Workflow

```bash
npm run update             # Runs benchmarks + updates data/ files
npm run docs:build         # Generates docs + builds site
```

Note: Don't run benchmark before update — the update script runs benchmarks internally. Running both would run benchmarks twice.


Quick Start

Choose your query interface:

Option 1: Cypher Query Language (Industry Standard)

```bash
npm install congraphdb
```

```javascript
const { Database } = require('congraphdb')

// Create or open a database
const db = new Database('./my-graph.cgraph')
await db.init()

// Create a connection
const conn = db.createConnection()

// Define schema
await conn.query(`
  CREATE NODE TABLE User(name STRING, age INT64, PRIMARY KEY (name))
`)

await conn.query(`
  CREATE REL TABLE Knows(FROM User TO User, since INT64)
`)

// Insert data
await conn.query(`
  CREATE (alice:User {name: 'Alice', age: 30})
         -[:Knows {since: 2020}]->
         (bob:User {name: 'Bob', age: 25})
`)

// Query
const result = await conn.query(`
  MATCH (u:User)-[k:Knows]->(f:User)
  WHERE u.name = 'Alice'
  RETURN u.name, k.since, f.name
`)

// Get all results
const rows = result.getAll()
for (const row of rows) {
  console.log(row)
}

db.close()
```

Option 2: JavaScript-Native API (Developer Friendly)

```bash
npm install congraphdb
```

```javascript
const { Database, CongraphDBAPI } = require('congraphdb')

// Initialize
const db = new Database('./my-graph.cgraph')
await db.init()
const api = new CongraphDBAPI(db)

// Create nodes
const alice = await api.createNode('User', { name: 'Alice', age: 30 })
const bob = await api.createNode('User', { name: 'Bob', age: 25 })

// Create relationships
await api.createEdge(alice._id, 'KNOWS', bob._id, { since: 2020 })

// Query with pattern matching
const friends = await api.find({
  subject: alice._id,
  predicate: 'KNOWS',
  object: api.v('friend'),
})

// Fluent traversal API (LevelGraph-compatible)
const friendsOfFriends = await api.nav(alice._id).out('KNOWS').out('KNOWS').values()

// Cleanup
await api.close()
await db.close()
```

💡 For more examples, check out congraphdb-sdk — a comprehensive SDK with sample applications demonstrating various CongraphDB features and usage patterns.


Query Languages

CongraphDB supports two query interfaces for maximum flexibility:

| Interface | Best For | Style |
| ------------------ | ------------------------------------------------------- | ---------------------------------------------- |
| Cypher | Complex queries, graph analytics, power users | Industry-standard graph query language |
| JavaScript API | Simple CRUD, application integration, rapid development | Native JavaScript methods with fluent chaining |

When to Use Each

Use Cypher when:

  • Writing complex graph traversals
  • Using path finding algorithms
  • Performing aggregations and analytics
  • Migrating from Neo4j or other Cypher databases

Use the JavaScript API when you:

  • Build application-specific CRUD operations
  • Prefer programmatic interfaces over query strings
  • Want IDE autocomplete and type safety
  • Only need simple node/edge operations
  • Need a LevelGraph-compatible API
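The LevelGraph-style fluent traversal mentioned above can be pictured with a toy chain in plain JavaScript. The `Nav` class here is purely illustrative — a minimal sketch of chained `.out(label)` hops, not CongraphDB's actual Navigator implementation:

```javascript
// Toy sketch of fluent, LevelGraph-style traversal chaining, as in
// api.nav(id).out('KNOWS').out('KNOWS').values(). Illustration only.
class Nav {
  constructor(edges, frontier) {
    this.edges = edges        // [{ from, label, to }, ...]
    this.frontier = frontier  // Set of current node ids
  }
  // Follow all outgoing edges with the given label from the frontier.
  out(label) {
    const next = new Set()
    for (const e of this.edges) {
      if (e.label === label && this.frontier.has(e.from)) next.add(e.to)
    }
    return new Nav(this.edges, next) // return a new Nav so calls chain
  }
  values() {
    return [...this.frontier]
  }
}

const edges = [
  { from: 'alice', label: 'KNOWS', to: 'bob' },
  { from: 'bob', label: 'KNOWS', to: 'carol' },
]
const nav = new Nav(edges, new Set(['alice']))
console.log(nav.out('KNOWS').out('KNOWS').values()) // → [ 'carol' ]
```

Each `.out()` returns a fresh traversal state, which is what lets hops compose without query strings.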

Cypher Support

CongraphDB supports the Cypher graph query language with the following capabilities:

  • Schema Definition — Node tables with primary keys, relationship tables
  • Pattern Matching — MATCH clauses with variable-length paths (*1..3)
  • Property Filters — Filters in patterns like (u:User {name: "Alice"})
  • Path Finding — shortestPath() and allShortestPaths() functions
  • Pattern Comprehensions — List extraction from graph patterns
  • Temporal Types — Date, DateTime, Duration, and timestamp functions
  • Advanced Features — Regex matching (=~), map literals, multi-label nodes
  • Vector Search — HNSW index for similarity search on embeddings
  • DML Operations — CREATE with properties, SET, DELETE, REMOVE, MERGE with ON MATCH/ON CREATE
  • CASE Expressions — Full conditional logic with WHEN/THEN/ELSE
  • Query Result Modifiers — ORDER BY with ASC/DESC, SKIP, LIMIT clauses
  • Variable-Length Paths — [*..n] syntax for flexible path patterns
  • Union Operator — Combine results from multiple pattern combinations

Full Documentation: Cypher Reference


JavaScript API (v0.1.5+)

CongraphDB provides a JavaScript-native API layer as an alternative to Cypher for developers who prefer a programmatic interface.

Core Classes

  • CongraphDBAPI — Main API with node/edge operations, pattern matching, transactions
  • Navigator — Fluent graph traversal API (LevelGraph-compatible)
  • Schema API — JavaScript-native schema management (v0.1.8+)

```javascript
const { Database, CongraphDBAPI } = require('congraphdb')

const db = new Database('./my-graph.cgraph')
await db.init()
const api = new CongraphDBAPI(db)

// Node operations
const alice = await api.createNode('User', { name: 'Alice', age: 30 })
const bob = await api.createNode('User', { name: 'Bob', age: 25 })
const user = await api.getNode(alice._id)

// Edge operations
await api.createEdge(alice._id, 'KNOWS', bob._id, { since: 2020 })

// Pattern matching
const friends = await api.find({
  subject: alice._id,
  predicate: 'KNOWS',
  object: api.v('friend'),
})

// Fluent traversal
const fof = await api.nav(alice._id).out('KNOWS').out('KNOWS').values()

// Schema management (v0.1.8+)
await api.schema.createNodeTable('User', {
  properties: { id: 'string', name: 'string', age: 'int64' },
  primaryKey: 'id',
})

await api.close()
await db.close()
```

Full Documentation: JavaScript API Reference


Architecture

CongraphDB is built with a modern, layered architecture designed for performance and safety:

  • Rust Core Engine — Memory safety guarantees, zero-cost abstractions, LLVM optimizations
  • napi-rs Bindings — Pre-built native binaries, minimal overhead, async worker pool
  • Columnar Storage — Similar to KuzuDB/DuckDB, efficient compression, vectorized execution
  • Hybrid Query Processing — Cypher and JavaScript API both execute through the same optimized engine

| Layer | Technology | Purpose |
| ------------ | ----------------- | ----------------------------------- |
| Bindings | napi-rs + Node.js | FFI bridge, async execution |
| Storage | memmap2 | Zero-copy file I/O |
| Concurrency | Rayon | Parallel query execution |
| Vector Index | HNSW lib | Approximate nearest neighbor search |
| Compression | Zstd | Column compression |
| Parsing | Logos | Cypher tokenization |
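The columnar-storage choice above is easiest to see with a toy comparison. This plain-JavaScript sketch is illustrative only (CongraphDB's columns live in Rust, not JS arrays), but it shows why aggregations scan less data in a columnar layout:

```javascript
// Row-oriented layout: one object per record; an aggregation over `age`
// still has to walk every whole object.
const rows = [
  { name: 'Alice', age: 30 },
  { name: 'Bob', age: 25 },
  { name: 'Carol', age: 35 },
]
const avgFromRows = rows.reduce((sum, r) => sum + r.age, 0) / rows.length

// Columnar layout: one contiguous array per property. AVG(age) touches
// only the tight `age` array — cache-friendly and easy to vectorize,
// which is the idea behind columnar engines like CongraphDB and DuckDB.
const columns = {
  name: ['Alice', 'Bob', 'Carol'],
  age: Int32Array.from([30, 25, 35]),
}
const avgFromColumns =
  columns.age.reduce((sum, a) => sum + a, 0) / columns.age.length

console.log(avgFromRows, avgFromColumns) // → 30 30
```

Both layouts give the same answer; the columnar one simply avoids loading unrelated properties to compute it.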


Building from Source

Prerequisites

  • Rust 1.70 or later
  • Node.js 20 or later
  • CMake (for building on some platforms)

Build Steps

```bash
# Clone the repository
git clone https://github.com/congraph-ai/congraphdb.git
cd congraphdb

# Install dependencies
npm install

# Build the native module
npm run build

# Run tests
npm test
```

Status

CongraphDB is currently in alpha development (v0.1.11). The core storage engine and transaction system are implemented, along with comprehensive Cypher query support (DML operations, path finding, pattern comprehensions, temporal types, query execution statistics, result modifiers such as ORDER BY, SKIP, and LIMIT, variable-length path traversal, and the union operator), SQL DDL syntax, a Document API for RAG workflows, and a JavaScript-native API for simple CRUD operations.

Test Coverage: 1,000+ tests passing

Query Interfaces: Cypher Query Language + JavaScript API (CongraphDBAPI)

Run npm test to see the current test coverage.


Roadmap

  • [x] Cypher query support - Core operators, expressions, functions, ORDER BY, SKIP/LIMIT, DISTINCT
  • [x] JavaScript API - NodeAPI, EdgeAPI, Pattern matching, Navigator API
  • [x] Path finding functions - shortestPath(), allShortestPaths() with BFS-based algorithms
  • [x] Pattern comprehensions - Single-node and relationship patterns with outer variable scope
  • [x] Temporal types - Date, DateTime, Duration support with temporal functions
  • [x] Multi-label nodes - Nodes with multiple labels support
  • [x] Regex matching - Pattern matching with =~ operator
  • [x] Map literals - Map literal expressions in queries
  • [x] DML operations - CREATE with properties, SET, DELETE, REMOVE, MERGE with ON MATCH/ON CREATE
  • [x] Property filter handling - Property filters in MATCH patterns now work correctly
  • [x] Dynamic property creation - Auto-create columns when setting non-existent properties
  • [x] CASE expressions - Full conditional logic support in queries
  • [x] Query execution statistics - Track query performance metrics
  • [x] Query result modifiers - ORDER BY, SKIP, LIMIT clauses for result control
  • [x] Variable-length path traversal - [*..n] syntax for path patterns
  • [x] Union operator - Combine results from multiple pattern combinations
  • [x] Case-insensitive keywords - All Cypher keywords now case-insensitive
  • [x] Schema API - JavaScript-native schema management (v0.1.8+)
  • [x] Document API - RAG workflow support (v0.1.10+)
  • [x] Lock Manager - Deadlock prevention (v0.1.10+)
  • [x] SQL DDL Syntax - CREATE TABLE/INSERT INTO (v0.1.10+)
  • [x] Query Parameters - $placeholder substitution (v0.1.10+)
  • [ ] OPTIONAL MATCH with variable-length paths
  • [ ] Async iteration support for QueryResult
  • [x] Graph analytics algorithms - PageRank, Louvain, Leiden, Label Propagation, Spectral Clustering, Infomap, SLPA (v0.1.11+)
  • [x] Transaction control - BEGIN and COMMIT statements (v0.1.11+)
  • [x] WAL recovery - Data durability and crash recovery (v0.1.11+)
  • [ ] Distributed queries
  • [ ] GraphQL endpoint
  • [ ] WebAssembly support
  • [ ] Additional index types (B-tree, hash, etc.)
  • [ ] Query optimization and execution plan visualization
  • [ ] Backup and restore utilities
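The query-parameters roadmap item ($placeholder substitution) can be sketched naively in plain JavaScript. A real engine binds values inside the query executor rather than splicing text (to avoid injection and preserve types), so this is an illustration of the placeholder syntax, not CongraphDB's implementation:

```javascript
// Naive sketch of $placeholder substitution for the query-parameters
// roadmap item. Real parameter binding happens in the engine, not by
// text splicing — this only illustrates the $name syntax.
function bindParams(query, params) {
  return query.replace(/\$([A-Za-z_][A-Za-z0-9_]*)/g, (match, name) => {
    if (!(name in params)) throw new Error(`Missing parameter: ${name}`)
    const value = params[name]
    // Quote strings (escaping embedded quotes); pass numbers through.
    return typeof value === 'string'
      ? `'${value.replace(/'/g, "\\'")}'`
      : String(value)
  })
}

const query = 'MATCH (u:User) WHERE u.name = $name AND u.age > $minAge RETURN u'
console.log(bindParams(query, { name: 'Alice', minAge: 21 }))
// → MATCH (u:User) WHERE u.name = 'Alice' AND u.age > 21 RETURN u
```

Keeping parameters out of the query string also lets an engine cache one compiled plan and re-execute it with different values.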

Resources

  • Documentation — Official API and feature documentation
  • Benchmark — Performance benchmarks and comparisons
  • SDK — Comprehensive SDK and usage patterns
  • Demo — Interactive movie database with D3.js visualization
  • CHANGELOG.md — Version history and release notes
  • CONTRIBUTING.md — Contribution guidelines

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

License

MIT License — see LICENSE file for details.

Acknowledgments

CongraphDB is inspired by:

  • KuzuDB — For its columnar storage architecture
  • SQLite — For the embedded database philosophy
  • Apache AGE — For Cypher language support

CongraphDB — The embedded graph database for the edge.