
blink-query

v1.0.0


Blink

DNS-inspired knowledge resolution layer for AI agents


Blink sits between your data and your AI agent. It turns documents from anywhere — files, databases, web pages, git repos — into typed knowledge records with DNS-like resolution semantics.

Your Data → [Load → Store → Find] → Your Agent

Quick Start

Installation

npm install blink-query

# Optional: for PDF/DOCX support
npm install llamaindex @llamaindex/readers

# Optional: for PostgreSQL ingestion
npm install pg

Library API

import { Blink, extractiveSummarize } from 'blink-query';

const blink = new Blink();

// Ingest a directory
await blink.ingestDirectory('./docs', {
  summarize: extractiveSummarize(500),
  namespacePrefix: 'knowledge'
});

// Resolve knowledge
const response = blink.resolve('knowledge/readme');
console.log(response.record.summary);

// Query with DSL
const results = blink.query('knowledge where type = "SUMMARY" limit 5');

// Search
const found = blink.search('authentication jwt');

blink.close();

CLI

# Ingest files
blink ingest ./my-docs --prefix knowledge --tags "v1,docs"

# Resolve a path
blink resolve knowledge/readme

# Search
blink search "authentication api"

# Query with DSL
blink query 'knowledge where tags contains "urgent" order by hit_count desc'

# List records in a namespace
blink list knowledge --limit 20 --offset 0

# Manage zones
blink zones

# Move and delete
blink move knowledge/old-path knowledge/new-path
blink delete knowledge/outdated-doc

All CLI commands support --json for machine-readable output and --db to target a specific database file.

MCP Server (for AI agents)

blink serve
# AI agent connects via stdio MCP protocol

9 tools available: blink_resolve, blink_save, blink_search, blink_list, blink_query, blink_get, blink_delete, blink_move, blink_zones.

In-Memory Mode (for testing)

const blink = new Blink({ dbPath: ':memory:' });

The 5 Record Types

| Type | What it tells the agent | Example |
|------------|----------------------------------------------|----------------------------------|
| SUMMARY | Read this directly, you have what you need | Project overview, meeting notes |
| META | Structured data, parse it | { status: "active", contributors: 12 } |
| COLLECTION | Browse children, pick what's relevant | Table of contents, directory listings |
| SOURCE | Summary here, fetch source if you need depth | Large files, external APIs |
| ALIAS | Follow the redirect to the real record | Shortcuts, renames |

Core idea: the record type tells the agent how to consume the record; the content carries the domain semantics.
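
These consumption rules are straightforward to wire into an agent loop. A minimal sketch in plain JavaScript; the field names (summary, content, children, sourceRef, target) are illustrative, not necessarily the library's exact record schema:

```javascript
// Decide what an agent should do with a record, based on its type alone.
function consumptionPlan(record) {
  switch (record.type) {
    case 'SUMMARY':    // the summary itself is the answer
      return { action: 'read', text: record.summary };
    case 'META':       // structured data: parse, don't read
      return { action: 'parse', data: JSON.parse(record.content) };
    case 'COLLECTION': // browse child records, pick the relevant ones
      return { action: 'browse', children: record.children };
    case 'SOURCE':     // summary is here; fetch the source only if depth is needed
      return { action: 'read-or-fetch', text: record.summary, source: record.sourceRef };
    case 'ALIAS':      // redirect: resolve the target instead
      return { action: 'follow', target: record.target };
    default:
      throw new Error(`Unknown record type: ${record.type}`);
  }
}
```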


Data Sources

Directory Ingestion

await blink.ingestDirectory('./docs', {
  summarize: extractiveSummarize(500),
  namespacePrefix: 'docs',
  maxFileSize: 1048576,    // 1MB limit (default)
  includeHidden: false,     // skip dotfiles (default)
  onProgress: ({ current, total, file }) => {
    console.log(`${current}/${total}: ${file}`);
  }
});

Supports 50+ file extensions out of the box. Skips empty files, hidden files, and files over the size limit automatically.

PostgreSQL Ingestion

await blink.ingestFromPostgres({
  connectionString: 'postgresql://localhost/mydb',
  query: 'SELECT id, title, body FROM articles',
  textColumn: 'body',
  idColumn: 'id',
  titleColumn: 'title'
});

Web Ingestion

import { loadFromWeb } from 'blink-query';

const docs = await loadFromWeb([
  'https://example.com/docs/getting-started',
  'https://example.com/docs/api-reference'
]);
await blink.ingest(docs, { namespacePrefix: 'web' });

Git Ingestion

import { loadFromGit } from 'blink-query';

const docs = await loadFromGit({
  repoPath: '/path/to/repo',
  ref: 'main',
  extensions: ['.md', '.ts']
});
await blink.ingest(docs, { namespacePrefix: 'repo' });

LLM-Powered Summarization

import { Blink, configureLLM } from 'blink-query';

// Configure via environment variables:
// BLINK_LLM_PROVIDER=openai
// BLINK_LLM_MODEL=gpt-4o-mini
// OPENAI_API_KEY=...

const summarize = configureLLM();

await blink.ingestDirectory('./docs', {
  summarize,
  namespacePrefix: 'knowledge'
});

Or bring your own summarizer:

await blink.ingest(docs, {
  summarize: async (text, metadata) => {
    // Call any LLM, return a string
    return await myLLM.summarize(text);
  }
});

Query DSL

SQL-like query language for filtering records:

namespace where field op value [and|or condition] [order by field asc|desc] [limit N] [offset N] [since "date"]

Examples

# Filter by type
blink query 'docs where type = "SUMMARY"'

# Tag search
blink query 'projects where tags contains "urgent" order by hit_count desc'

# Boolean logic
blink query 'docs where type = "SOURCE" and hit_count > 10'

# NOT operator
blink query 'docs where not type = "ALIAS"'

# IN operator
blink query 'docs where type in ("SUMMARY", "META")'

# Pagination
blink query 'docs where type = "SUMMARY" limit 10 offset 20'

# Date filtering
blink query 'docs since "2026-01-01"'
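
For intuition, each DSL clause maps onto ordinary filtering, sorting, and slicing. A rough plain-JavaScript equivalent of the tag query above, with a `limit 5` added (field names follow the DSL; the stored schema may differ):

```javascript
// Approximates: projects where tags contains "urgent" order by hit_count desc limit 5
function runQuery(records) {
  return records
    .filter(r => r.namespace.startsWith('projects')) // namespace scope
    .filter(r => r.tags.includes('urgent'))          // where tags contains "urgent"
    .sort((a, b) => b.hit_count - a.hit_count)       // order by hit_count desc
    .slice(0, 5);                                    // limit 5
}
```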

Resolution

const response = blink.resolve('projects/orpheus/readme');

switch (response.status) {
  case 'OK':         // record found; use response.record
    break;
  case 'STALE':      // record found, but its TTL has expired
    break;
  case 'NXDOMAIN':   // no record at this path
    break;
  case 'ALIAS_LOOP': // circular alias chain detected
    break;
}

Resolution follows DNS semantics:

  • Direct path lookup
  • ALIAS chains (up to 5 hops)
  • Auto-COLLECTION: resolving a namespace generates a listing of child records
  • TTL expiry: records past their TTL return with STALE status
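
The ALIAS and loop rules amount to a bounded pointer chase. A simplified sketch over a plain Map (the real resolver also handles TTL expiry and auto-COLLECTION listings; whether exceeding the hop limit is reported exactly as ALIAS_LOOP is an assumption here):

```javascript
const MAX_HOPS = 5;

// zone: Map of path -> record; ALIAS records carry a `target` path.
function resolvePath(zone, path) {
  const seen = new Set();
  for (let hops = 0; hops <= MAX_HOPS; hops++) {
    if (seen.has(path)) return { status: 'ALIAS_LOOP' }; // revisited a path: cycle
    seen.add(path);
    const record = zone.get(path);
    if (!record) return { status: 'NXDOMAIN' };          // nothing at this path
    if (record.type !== 'ALIAS') return { status: 'OK', record };
    path = record.target;                                // follow the redirect
  }
  return { status: 'ALIAS_LOOP' }; // hop limit exceeded, treated like a loop here
}
```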

API Design

All CRUD operations are synchronous — no await needed:

| Method | Returns | Description |
|-----------------|---------------------|-----------------------------|
| resolve(path) | { status, record } | DNS-like path resolution |
| get(path) | record \| null | Direct lookup |
| save(input) | record | Create or update |
| delete(path) | boolean | Remove a record |
| move(from, to) | record \| null | Move/rename |
| search(query) | record[] | FTS5 keyword search |
| list(namespace) | record[] | List records in namespace |
| query(dsl) | record[] | Query DSL filtering |

Only ingestion methods (ingest, ingestDirectory, ingestFromPostgres) are async.

Error Handling

  • resolve() returns a status object — check status before using record
  • get() returns null if the path doesn't exist
  • delete() returns false if the record wasn't found
  • move() returns null if the source doesn't exist
  • query() throws on invalid query syntax
  • save() throws on invalid input (e.g., ALIAS without target)
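
Putting those conventions together, a defensive call pattern looks like this. The stub below only mimics the documented return shapes for illustration; it is not the real Blink class:

```javascript
// Treat anything other than a clean 'OK' resolution as a miss.
function getSummaryOrNull(blink, path) {
  const response = blink.resolve(path);
  if (response.status !== 'OK') return null; // STALE, NXDOMAIN, ALIAS_LOOP all fall through
  return response.record.summary ?? null;
}

// Stub with the documented return conventions (illustration only).
const stub = {
  resolve(path) {
    if (path === 'docs/readme') {
      return { status: 'OK', record: { summary: 'Overview of the project.' } };
    }
    return { status: 'NXDOMAIN' };
  },
};
```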

Input Validation

All input is validated at the save boundary:

  • Namespaces: no path traversal (..), no special characters (#, ?, %)
  • Titles: non-empty, trimmed
  • Content: max 10MB
  • Tags: deduplicated, cleaned
  • Record types: must be one of the 5 valid types
  • PostgreSQL WHERE clauses: checked for injection patterns
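
The namespace rules can be checked up front before calling save(). A sketch of the constraints as documented above; the library's internal validation may differ in detail:

```javascript
// Reject empty names, path traversal, and the special characters the docs call out.
function isValidNamespace(ns) {
  if (typeof ns !== 'string' || ns.trim() === '') return false;
  if (ns.split('/').includes('..')) return false; // no path traversal
  if (/[#?%]/.test(ns)) return false;             // no #, ?, or %
  return true;
}
```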

Architecture

┌─────────────────────────────────────────────────────────────┐
│                      Blink System                           │
├─────────────┬───────────────┬──────────────┬───────────────┤
│  Ingestion  │    Storage    │  Resolution  │  Consumption  │
│             │               │              │               │
│ Directory   │  SQLite DB    │  Resolver    │   Library     │
│ PostgreSQL  │  FTS5 Search  │  Query DSL   │   CLI         │
│ Web / Git   │  Transactions │  Auto-COLL   │   MCP Server  │
│ LLM Summary │  Zones        │  ALIAS chain │   JSON output │
└─────────────┴───────────────┴──────────────┴───────────────┘

See docs/ARCHITECTURE.md for a full plain-language guide.


Development

# Install dependencies
npm install

# Build (parser + library + CLI)
npm run build

# Run tests (excludes integration tests)
npm test

# Run all tests (including PostgreSQL integration)
npm run test:all

# Build PEG parser only
npm run build:parser

# Pack for inspection before publishing
npm pack --dry-run

# CLI (dev mode)
node dist/index.js --help

Use Cases

  • Agent memory — Store conversation context with semantic types
  • Project knowledge base — Ingest codebases, docs, wikis
  • API caching — Cache API responses with TTL
  • Research notes — Structure knowledge with namespaces
  • Configuration — Store settings as META records

License

MIT — see LICENSE


Questions? Open an issue or read the architecture docs.