
@intelagent/knowledge-grid

v0.1.1


Structured context for AI agents — a layered knowledge grid with vector indexing, domain inference, intent-driven retrieval, and token-budgeted prompt composition. Better context, better agent performance. Zero dependencies.


Better context for AI agents. A layered knowledge grid that gives your agent structured, relevant context — so it performs better on every interaction. Zero dependencies.

npm install @intelagent/knowledge-grid

Why this exists

AI agents are only as good as the context they receive. Most RAG systems dump flat vector results into the prompt — no structure, no priority, no awareness of what kind of knowledge is being retrieved. The result is diluted context and inconsistent agent behaviour.

Knowledge Grid organises agent knowledge into a 4-layer lattice crossed with 9 domains, giving your agent structured context that distinguishes live state from learned experience from reference docs from system knowledge. The agent gets the right context, in the right priority, every time — and performs better because of it.

               sales  marketing  support  analytics  operations  social  general  platform  integrations
             ┌───────────────────────────────────────────────────────────────────────────────────────────┐
  system     │ Platform best practices, collective intelligence, curated seed knowledge                  │
  base       │ Documents, integration specs, API references, tool capabilities                           │
  experience │ Learned patterns, domain rules, memories, approval history                                │
  live       │ Current tasks, active workflows, dashboard state, recent events                           │
             └───────────────────────────────────────────────────────────────────────────────────────────┘

Each layer has a different weight during retrieval — live data ranks highest, system knowledge ranks lowest — so agents naturally prioritise actionable context over static reference material.
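The weighting can be pictured as a small lookup table. The layer names and default weights below come from the Architecture section; the concrete types are illustrative, not the package's internal representation:

```typescript
// Layer axis of the grid, as documented.
type Layer = 'live' | 'experience' | 'base' | 'system';

// Default retrieval weights per layer (from the Architecture table below).
const LAYER_WEIGHTS: Record<Layer, number> = {
  live: 1.0,        // current tasks, active workflows — highest priority
  experience: 0.85, // learned patterns, memories
  base: 0.7,        // documents, integration specs
  system: 0.5,      // curated platform knowledge — lowest priority
};

// Given two hits with equal similarity, the live-layer hit outranks
// the system-layer hit purely on layer weight.
const similarity = 0.8;
const liveScore = similarity * LAYER_WEIGHTS.live;     // 0.8
const systemScore = similarity * LAYER_WEIGHTS.system; // 0.4
```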

Quick start

import {
  initKnowledgeGrid,
  InMemoryStorageAdapter,
  InMemoryVectorAdapter,
  indexTask,
  indexKnowledgeDoc,
  searchGrid,
  composeGridContext,
  renderGridContext,
} from '@intelagent/knowledge-grid';

// 1. Initialise with adapters (in-memory for dev, bring your own for production)
initKnowledgeGrid({
  storage: new InMemoryStorageAdapter(),
  vector: new InMemoryVectorAdapter(),
});

// 2. Index some knowledge
await indexTask('agent-1', {
  id: 'task-1',
  title: 'Review Q1 sales pipeline',
  description: 'Audit all open deals and flag at-risk opportunities',
  status: 'in_progress',
  priority: 'high',
});

await indexKnowledgeDoc('agent-1', {
  id: 'doc-1',
  filename: 'sales-playbook.md',
  content: 'When a deal is at risk, schedule a check-in within 48 hours...',
});

// 3. Search with intent-driven retrieval
const results = await searchGrid({
  agentId: 'agent-1',
  query: 'which deals need attention?',
});

// 4. Compose token-budgeted prompt context
const context = composeGridContext(results.results, results.queryIntent);
const promptSection = renderGridContext(context);

// Inject `promptSection` into your LLM system prompt

Architecture

Layers (vertical axis)

| Layer      | What lives here                                 | Default weight |
|------------|-------------------------------------------------|----------------|
| live       | Current tasks, active workflows, recent events  | 1.0            |
| experience | Learned patterns, domain rules, memories        | 0.85           |
| base       | Documents, integration specs, tool capabilities | 0.7            |
| system     | Platform best practices, curated seed knowledge | 0.5            |

Domains (horizontal axis)

9 built-in: sales, marketing, support, analytics, operations, social, general, platform, integrations

Custom domains are supported — pass any string as a domain and it participates in the same retrieval and classification system.

Retrieval pipeline

Query → Domain inference (keyword-based, no LLM) → Layer priority → Vector search → Ranking

  finalScore = similarity × layerWeight × confidence × domainBoost

The pipeline in API terms:

  1. classifyQueryIntent(query) — identifies relevant domains and layer priority from keywords
  2. searchGrid(options) — embeds the query, searches the vector collection, ranks results
  3. composeGridContext(results, intent) — organises results into token-budgeted domain sections
  4. renderGridContext(context) — renders to markdown for prompt injection
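The ranking formula can be sketched as a pure function. The field names on the hit object are assumptions for illustration; only the formula itself comes from the docs:

```typescript
// Hypothetical shape of a vector hit enriched with grid metadata.
interface RankableHit {
  similarity: number;  // cosine similarity from the vector search
  layerWeight: number; // 1.0 (live) down to 0.5 (system)
  confidence: number;  // confidence attached to the entry at index time
  domainBoost: number; // > 1 when the entry's domain matches the query intent
}

// finalScore = similarity × layerWeight × confidence × domainBoost
function finalScore(hit: RankableHit): number {
  return hit.similarity * hit.layerWeight * hit.confidence * hit.domainBoost;
}

// Two hits with identical similarity: the live-layer, domain-matched hit wins.
const liveHit = { similarity: 0.7, layerWeight: 1.0, confidence: 0.9, domainBoost: 1.2 };
const systemHit = { similarity: 0.7, layerWeight: 0.5, confidence: 0.9, domainBoost: 1.0 };
const ranked = [systemHit, liveHit].sort((a, b) => finalScore(b) - finalScore(a));
```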

Indexing pipeline

Each entity is:

  1. Converted to a text representation
  2. SHA-256 hashed for deduplication (unchanged content skips re-embedding)
  3. Stored via your StorageAdapter
  4. Embedded and stored via your VectorAdapter
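Step 2's deduplication can be sketched with Node's crypto module. The skip-if-unchanged check is the idea being illustrated; the package's actual storage calls may differ:

```typescript
import { createHash } from 'node:crypto';

// SHA-256 of an entity's text representation — unchanged content
// produces the same hash, so re-embedding can be skipped.
function contentHash(text: string): string {
  return createHash('sha256').update(text).digest('hex');
}

// Illustrative dedup check against a previously stored hash.
function needsReindex(text: string, storedHash: string | null): boolean {
  return storedHash === null || contentHash(text) !== storedHash;
}

const doc = 'When a deal is at risk, schedule a check-in within 48 hours...';
const hash = contentHash(doc);
// needsReindex(doc, hash) → false: same content, embedding is skipped
// needsReindex(doc + ' (edited)', hash) → true: changed content is re-embedded
```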

Built-in indexers: indexTask, indexWorkflow, indexKnowledgeDoc, indexLearnedPattern, indexMemory, indexSystemBestPractice

Connection indexers: autoIngestMCPServer, autoIngestSDK

Adapters

The grid doesn't depend on any database or vector store. You provide two adapters:

StorageAdapter

Stores grid entry rows (the metadata, not the vectors).

interface StorageAdapter {
  findEntry(agentId: string, sourceType: string, sourceId: string): Promise<GridEntryRow | null>;
  createEntry(data: Omit<GridEntryRow, 'id' | 'access_count' | 'last_accessed' | 'created_at' | 'updated_at'>): Promise<GridEntryRow>;
  updateEntry(id: string, data: Partial<GridEntryRow>): Promise<GridEntryRow>;
  deleteEntry(id: string): Promise<void>;
  findEntriesByIds(ids: string[]): Promise<GridEntryRow[]>;
  findEntriesBySourceType(agentId: string, sourceType: string, options?: { limit?: number }): Promise<GridEntryRow[]>;
  incrementAccessCount(ids: string[]): Promise<void>;
  deleteEntriesByPrefix(agentId: string, sourceIdPrefix: string): Promise<void>;
}

VectorAdapter

Handles embedding generation and similarity search.

interface VectorAdapter {
  generateEmbedding(text: string): Promise<number[] | null>;
  isEmbeddingAvailable(): boolean;
  storeEmbedding(collection: string, embedding: number[], metadata: Record<string, unknown>): Promise<string>;
  deleteEmbeddings(collection: string, metadataFilter: Record<string, unknown>): Promise<void>;
  searchEmbeddings(options: {
    collection: string;
    queryVector: number[];
    topK: number;
    minScore: number;
    metadataFilters?: Record<string, unknown>;
  }): Promise<VectorSearchResult>;
}
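At its core, searchEmbeddings in any adapter is a similarity ranking. A minimal cosine-similarity version over in-memory vectors might look like this (the stored-record shape is an assumption, not the package's type):

```typescript
// Illustrative stored record — a real adapter defines its own shape.
interface StoredVector {
  id: string;
  embedding: number[];
  metadata: Record<string, unknown>;
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored vectors against a query vector and drop low scores —
// the essence of searchEmbeddings({ queryVector, topK, minScore }).
function rankVectors(
  store: StoredVector[],
  queryVector: number[],
  topK: number,
  minScore: number,
) {
  return store
    .map((v) => ({ ...v, score: cosineSimilarity(v.embedding, queryVector) }))
    .filter((v) => v.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```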

Built-in: In-memory adapters

For development, testing, and prototyping:

import { InMemoryStorageAdapter, InMemoryVectorAdapter } from '@intelagent/knowledge-grid';

// Uses a deterministic hash-based embedding (not semantic — for testing only)
const storage = new InMemoryStorageAdapter();
const vector = new InMemoryVectorAdapter();

// Or, with real embeddings (OpenAI, Cohere, local model, etc.)
const semanticVector = new InMemoryVectorAdapter({
  embedFn: async (text) => {
    const response = await openai.embeddings.create({
      input: text,
      model: 'text-embedding-3-small',
    });
    return response.data[0].embedding;
  },
  dimensions: 1536,
});

Production adapter example (PostgreSQL + pgvector)

import { StorageAdapter, VectorAdapter } from '@intelagent/knowledge-grid';
import { PrismaClient } from '@prisma/client';

class PrismaStorageAdapter implements StorageAdapter {
  constructor(private prisma: PrismaClient) {}

  async findEntry(agentId: string, sourceType: string, sourceId: string) {
    return this.prisma.knowledge_grid_entries.findFirst({
      where: { agent_id: agentId, source_type: sourceType, source_id: sourceId },
    });
  }
  // ... implement remaining methods
}

System layer seed

New agents start with 10 curated best practices covering error recovery, API auth, approval thresholds, context prioritisation, and more:

import { seedSystemLayerForAgent } from '@intelagent/knowledge-grid';

const seeded = await seedSystemLayerForAgent('agent-1');
// seeded = 10 (first run), 0 (idempotent on repeat)

Connection auto-ingest

When you connect an MCP server or SDK, the grid automatically indexes each tool/method:

import { autoIngestMCPServer, autoIngestSDK } from '@intelagent/knowledge-grid';

await autoIngestMCPServer('agent-1', {
  id: 'server-1',
  name: 'GitHub',
  url: 'https://mcp.github.com',
  discoveredTools: [
    { name: 'create_issue', description: 'Create a GitHub issue', inputSchema: { properties: { title: { type: 'string' } } } },
  ],
});

This creates grid entries for each tool so they surface in retrieval when relevant — your agent doesn't need 150 tools loaded, just the 5-10 the grid identifies as relevant to the current query.
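The effect on tool loading can be sketched as a filter over grid results. The sourceType value and the result shape here are assumptions for illustration, not the package's actual fields:

```typescript
// Hypothetical grid result shape — field names are illustrative.
interface GridHit {
  sourceType: string; // e.g. 'mcp_tool' for auto-ingested tools
  sourceId: string;   // the tool name
  score: number;      // final ranked score from the grid
}

// Load only the handful of tools the grid ranked as relevant,
// instead of every tool from every connected server.
function selectRelevantTools(hits: GridHit[], maxTools = 10): string[] {
  return hits
    .filter((h) => h.sourceType === 'mcp_tool')
    .sort((a, b) => b.score - a.score)
    .slice(0, maxTools)
    .map((h) => h.sourceId);
}
```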

License

MIT