
@claude-vector/core

v2.5.11

Core vector search engine for semantic code search. This package provides the fundamental building blocks for creating embeddings-based search systems.

Features

  • 🚀 High-performance vector similarity search
  • 💾 Built-in caching system
  • 🔧 Configurable chunk processing
  • 📁 Smart project analysis
  • 🎯 Multiple embedding model support
  • 🔄 Extensible architecture

Installation

npm install @claude-vector/core

Quick Start

import { VectorSearchEngine, createDefaultConfig } from '@claude-vector/core';

// Create search engine with default config
const config = createDefaultConfig();
const searchEngine = new VectorSearchEngine(config);

// Initialize and search
await searchEngine.initialize('./your-project');
const results = await searchEngine.search('function definition', { limit: 5 });

console.log(results);

Environment Setup

Set your OpenAI API key:

export OPENAI_API_KEY="sk-your-api-key-here"

Or create a .env file:

OPENAI_API_KEY=sk-your-api-key-here
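Since a missing key typically only surfaces later as a failed embedding request, it can help to check for it up front. A minimal sketch (the `requireApiKey` helper is hypothetical, not part of this package):

```javascript
// Sketch: fail fast when the key is missing, before constructing the engine.
// `requireApiKey` is an illustrative helper, not part of @claude-vector/core.
function requireApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('OPENAI_API_KEY is not set');
  }
  return key;
}
```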

Project Analysis

The ProjectAdapter helps analyze your project structure and generate appropriate configurations:

import { ProjectAdapter } from '@claude-vector/core';

const adapter = new ProjectAdapter('/path/to/project');

// Analyze project type and structure
const projectInfo = await adapter.analyzeProject();
// { type: 'nextjs', language: 'typescript', framework: 'next', ... }

// Get optimized configuration for your project
const config = await adapter.getConfig();

// Get all files matching the configuration
const files = await adapter.getFiles();

Configuration

Default Configuration

{
  search: {
    threshold: 0.7,      // Minimum similarity score (0-1)
    maxResults: 10,      // Maximum results to return
    includeMetadata: true
  },
  embeddings: {
    model: 'text-embedding-3-small',
    batchSize: 100,
    dimensions: 1536
  },
  chunks: {
    maxSize: 1000,       // Maximum tokens per chunk
    minSize: 100,        // Minimum tokens per chunk
    overlap: 200,        // Token overlap between chunks
    splitByParagraph: true,
    preserveCodeBlocks: true
  },
  cache: {
    enabled: true,
    ttl: 3600,          // Cache TTL in seconds
    compression: true
  }
}
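To see how `maxSize` and `overlap` interact, here is a rough sketch of overlap-based chunking. It splits on words as a stand-in for tokens (real tokenization is model-specific), and the function name is illustrative rather than part of this package's API:

```javascript
// Sketch: split text into word-based chunks with overlap.
// Words approximate tokens here; real tokenizers are model-specific.
function chunkWords(text, { maxSize = 1000, overlap = 200 } = {}) {
  const words = text.split(/\s+/).filter(Boolean);
  const step = maxSize - overlap; // each chunk starts `step` words after the last
  if (step <= 0) {
    throw new Error('overlap must be smaller than maxSize');
  }
  const chunks = [];
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + maxSize).join(' '));
    if (start + maxSize >= words.length) break; // last chunk reached the end
  }
  return chunks;
}
```

With `maxSize: 4` and `overlap: 2`, a 10-word input yields four chunks, each sharing its last two words with the start of the next, which is how neighboring chunks keep shared context.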

Custom Configuration

Create a .claude-search.config.js in your project root:

export default {
  patterns: {
    include: ['src/**/*.{js,ts}', 'docs/**/*.md'],
    exclude: ['**/*.test.js', '**/__tests__/**']
  },
  chunks: {
    maxSize: 1500,
    overlap: 300
  },
  search: {
    threshold: 0.8
  }
};

API Reference

VectorSearchEngine

Constructor Options

  • openaiApiKey (string): OpenAI API key
  • embeddingModel (string): Model to use for embeddings
  • searchThreshold (number): Minimum similarity score (0-1)
  • maxResults (number): Maximum results to return
  • cacheEnabled (boolean): Enable/disable caching
  • cacheTTL (number): Cache time-to-live in seconds

Methods

loadIndex(embeddingsPath, chunksPath)

Load pre-computed embeddings and chunks from JSON files.

search(query, options)

Search for similar chunks using semantic similarity.

findRelated(chunkIndex, options)

Find chunks similar to a given chunk.

generateQueryEmbedding(query)

Generate embedding vector for a query string.

getStats()

Get index statistics including chunk count, token count, and size estimates.
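The ranking behind `search()` can be sketched as cosine similarity over stored embeddings, filtered by the configured threshold. The names below are illustrative; the engine's internals may differ:

```javascript
// Sketch: rank chunk embeddings against a query embedding by cosine
// similarity, then apply the threshold/limit options from the config.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function rankChunks(queryEmbedding, chunks, { threshold = 0.7, limit = 10 } = {}) {
  return chunks
    .map((chunk) => ({ ...chunk, score: cosineSimilarity(queryEmbedding, chunk.embedding) }))
    .filter((c) => c.score >= threshold) // drop weak matches
    .sort((a, b) => b.score - a.score)   // best match first
    .slice(0, limit);
}
```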

ProjectAdapter

Methods

analyzeProject()

Analyze project structure and detect type, framework, and features.

getDefaultConfig()

Get default configuration based on project type.

loadCustomConfig()

Load custom configuration from project config files.

getConfig()

Get merged configuration (default + custom).

getFiles(config)

Get all files matching the include/exclude patterns.
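The merge behind `getConfig()` can be sketched as a recursive merge of the custom config over the defaults, assuming both are plain nested objects as in the examples above (the actual merge semantics may differ):

```javascript
// Sketch: merge a custom config over defaults, section by section.
// Nested objects are merged recursively; arrays and scalars are replaced.
function mergeConfig(defaults, custom) {
  const merged = { ...defaults };
  for (const [key, value] of Object.entries(custom)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      merged[key] = mergeConfig(defaults[key] ?? {}, value);
    } else {
      merged[key] = value;
    }
  }
  return merged;
}
```

For example, overriding only `search.threshold` leaves the other `search` defaults (like `maxResults`) intact.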

Caching

The built-in cache system helps improve performance by storing search results:

import { SimpleCache } from '@claude-vector/core';

const cache = new SimpleCache('./cache', 3600); // 1 hour TTL

// Basic operations
await cache.set('key', { data: 'value' });
const value = await cache.get('key');
await cache.delete('key');

// Maintenance
await cache.cleanup(); // Remove expired entries
const stats = await cache.getStats(); // Get cache statistics
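The TTL semantics above can be modeled with a minimal in-memory sketch. The real `SimpleCache` persists to disk; this hypothetical version only illustrates set/get/delete with lazy expiry:

```javascript
// Sketch: in-memory cache with per-entry TTL, mirroring SimpleCache's
// semantics (expired entries are evicted on read or via cleanup()).
class TtlCache {
  constructor(ttlSeconds = 3600) {
    this.ttlMs = ttlSeconds * 1000;
    this.store = new Map(); // key -> { value, expiresAt }
  }
  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // expired: evict lazily on read
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
  delete(key) {
    return this.store.delete(key);
  }
  cleanup() { // remove all expired entries in one pass
    const now = Date.now();
    for (const [key, entry] of this.store) {
      if (now > entry.expiresAt) this.store.delete(key);
    }
  }
}
```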

Advanced Usage

Custom Embedding Models

const engine = new VectorSearchEngine({
  embeddingModel: 'text-embedding-3-large',
  // Dimensions change based on model
  config: { embeddings: { dimensions: 3072 } }
});

Batch Processing

For large codebases, process embeddings in batches:

const config = {
  embeddings: {
    batchSize: 50, // Process 50 chunks at a time
    maxRetries: 3,
    retryDelay: 2000
  }
};
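How `batchSize`, `maxRetries`, and `retryDelay` fit together can be sketched as follows. `embedBatch` is a stand-in for whatever call actually produces embeddings; the helper itself is illustrative, not part of this package:

```javascript
// Sketch: process items in fixed-size batches, retrying a failed batch
// up to maxRetries times with a fixed delay between attempts.
async function processInBatches(items, embedBatch,
    { batchSize = 50, maxRetries = 3, retryDelay = 2000 } = {}) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    let attempt = 0;
    for (;;) {
      try {
        results.push(...(await embedBatch(batch)));
        break; // batch succeeded, move on
      } catch (err) {
        if (++attempt > maxRetries) throw err; // give up after maxRetries
        await new Promise((r) => setTimeout(r, retryDelay)); // back off, retry
      }
    }
  }
  return results;
}
```

Smaller batches reduce the cost of a single failed request at the price of more round trips; the retry delay gives rate limits time to reset.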

Type Definitions

TypeScript users can import the package's bundled type definitions:

import type { 
  SearchOptions, 
  SearchResult, 
  ProjectConfig 
} from '@claude-vector/core';

Performance Tips

  1. Pre-compute embeddings: Generate embeddings once and reuse them
  2. Enable caching: Cache search results for repeated queries
  3. Optimize chunk size: Balance between context and performance
  4. Use appropriate models: Smaller models for speed, larger for accuracy

License

MIT