
@xcelsior/llm

v1.0.0

LLM Integration Services for AWS Bedrock (Claude, Titan)

@xcelsior/llm

Comprehensive LLM Integration Services for AWS Bedrock and Upstash Vector, providing enterprise-grade abstractions for:

  • Text generation with Claude models (with chat memory support)
  • Text embeddings with Amazon Titan
  • Vector storage and similarity search with Upstash
  • Intelligent text splitting for document processing
  • Vector similarity operations

Features

Core Services

  • Bedrock Client: Centralized AWS Bedrock Runtime client management with singleton pattern
  • Embedding Service: Generate embeddings using Amazon Titan models with batch support
  • Text Generation Service: Generate text using Claude models with conversation history
  • Text Splitter: Split documents into optimized chunks for embedding
  • Upstash Vector Store: Store and query embeddings at scale using Upstash Vector
  • Vector Utilities: Cosine similarity, Euclidean distance, and other vector operations
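
As a point of reference, cosine similarity (one of the vector utilities listed above) reduces to a dot product divided by the two vectors' magnitudes. A minimal self-contained sketch of the computation (not the package's actual implementation):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|); 1 means same direction, 0 orthogonal.
function cosineSim(a: number[], b: number[]): number {
    if (a.length !== b.length) throw new Error('Vector dimensions must match');
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSim([1, 0], [2, 0])); // 1 (identical direction)
console.log(cosineSim([1, 0], [0, 1])); // 0 (orthogonal)
```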

Key Capabilities

  • Chat Memory: Support for conversation history in text generation
  • RAG (Retrieval Augmented Generation): Built-in methods for context-aware responses
  • Batch Processing: Efficient batch operations for embeddings and vector storage
  • Error Handling: Comprehensive error handling and logging
  • TypeScript First: Full type safety and IntelliSense support

Installation

pnpm add @xcelsior/llm

Usage

Embedding Generation

import { EmbeddingService } from '@xcelsior/llm';

const embeddingService = new EmbeddingService({ region: 'us-east-1' });

// Generate single embedding
const embedding = await embeddingService.generateEmbedding('Your text here');

// Generate batch embeddings
const results = await embeddingService.generateEmbeddings([
    'Text 1',
    'Text 2',
    'Text 3'
]);

Text Generation

import { TextGenerationService, type ConversationMessage } from '@xcelsior/llm';

const textService = new TextGenerationService({ region: 'us-east-1' });

// Simple text generation
const response = await textService.generateText({
    prompt: 'Write a haiku about coding',
    maxTokens: 1000,
    temperature: 0.7
});

// With conversation history
const history: ConversationMessage[] = [
    { role: 'user', content: 'What is TypeScript?' },
    { role: 'assistant', content: 'TypeScript is a typed superset of JavaScript...' }
];

const contextualResponse = await textService.generateWithHistory(
    'Can you explain interfaces?',
    history
);

// RAG with context
const ragResponse = await textService.generateWithContext(
    'What is the refund policy?',
    ['Context chunk 1', 'Context chunk 2']
);

// RAG with both context and conversation history
const fullResponse = await textService.generateWithContextAndHistory(
    'Can I get a refund for a digital product?',
    ['Refund policy context...'],
    history
);

Text Splitting

import { TextSplitter } from '@xcelsior/llm';

const splitter = new TextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200
});

// Split text
const chunks = await splitter.splitText(longDocument);

// Split with metadata
const result = await splitter.splitTextWithMetadata(longDocument);
console.log(result.metadata); // { totalChunks, averageChunkSize, originalTextLength }
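
Conceptually, chunkSize and chunkOverlap act like a sliding window: each chunk holds up to chunkSize characters, and consecutive chunks share chunkOverlap characters so context isn't lost at the boundaries. The package's splitter is likely smarter than this (e.g. preferring sentence or paragraph boundaries), but a naive character-based sketch illustrates the mechanics:

```typescript
// Naive sliding-window splitter: chunks of up to `chunkSize` chars,
// each starting `chunkSize - chunkOverlap` chars after the previous one.
function naiveSplit(text: string, chunkSize: number, chunkOverlap: number): string[] {
    const step = chunkSize - chunkOverlap;
    if (step <= 0) throw new Error('chunkOverlap must be smaller than chunkSize');
    const chunks: string[] = [];
    for (let start = 0; start < text.length; start += step) {
        chunks.push(text.slice(start, start + chunkSize));
        if (start + chunkSize >= text.length) break; // final chunk reached the end
    }
    return chunks;
}

// 25 chars, size 10, overlap 3 -> windows start at 0, 7, 14, 21.
const chunks = naiveSplit('a'.repeat(25), 10, 3);
console.log(chunks.length); // 4
```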

Upstash Vector Store

import { UpstashVectorStore, type VectorDocument } from '@xcelsior/llm';

const vectorStore = new UpstashVectorStore({
    url: process.env.UPSTASH_VECTOR_URL!,
    token: process.env.UPSTASH_VECTOR_TOKEN!
});

// Store a vector
const doc: VectorDocument = {
    id: 'doc-1-chunk-0',
    embedding: [0.1, 0.2, ...], // 1536-dim vector
    metadata: {
        content: 'This is the text content',
        documentId: 'doc-1',
        chunkIndex: 0
    }
};
await vectorStore.upsert(doc);

// Batch upsert
await vectorStore.upsertBatch([doc1, doc2, doc3]);

// Query similar vectors
const results = await vectorStore.query(queryEmbedding, 10);
results.forEach(result => {
    console.log(`Score: ${result.score}, Content: ${result.metadata?.content}`);
});

// Delete vectors
await vectorStore.delete('doc-1-chunk-0');
await vectorStore.deleteByDocumentId('doc-1'); // Delete all chunks for a document

Vector Similarity

import { 
    cosineSimilarity, 
    euclideanDistance,
    findTopKSimilar 
} from '@xcelsior/llm';

// Calculate similarity
const similarity = cosineSimilarity(embedding1, embedding2);
const distance = euclideanDistance(embedding1, embedding2);

// Find top K similar items
const items = [
    { embedding: [0.1, 0.2, ...], text: 'Item 1' },
    { embedding: [0.3, 0.4, ...], text: 'Item 2' }
];

const topResults = findTopKSimilar(queryEmbedding, items, 5);

Complete RAG Pipeline Example

import {
    EmbeddingService,
    TextSplitter,
    UpstashVectorStore,
    TextGenerationService,
    type ConversationMessage
} from '@xcelsior/llm';

// 1. Initialize services
const embeddingService = new EmbeddingService();
const textSplitter = new TextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
const vectorStore = new UpstashVectorStore({
    url: process.env.UPSTASH_VECTOR_URL!,
    token: process.env.UPSTASH_VECTOR_TOKEN!
});
const textService = new TextGenerationService();

// 2. Process and store documents
async function indexDocument(documentId: string, text: string) {
    // Split text into chunks
    const chunks = await textSplitter.splitText(text);
    
    // Generate embeddings
    const embeddingResults = await embeddingService.generateEmbeddings(chunks);
    
    // Store in vector database
    const vectorDocs = embeddingResults.map((result, i) => ({
        id: `${documentId}-chunk-${i}`,
        embedding: result.embedding,
        metadata: {
            content: result.text,
            documentId,
            chunkIndex: i
        }
    }));
    
    await vectorStore.upsertBatch(vectorDocs);
}

// 3. Query with RAG and conversation history
async function queryWithMemory(
    question: string,
    conversationHistory: ConversationMessage[]
) {
    // Generate query embedding
    const queryEmbedding = await embeddingService.generateEmbedding(question);
    
    // Find relevant chunks
    const vectorResults = await vectorStore.query(queryEmbedding, 5);
    const contextChunks = vectorResults
        .map(r => r.metadata?.content)
        .filter((c): c is string => typeof c === 'string');
    
    // Generate answer with context and history
    const response = await textService.generateWithContextAndHistory(
        question,
        contextChunks,
        conversationHistory
    );
    
    return response.text;
}

Configuration

All services accept an optional configuration object:

{
    region?: string;           // AWS region (default: 'us-east-1')
    embeddingModel?: string;   // Titan model ID (default: 'amazon.titan-embed-text-v2:0')
    textModel?: string;        // Claude model ID (default: 'anthropic.claude-3-sonnet-20240229-v1:0')
}
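
For example, to target a different region and a lighter Claude model (the model ID below is an illustrative Bedrock identifier; check which models your account has access to):

```typescript
import { TextGenerationService } from '@xcelsior/llm';

// Any omitted field falls back to the defaults listed above.
const textService = new TextGenerationService({
    region: 'eu-west-1',
    textModel: 'anthropic.claude-3-haiku-20240307-v1:0'
});
```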

Environment Variables

For Upstash Vector Store:

  • UPSTASH_VECTOR_URL: Your Upstash Vector REST URL
  • UPSTASH_VECTOR_TOKEN: Your Upstash Vector REST token
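
For local development these can live in your shell profile or a .env file (the values below are placeholders; use the REST URL and token from your Upstash console):

```shell
export UPSTASH_VECTOR_URL="https://your-index.upstash.io"
export UPSTASH_VECTOR_TOKEN="your-rest-token"
```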

License

MIT