
agentic-memory-vector v1.0.3

Zero-dependency, in-memory vector database for AI Agents. Supports cosine similarity and persistence.

Downloads: 395

🧠 Agentic Memory

A blazing-fast, zero-dependency, in-memory vector database for AI Agents.

Give your AI "long-term memory" without setting up heavy infrastructure like Pinecone, Weaviate, or pgvector. agentic-memory-vector uses pure linear algebra (cosine similarity) with Float32Array for maximum performance across Node.js, Edge runtimes, and browsers.



✨ Features

  • 🚀 Zero Dependencies - Pure TypeScript, no external packages
  • ⚡ Blazing Fast - Uses Float32Array for optimized vector operations
  • 🌐 Universal - Runs in Node.js, Vercel Edge, Cloudflare Workers, Deno, and browsers
  • 🔌 Model Agnostic - Works with OpenAI, Cohere, HuggingFace, or custom embeddings
  • 💾 Persistence - Simple JSON serialization for saving/loading memory
  • 🎯 Type-Safe - Full TypeScript support with comprehensive types
  • 🔍 Semantic Search - Find relevant memories using cosine similarity

📦 Installation

npm install agentic-memory-vector
yarn add agentic-memory-vector
pnpm add agentic-memory-vector

🚀 Quick Start

import { AgenticMemory } from "agentic-memory-vector";
import OpenAI from "openai";

const openai = new OpenAI();

// 1. Define your embedder (Dependency Injection)
const embedder = async (text: string) => {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return response.data[0].embedding;
};

// 2. Initialize the memory store
const memory = new AgenticMemory(embedder);

// 3. Add memories with optional metadata
await memory.add("My favorite color is blue", {
  user: "alice",
  category: "preferences",
});
await memory.add("I have a dog named Rex", { user: "alice", category: "pets" });
await memory.add("I work as a software engineer", {
  user: "alice",
  category: "career",
});

// 4. Search semantically
const results = await memory.search("What pets do I have?");
console.log(results[0].content); // "I have a dog named Rex"
console.log(results[0].score); // 0.95 (similarity score)
console.log(results[0].metadata); // { user: "alice", category: "pets" }

📚 API Reference

AgenticMemory

Constructor

new AgenticMemory(embedder: EmbedderFunction)

Parameters:

  • embedder: An async function that converts text to a vector (number array)

Example:

const memory = new AgenticMemory(async (text) => {
  // Your embedding logic here
  return [0.1, 0.2, 0.3, ...]; // Returns number[]
});

add(content: string, metadata?: DocumentMetadata): Promise<string>

Add a new memory to the database.

Parameters:

  • content: The text content to store
  • metadata (optional): Additional data to attach (tags, timestamps, etc.)

Returns: Promise resolving to the unique ID of the stored memory

Example:

const id = await memory.add("User prefers dark mode", {
  category: "ui-preferences",
  timestamp: Date.now(),
});
console.log(id); // "550e8400-e29b-41d4-a716-446655440000"

search(query: string, topK?: number, minScore?: number): Promise<SearchResult[]>

Search for semantically similar memories.

Parameters:

  • query: The search query text
  • topK (default: 3): Maximum number of results to return
  • minScore (default: 0.7): Minimum similarity score (0.0 to 1.0)

Returns: Array of search results sorted by relevance

Example:

const results = await memory.search("food preferences", 5, 0.8);
results.forEach((result) => {
  console.log(`Score: ${result.score}`);
  console.log(`Content: ${result.content}`);
  console.log(`Metadata:`, result.metadata);
});

toJSON(): string

Serialize the entire memory store to JSON for persistence.

Returns: JSON string containing all memories

Example:

import fs from "fs/promises";

const jsonData = memory.toJSON();
await fs.writeFile("memory-backup.json", jsonData);

fromJSON(json: string): void

Restore memory from a JSON string.

Parameters:

  • json: JSON string from a previous toJSON() call

Example:

import fs from "fs/promises";

const jsonData = await fs.readFile("memory-backup.json", "utf-8");
memory.fromJSON(jsonData);

💡 Advanced Usage

Persistence Example

import { AgenticMemory } from "agentic-memory-vector";
import fs from "fs/promises";

// Save memory to disk
async function saveMemory(memory: AgenticMemory, filepath: string) {
  const data = memory.toJSON();
  await fs.writeFile(filepath, data, "utf-8");
}

// Load memory from disk
async function loadMemory(memory: AgenticMemory, filepath: string) {
  const data = await fs.readFile(filepath, "utf-8");
  memory.fromJSON(data);
}

// Usage
const memory = new AgenticMemory(embedder);
await memory.add("Important information");

await saveMemory(memory, "./memory.json");
// ... later ...
await loadMemory(memory, "./memory.json");

Custom Embeddings (Local Models)

You can use any embedding model, including local ones:

import { pipeline } from "@xenova/transformers";

// Load a local transformer model
const embedder = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2",
);

const memory = new AgenticMemory(async (text) => {
  const output = await embedder(text, { pooling: "mean", normalize: true });
  return Array.from(output.data);
});

Multi-User Memory with Metadata Filtering

// Add memories for different users
await memory.add("Likes coffee", { userId: "user_1" });
await memory.add("Prefers tea", { userId: "user_2" });
await memory.add("Drinks water", { userId: "user_1" });

// Search and filter by user
const allResults = await memory.search("What does the user drink?", 10, 0.5);
const user1Results = allResults.filter((r) => r.metadata.userId === "user_1");
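Since search() has no built-in metadata filter, the over-fetch-then-filter pattern above can be wrapped in a small helper. This is a hypothetical convenience function, not part of the library's API; it is typed structurally so it works with anything exposing a compatible search() method:

```typescript
// Hypothetical helper (not part of the library's API): over-fetch with a
// generous topK and low minScore, filter by metadata client-side, then
// trim to the requested count.
interface FilterableResult {
  id: string;
  content: string;
  metadata: Record<string, any>;
  score: number;
}

interface Searchable {
  search(query: string, topK?: number, minScore?: number): Promise<FilterableResult[]>;
}

async function searchForUser(
  memory: Searchable,
  query: string,
  userId: string,
  topK = 3,
): Promise<FilterableResult[]> {
  const all = await memory.search(query, 50, 0.5);
  return all.filter((r) => r.metadata.userId === userId).slice(0, topK);
}
```

Note that filtering after retrieval means a heavily skewed dataset could starve one user of results; raising the over-fetch count is the simple mitigation.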

Retrieval-Augmented Generation (RAG)

import { AgenticMemory } from "agentic-memory-vector";
import OpenAI from "openai";

const openai = new OpenAI();
const memory = new AgenticMemory(async (text) => {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
});

// Index your knowledge base
await memory.add("The company was founded in 2020");
await memory.add("We have offices in New York and London");
await memory.add("Our main product is an AI assistant");

// RAG query
async function ragQuery(question: string) {
  // 1. Retrieve relevant context
  const context = await memory.search(question, 3);

  // 2. Build prompt with context
  const prompt = `
Context:
${context.map((c) => c.content).join("\n")}

Question: ${question}

Answer:`;

  // 3. Generate answer
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
  });

  return completion.choices[0].message.content;
}

const answer = await ragQuery("When was the company founded?");
console.log(answer); // "The company was founded in 2020"

🏗️ How It Works

agentic-memory-vector implements a simple but powerful vector search algorithm:

  1. Embedding Generation: Text is converted to high-dimensional vectors using your provided embedder
  2. Storage: Vectors are stored in memory using Float32Array for optimal performance
  3. Search: Cosine similarity is calculated between the query vector and all stored vectors
  4. Ranking: Results are sorted by similarity score and filtered by threshold

The library uses pure linear algebra with no external dependencies, making it:

  • Fast: Direct Float32Array operations
  • Portable: Runs anywhere JavaScript runs
  • Simple: No database setup required
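The search-and-rank steps above can be sketched in a few lines. This is a minimal illustration of the math described, assumed from the description rather than taken from the library's source:

```typescript
// Cosine similarity between two vectors: dot product divided by the
// product of their magnitudes. Assumes equal-length, non-zero vectors.
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every stored vector against the query, drop anything below
// minScore, sort by similarity, and keep the topK best matches.
function rank(
  query: Float32Array,
  items: { id: string; vector: Float32Array }[],
  topK = 3,
  minScore = 0.7,
): { id: string; score: number }[] {
  return items
    .map((item) => ({ id: item.id, score: cosineSimilarity(query, item.vector) }))
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

This brute-force scan is O(n·d) per query (n vectors of dimension d), which is why the library suits small-to-medium datasets rather than millions of vectors.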

🔬 TypeScript Types

interface DocumentMetadata {
  [key: string]: any;
}

interface MemoryItem {
  id: string;
  content: string;
  vector: Float32Array;
  metadata: DocumentMetadata;
}

interface SearchResult {
  id: string;
  content: string;
  metadata: DocumentMetadata;
  score: number; // Cosine similarity (0.0 to 1.0)
}

type EmbedderFunction = (text: string) => Promise<number[]>;

🤔 When to Use This

✅ Perfect For:

  • Prototyping AI agents with memory
  • Serverless/Edge deployments (Vercel, Cloudflare Workers)
  • Small to medium datasets (< 100k vectors)
  • Browser-based AI applications
  • Quick RAG implementations

❌ Not Ideal For:

  • Large-scale production (millions of vectors)
  • Persistent storage requirements (use with companion DB)
  • Distributed systems
  • Advanced filtering/hybrid search

For large-scale production, consider Pinecone, Weaviate, or Qdrant. But for most prototyping and small-to-medium AI agent use cases, this is all you need!


📄 License

MIT © Mohamed-3rafa


🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

🙏 Acknowledgments

Built with ❤️ for the AI agent community.

If you find this useful, please ⭐ star the repo!