
📌 Ragify

Ragify is a CLI tool that lets you upload PDF documents and store their embeddings in a vector database (Pinecone or ChromaDB) for Retrieval-Augmented Generation (RAG) applications. It also provides a function that retrieves relevant chunks from the vector database and passes them through an LLM to generate answers grounded in the uploaded document.


🚀 Features

✅ Supports Pinecone and ChromaDB as vector databases.
✅ Splits PDF documents into chunks using LangChain.
✅ Generates embeddings using OpenAI's text-embedding-3-large model.
✅ Stores embeddings in the selected vector database for efficient retrieval.


📦 Installation

Install Ragify using npm:

npm i ragify

🛠️ What This Library Provides

This package provides two key functions:

  • uploadFile(filePath): Uploads a PDF file, generates embeddings, and stores them in the selected vector database.
  • askQuestion(query): Retrieves relevant information from the stored embeddings and uses an LLM to generate a response.

Currently, Pinecone and ChromaDB are the supported vector databases.


🌎 Environment Variables

Before using the library, set up your .env file with the required credentials.

For Pinecone

DB_TYPE=pinecone
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_INDEX_NAME=your_index_name
PINECONE_ENV=your_pinecone_environment
OPENAI_API_KEY=your_open_ai_api_key
OPENAI_MODEL=your_desired_model            # optional; defaults to 'gpt-4'
OPENAI_EMBEDDING_MODEL=your_desired_model  # optional; defaults to 'text-embedding-3-large'

For ChromaDB

DB_TYPE=chroma
CHROMA_DB_URL=http://localhost:8000
COLLECTION_NAME=pdf_embeddings
OPENAI_API_KEY=your_open_ai_api_key
OPENAI_MODEL=your_desired_model            # optional; defaults to 'gpt-4'
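
These values are read from process.env, so make sure the .env file is loaded before calling the library. A minimal sketch using the dotenv package (assuming your app loads the file itself, if the library does not do it for you):

// Load .env before calling ragify functions so the credentials are available
import "dotenv/config";
import { uploadFile } from "ragify";

await uploadFile("./documents/example.pdf");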

🚀 Usage

To run the CLI tool:

node cli.js

Follow the prompts to select a database and provide the necessary details.

Alternatively, you can use the functions in your Node.js project:

import { uploadFile, askQuestion } from "ragify";

// Upload a PDF file
await uploadFile("./documents/example.pdf");

// Ask a question based on the document
const response = await askQuestion("What is the summary of the document?");
console.log(response);

📝 How It Works

1️⃣ User selects a vector database (Pinecone/ChromaDB).
2️⃣ User provides the necessary database details.
3️⃣ PDF file is loaded and split into chunks using LangChain.
4️⃣ Embeddings are generated using the OpenAI API.
5️⃣ Embeddings are stored in the selected vector database.
6️⃣ When a query is made, relevant embeddings are retrieved and passed through an LLM to generate a response.
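
As a rough illustration of steps 3️⃣ and 4️⃣, here is how a PDF can be loaded, split, and embedded with LangChain and the OpenAI API. This is a sketch of the general technique, not ragify's internal code, and the chunk sizes are assumptions:

import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";

// 3️⃣ Load the PDF and split it into overlapping chunks
const docs = await new PDFLoader("./documents/example.pdf").load();
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
const allSplits = await splitter.splitDocuments(docs);

// 4️⃣ Generate an embedding vector for each chunk
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-large" });
const vectors = await embeddings.embedDocuments(allSplits.map((d) => d.pageContent));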


🔍 Debugging Tips

If embeddings are not being stored correctly in Pinecone:

1️⃣ Check API Key

curl -X GET "https://api.pinecone.io/v1/whoami" -H "Api-Key: ${PINECONE_API_KEY}"

2️⃣ Check if Pinecone index exists

curl -X GET "https://controller.${PINECONE_ENV}.pinecone.io/databases" -H "Api-Key: ${PINECONE_API_KEY}"

3️⃣ Print Loaded Document Chunks

Modify uploadFile() to inspect document chunks:

console.log(allSplits[0]);
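
Beyond the first chunk, logging the chunk count and metadata can also help (pageContent and metadata are standard fields on LangChain documents):

console.log(`Loaded ${allSplits.length} chunks`);
console.log(allSplits[0].pageContent.slice(0, 200)); // preview the first chunk's text
console.log(allSplits[0].metadata);                  // source path and page number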

🤝 Contributing

Contributions are welcome! Feel free to submit issues and pull requests to improve this library.


🆕 New Upgrades

LangChain Conversational QA Agent

A powerful conversational question-answering system built on LangGraph and LangChain that maintains conversation history and performs retrieval-augmented generation (RAG).

Features

  • Persistent Conversation History: Maintains context across multiple queries
  • Retrieval-Augmented Generation: Enhances responses with information from your data sources
  • Customizable LLM Integration: Supports OpenAI models by default with easy configuration
  • Stateful Execution: Preserves conversations between API calls
  • Express API Support: Ready to integrate with Express for web applications

Installation

npm install langchain-conversational-qa

Environment Setup

Create a .env file with your OpenAI API key:

OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4  # Optional, defaults to gpt-4

Quick Start

Basic Usage

import { ConversationalAgent } from 'langchain-conversational-qa';

// Create a conversational agent instance
const agent = new ConversationalAgent();

// Ask questions
const response = await agent.query("What is LangChain?");
console.log(response);

// Follow-up questions maintain context
const followUp = await agent.query("Can you give me an example?");
console.log(followUp);

Customizing the Agent

// Create an agent with custom options
const customAgent = new ConversationalAgent({
  model: "gpt-4-turbo",
  apiKey: "your-openai-api-key",
  minHistorySize: 15  // Keep at least 15 conversation turns
});

Express API Integration

import express from 'express';
import { ConversationalAgent } from 'langchain-conversational-qa';

const app = express();
const PORT = process.env.PORT || 3000;
const agent = new ConversationalAgent();

app.use(express.json());

app.post('/api/query', async (req, res) => {
  try {
    const { query } = req.body;
    const response = await agent.query(query);
    res.json({ answer: response });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'An error occurred' });
  }
});

app.post('/api/reset', async (req, res) => {
  try {
    await agent.resetConversation();
    res.json({ status: 'Conversation history reset' });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Failed to reset conversation' });
  }
});

app.get('/api/history', async (req, res) => {
  try {
    const history = await agent.getConversationHistory();
    res.json({ history });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Failed to retrieve conversation history' });
  }
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Maintaining Conversation History

The agent automatically maintains conversation history across multiple queries as long as you're using the same agent instance. This is particularly important in server environments where you need to create the agent once, outside of your request handlers.

// CORRECT: Create once at server startup
const agent = new ConversationalAgent();

// INCORRECT: Creating a new agent for each request will lose history
app.post('/api/query', async (req, res) => {
  const agent = new ConversationalAgent(); // Don't do this!
  // ...
});
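
If your server handles multiple users, a natural extension of this rule is to keep one agent per session. A hypothetical sketch (how you derive sessionId is up to your app):

// Hypothetical per-session registry: each session id gets its own agent,
// so every user keeps an independent conversation history
const agents = new Map();

function agentFor(sessionId) {
  if (!agents.has(sessionId)) {
    agents.set(sessionId, new ConversationalAgent());
  }
  return agents.get(sessionId);
}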

API Reference

ConversationalAgent

Constructor

new ConversationalAgent(options?)

Options:

  • model: OpenAI model to use (default: "gpt-4" or value from OPENAI_MODEL env variable)
  • apiKey: OpenAI API key (default: value from OPENAI_API_KEY env variable)
  • minHistorySize: Minimum conversation turns to maintain (default: 10)

Methods

  • query(question: string): Process a question and return the answer
  • resetConversation(): Clear conversation history
  • getConversationHistory(): Get the current conversation history
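
For example:

const answer = await agent.query("What is LangChain?");

const history = await agent.getConversationHistory(); // inspect the stored turns
console.log(history);

await agent.resetConversation(); // clear the history and start fresh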

createExecutor

Creates a stateful executor function that can be used for standalone processing.

import { createExecutor } from 'langchain-conversational-qa';

const executor = createExecutor();
const result = await executor("What is LangChain?");
console.log(result);

How It Works

The library uses LangGraph to define a processing workflow:

  1. initialize_history: Sets up the conversation history
  2. retrieve: Fetches relevant context from your data sources
  3. generate: Creates a response using the LLM, context, and conversation history

The conversation history is maintained between calls by storing it within the agent instance.
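
A rough sketch of such a three-node graph in LangGraph (illustrative only: the state channels, stand-in retriever, and prompt are assumptions, not the library's actual internals):

import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: process.env.OPENAI_MODEL ?? "gpt-4" });

// Stand-in retriever; the real workflow would query your data sources
const retrieveDocs = async (question) => `(context relevant to: ${question})`;

// Assumed state shape mirroring the three steps described above
const QAState = Annotation.Root({
  question: Annotation(),
  history: Annotation(),
  context: Annotation(),
  answer: Annotation(),
});

const workflow = new StateGraph(QAState)
  .addNode("initialize_history", async (state) => ({ history: state.history ?? [] }))
  .addNode("retrieve", async (state) => ({ context: await retrieveDocs(state.question) }))
  .addNode("generate", async (state) => {
    const reply = await llm.invoke(`Context:\n${state.context}\n\nQuestion: ${state.question}`);
    return { answer: reply.content, history: [...state.history, state.question, reply.content] };
  })
  .addEdge(START, "initialize_history")
  .addEdge("initialize_history", "retrieve")
  .addEdge("retrieve", "generate")
  .addEdge("generate", END);

const app = workflow.compile();
const result = await app.invoke({ question: "What is LangChain?" });
console.log(result.answer);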

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


📜 License

This project is licensed under the MIT License.