# rag-api

v1.3.1

A simple TypeScript/Node.js package for building **RAG (Retrieval-Augmented Generation)** pipelines with [LangChain](https://js.langchain.com/), [Pinecone](https://www.pinecone.io/), and OpenAI models.
It provides two main functions:

- `createEmbeddings`: preprocesses and stores book documents (PDF) in a Pinecone vector store.
- `buildChat`: creates a conversational retrieval QA chain with memory, an LLM, and a retriever for chatting with the ingested book.
## 🚀 Installation

```bash
npm install rag-api
```

or with Yarn:

```bash
yarn add rag-api
```

## 📦 Exports
### 1. `createEmbeddings(args: CreateEmbeddingsArgs)`

Loads a PDF, splits it into chunks, and stores embeddings in Pinecone.

Arguments (`CreateEmbeddingsArgs`):

```ts
interface CreateEmbeddingsArgs {
  bookId: number;   // unique identifier for the book
  bookPath: string; // local path to the PDF file
}
```

Example:
```ts
import { createEmbeddings } from "rag-api";

await createEmbeddings({
  bookId: 1,
  bookPath: "./books/mybook.pdf",
});
```

### 2. `buildChat(args: ChatArgs)`
Creates a conversational retrieval QA chain for querying the book.
Arguments (`ChatArgs`):

```ts
interface ChatArgs {
  conversationId: number;        // unique conversation/session ID
  llmTemperature: number;        // temperature for LLM responses
  bookId: number;                // ID of the book previously embedded
  streaming: boolean;            // enable/disable streaming responses
  databaseUtils?: DatabaseUtils; // optional DB utils for persisting messages
}

interface DatabaseUtils {
  createMessage: (message: IMessage, conversationId: number) => any;
  getMessagesByConversationId: (
    conversationId: number
  ) => Promise<(AIMessage | HumanMessage | SystemMessage)[]>;
}
```

Example:
```ts
import { buildChat } from "rag-api";

const chatChain = buildChat({
  conversationId: 42,
  llmTemperature: 0.7,
  bookId: 1,
  streaming: false,
});

// later, you can use the chain to ask questions
const response = await chatChain.call({ question: "Summarize chapter 2" });
console.log(response);
```

## ⚡ Requirements
- OpenAI API Key (used by `ChatOpenAI`)
- Pinecone API Key & Environment (used for storing embeddings)

Make sure you set these environment variables:

```bash
export OPENAI_API_KEY="your_openai_api_key"
export PINECONE_API_KEY="your_pinecone_api_key"
export PINECONE_ENVIRONMENT="your_environment"
```

## 📖 Workflow
- Run `createEmbeddings` with a PDF to index it into Pinecone.
- Use `buildChat` with the same `bookId` to start a conversational QA session over the indexed content.
- Optionally provide `databaseUtils` to persist conversations in your own database.
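As a starting point for that last step, here is a minimal in-memory `databaseUtils` sketch. The README does not pin down the shape of `IMessage` or how messages are rehydrated, so the `IMessage` interface and the `HumanMessage`/`AIMessage` classes below are simplified stand-ins (in a real app you would import the message classes from LangChain and swap the `Map` for actual database calls):

```ts
// Simplified stand-ins for LangChain's message classes and the package's
// IMessage type — assumptions for illustration, not the package's real types.
class HumanMessage { constructor(public content: string) {} }
class AIMessage { constructor(public content: string) {} }

interface IMessage {
  role: "human" | "ai";
  content: string;
}

// In-memory DatabaseUtils: conversation histories keyed by conversation ID.
function makeInMemoryDatabaseUtils() {
  const store = new Map<number, IMessage[]>();

  return {
    // Persist one message under its conversation.
    createMessage(message: IMessage, conversationId: number) {
      const history = store.get(conversationId) ?? [];
      history.push(message);
      store.set(conversationId, history);
    },
    // Rehydrate stored rows into LangChain-style message objects.
    async getMessagesByConversationId(conversationId: number) {
      return (store.get(conversationId) ?? []).map((m) =>
        m.role === "human" ? new HumanMessage(m.content) : new AIMessage(m.content)
      );
    },
  };
}
```

The returned object can then be passed as the `databaseUtils` argument to `buildChat`, replacing the `Map` with your own persistence layer.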
## 🛠 Development

Clone the repo and install dependencies:

```bash
git clone https://github.com/yourusername/rag-api.git
cd rag-api
npm install
```

Build:

```bash
npm run build
```

## 📄 License

MIT
