📌 Ragify
Ragify is a CLI tool that lets you upload PDF documents and store their embeddings in a vector database (Pinecone or ChromaDB) for Retrieval-Augmented Generation (RAG) applications. It also provides a function that retrieves the relevant chunks from the vector database and passes them to an LLM to generate answers grounded in the uploaded document.
🚀 Features
✅ Supports Pinecone and ChromaDB as vector databases.
✅ Splits PDF documents into chunks using LangChain.
✅ Generates embeddings using OpenAI's text-embedding-3-large model.
✅ Stores embeddings in the selected vector database for efficient retrieval.
📦 Installation
Install Ragify using npm:
```
npm i ragify
```
🛠️ What This Library Provides
This package provides two key functions:
- uploadFile(filePath): Uploads a PDF file, generates embeddings, and stores them in the selected vector database.
- askQuestion(query): Retrieves relevant information from the stored embeddings and uses an LLM to generate a response.
Currently, Pinecone and ChromaDB are the supported vector databases.
🌎 Environment Variables
Before using the library, set up your .env file with the required credentials.
For Pinecone

```
DB_TYPE=pinecone
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_INDEX_NAME=your_index_name
PINECONE_ENV=your_pinecone_environment
OPENAI_API_KEY=your_open_ai_api_key
OPENAI_MODEL=your_desired_model # Optional, defaults to gpt-4
OPENAI_EMBEDDING_MODEL=your_desired_model # Optional, defaults to text-embedding-3-large
```

For ChromaDB
```
DB_TYPE=chroma
CHROMA_DB_URL=http://localhost:8000
COLLECTION_NAME=pdf_embeddings
OPENAI_API_KEY=your_open_ai_api_key
OPENAI_MODEL=your_desired_model # Optional, defaults to gpt-4
```

🚀 Usage
To run the CLI tool:
```
node cli.js
```
Follow the prompts to select a database and provide the necessary details.
Alternatively, you can use the functions in your Node.js project:
```js
import { uploadFile, askQuestion } from "ragify";

// Upload a PDF file
await uploadFile("./documents/example.pdf");

// Ask a question based on the document
const response = await askQuestion("What is the summary of the document?");
console.log(response);
```
📝 How It Works
1️⃣ User selects a vector database (Pinecone/ChromaDB).
2️⃣ User provides the necessary database details.
3️⃣ PDF file is loaded and split into chunks using LangChain.
4️⃣ Embeddings are generated using the OpenAI API.
5️⃣ Embeddings are stored in the selected vector database.
6️⃣ When a query is made, relevant embeddings are retrieved and passed through an LLM to generate a response.
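For reference, here is a minimal sketch of what steps 3️⃣–6️⃣ look like when wired up by hand with the LangChain JS packages. This illustrates the pipeline only, not ragify's actual internals; the package names, splitter settings, and Pinecone client calls are assumptions.

```js
// Illustrative only: a hand-rolled version of the ingestion + query pipeline.
// ragify's internal implementation may differ.
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { Pinecone } from "@pinecone-database/pinecone";

// 3️⃣ Load the PDF and split it into overlapping chunks
const docs = await new PDFLoader("./documents/example.pdf").load();
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
const allSplits = await splitter.splitDocuments(docs);

// 4️⃣ + 5️⃣ Embed each chunk and store the vectors in Pinecone
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-large" });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const vectorStore = await PineconeStore.fromDocuments(allSplits, embeddings, {
  pineconeIndex: pinecone.index(process.env.PINECONE_INDEX_NAME),
});

// 6️⃣ At query time: fetch the most similar chunks, then pass them to the LLM
const relevantChunks = await vectorStore.similaritySearch("What is the summary of the document?", 4);
```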
🔍 Debugging Tips
If embeddings are not being stored correctly in Pinecone:
1️⃣ Check API Key
curl -X GET "https://api.pinecone.io/v1/whoami" -H "Api-Key: ${PINECONE_API_KEY}"2️⃣ Check if Pinecone index exists
curl -X GET "https://controller.${PINECONE_ENV}.pinecone.io/databases" -H "Api-Key: ${PINECONE_API_KEY}"3️⃣ Print Loaded Document Chunks
Modify uploadFile() to inspect document chunks:
```js
console.log(allSplits[0]);
```
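For a slightly fuller check, you can also log the chunk count and a preview of the first chunk's text (allSplits here is the array of LangChain Document objects produced by the splitter, as in the snippet above):

```js
console.log(`Loaded ${allSplits.length} chunks`);
console.log(allSplits[0].pageContent.slice(0, 200)); // preview the first chunk's text
```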
🤝 Contributing
Contributions are welcome! Feel free to submit issues and pull requests to improve this library.
🆕 New Upgrades
LangChain Conversational QA Agent
A powerful conversational question-answering system built on LangGraph and LangChain that maintains conversation history and performs retrieval-augmented generation (RAG).
Features
- Persistent Conversation History: Maintains context across multiple queries
- Retrieval-Augmented Generation: Enhances responses with information from your data sources
- Customizable LLM Integration: Supports OpenAI models by default with easy configuration
- Stateful Execution: Preserves conversations between API calls
- Express API Support: Ready to integrate with Express for web applications
Installation
```
npm install langchain-conversational-qa
```
Environment Setup
Create a .env file with your OpenAI API key:
```
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4 # Optional, defaults to gpt-4
```
Quick Start
Basic Usage
```js
import { ConversationalAgent } from 'langchain-conversational-qa';

// Create a conversational agent instance
const agent = new ConversationalAgent();

// Ask questions
const response = await agent.query("What is LangChain?");
console.log(response);

// Follow-up questions maintain context
const followUp = await agent.query("Can you give me an example?");
console.log(followUp);
```
Customizing the Agent
```js
// Create an agent with custom options
const customAgent = new ConversationalAgent({
  model: "gpt-4-turbo",
  apiKey: "your-openai-api-key",
  minHistorySize: 15 // Keep at least 15 conversation turns
});
```
Express API Integration
```js
import express from 'express';
import { ConversationalAgent } from 'langchain-conversational-qa';

const app = express();
const PORT = process.env.PORT || 3000;

// Create the agent once at startup so conversation history persists across requests
const agent = new ConversationalAgent();

app.use(express.json());

app.post('/api/query', async (req, res) => {
  try {
    const { query } = req.body;
    const response = await agent.query(query);
    res.json({ answer: response });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'An error occurred' });
  }
});

app.post('/api/reset', async (req, res) => {
  try {
    await agent.resetConversation();
    res.json({ status: 'Conversation history reset' });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Failed to reset conversation' });
  }
});

app.get('/api/history', async (req, res) => {
  try {
    const history = await agent.getConversationHistory();
    res.json({ history });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Failed to retrieve conversation history' });
  }
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
```
Maintaining Conversation History
The agent automatically maintains conversation history across multiple queries as long as you're using the same agent instance. This is particularly important in server environments where you need to create the agent once, outside of your request handlers.
```js
// CORRECT: Create once at server startup
const agent = new ConversationalAgent();

// INCORRECT: Creating a new agent for each request will lose history
app.post('/api/query', async (req, res) => {
  const agent = new ConversationalAgent(); // Don't do this!
  // ...
});
```
API Reference
ConversationalAgent
Constructor
```js
new ConversationalAgent(options?)
```
Options:
- model: OpenAI model to use (default: "gpt-4", or the value of the OPENAI_MODEL env variable)
- apiKey: OpenAI API key (default: the value of the OPENAI_API_KEY env variable)
- minHistorySize: Minimum number of conversation turns to maintain (default: 10)
Methods
- query(question: string): Process a question and return the answer
- resetConversation(): Clear the conversation history
- getConversationHistory(): Get the current conversation history
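Taken together, a short sketch using only the methods listed above:

```js
import { ConversationalAgent } from 'langchain-conversational-qa';

const agent = new ConversationalAgent();

const answer = await agent.query("What is LangChain?");
console.log(answer);

// Inspect what the agent has accumulated so far
console.log(await agent.getConversationHistory());

// Wipe the history to start a fresh conversation
await agent.resetConversation();
```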
createExecutor
Creates a stateful executor function that can be used for standalone processing.
```js
import { createExecutor } from 'langchain-conversational-qa';

const executor = createExecutor();
const result = await executor("What is LangChain?");
```
How It Works
The library uses LangGraph to define a processing workflow:
- initialize_history: Sets up the conversation history
- retrieve: Fetches relevant context from your data sources
- generate: Creates a response using the LLM, context, and conversation history
The conversation history is maintained between calls by storing it within the agent instance.
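As a rough illustration, a workflow with those three nodes might be wired up like this using @langchain/langgraph's StateGraph. This is a conceptual sketch, not the library's actual source; the state shape and node bodies are placeholder assumptions.

```js
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Placeholder state: the real library's state shape may differ
const State = Annotation.Root({
  question: Annotation(),
  history: Annotation(),
  context: Annotation(),
  answer: Annotation(),
});

const workflow = new StateGraph(State)
  .addNode("initialize_history", async (state) => ({ history: state.history ?? [] }))
  .addNode("retrieve", async (state) => ({ context: [] /* fetch relevant docs here */ }))
  .addNode("generate", async (state) => ({ answer: "" /* call the LLM here */ }))
  .addEdge(START, "initialize_history")
  .addEdge("initialize_history", "retrieve")
  .addEdge("retrieve", "generate")
  .addEdge("generate", END)
  .compile();

// Invoke with an initial state
const result = await workflow.invoke({ question: "What is LangChain?" });
```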
📜 License
This project is licensed under the MIT License.
