crash-ai v1.0.8, published by Ai By Fauzi (723 downloads)
# Crash-AI: Agent RAG
Crash-AI is a library designed specifically for building RAG (Retrieval-Augmented Generation) bot agents. It simplifies the integration of OpenAI, PostgreSQL (pgvector), LangChain, and website scraping for automated bot use cases.
## Features
- ✅ Web Ingestor: Scrape data from business websites directly into the vector database.
- ✅ Document Ingestor: Extract data from business documents directly into the vector database.
- ✅ Smart RAG: Retrieve relevant information from the knowledge database.
- ✅ Multi-Agent Routing: Intelligently route user queries to the appropriate agent.
- ✅ Metadata Mapping: Automatically map product images based on keywords in the content.
- ✅ Model Context Protocol: Create your own MCP (Model Context Protocol) client that can connect to multiple servers exposing tools for execution.
- ✅ Agent: Create your own agents that execute MCP tools independently.
- ✅ History Cache: Automatically persist conversation history in Redis.
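The metadata-mapping feature above pairs knowledge chunks with image labels by matching keywords in the content. The library's actual implementation is internal; the following is only a minimal sketch of the idea, with a hypothetical `findImageLabel` helper and made-up sample data mirroring rows from the `cs_rag.agent_image` table:

```javascript
// Sketch of keyword-based image mapping (not the library's actual code).
// `masters` mirrors rows from cs_rag.agent_image: a label plus trigger keywords.
const masters = [
  { label: "deluxe_room", keywords: ["deluxe", "king bed"] },
  { label: "pool", keywords: ["pool", "swimming"] },
];

// Return the label whose keywords appear in the chunk text, or null.
function findImageLabel(content, masters) {
  const text = content.toLowerCase();
  const match = masters.find((m) =>
    m.keywords.some((kw) => text.includes(kw.toLowerCase())),
  );
  return match ? match.label : null;
}

console.log(findImageLabel("Our Deluxe rooms feature a king bed.", masters)); // "deluxe_room"
console.log(findImageLabel("Breakfast is served daily.", masters)); // null
```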
## Diagram

## Install

Requires Node.js v22 or later:
```bash
npm i crash-ai
```

## Embedding Web Content
```js
import "dotenv/config";
import { ingestWeb } from "crash-ai";
import pg from "../Config/PgDB.js";

const ApiKey = process.env.OpenKey;

// Master mapping of image labels and keywords
const { rows: masters } = await pg.query(
  "SELECT label, keywords FROM cs_rag.agent_image",
);

await ingestWeb(pg, ApiKey, masters, {
  TagSelector: "p, h1, h2, h3, h4", // HTML elements to extract
  Url: "https://", // URL to be ingested
  ChunkSize: 500, // Text chunk size
  chunkOverlap: 100, // Chunk overlap
  tableName: "cs_rag.agent_knowledge", // Destination table for the embeddings
});
```

## Embedding Documents
```js
import "dotenv/config";
import { ingestDoc } from "crash-ai";
import pg from "../Config/PgDB.js";

const ApiKey = process.env.OpenKey;

// Master mapping of image labels and keywords
const { rows: masters } = await pg.query(
  "SELECT label, keywords FROM cs_rag.agent_image",
);

await ingestDoc(pg, ApiKey, masters, {
  separators: ["***"], // Separators; alternatives: ["###", "\n\n", "\r\n", "\n", "", " "]
  File: "doc.pdf", // Document to ingest (.pdf or .txt)
  ChunkSize: 500, // Text chunk size
  chunkOverlap: 100, // Chunk overlap
  tableName: "cs_rag.agent_knowledge", // Destination table for the embeddings
});
```

## Create an Agent
```js
import { CrashAI } from "crash-ai";

export class SupportAgent extends CrashAI.CreateAgent {
  async handle(question, history) {
    const { Sequence, Passthrough } = CrashAI.Runnables;
    const { StringParser, PromptTemplateAi } = CrashAI.Utils;

    const prompt = PromptTemplateAi.fromTemplate(`
CONVERSATION HISTORY:
{chat_history}

CONTEXT (Hotel Internal Data):
{context}

IMAGE DATA:
Image Key: {img_key}

QUESTION: {question}

ANSWER RULES:

ANSWER:`);

    const retrieverInstance = await this.retriever;
    const docs = await retrieverInstance.invoke(question);

    // Keywords that trigger sending an image
    const keywords = ["foto", "picture", "see", "type"];
    const isAskingImage = keywords.some((word) =>
      question.toLowerCase().includes(word),
    );

    const docWithImage = docs.find((d) => d.metadata && d.metadata.img_key);
    const detectedImgKey = isAskingImage
      ? docWithImage?.metadata?.img_key || null
      : null;

    const contextText = docs.map((d) => d.pageContent).join("\n\n");

    const chatChain = Sequence.from([
      {
        context: () => contextText,
        question: new Passthrough(),
        chat_history: () => history,
        img_key: () => detectedImgKey,
      },
      prompt,
      this.model,
      new StringParser(),
    ]);

    const response = await chatChain.invoke(question);
    const imageMatch = response.match(/\[SHOW_IMAGE:(.*?)\]/);
    const finalAnswer = response.replace(/\[SHOW_IMAGE:.*?\]/, "").trim();

    // Standard return shape for an agent class
    return {
      answer: finalAnswer,
      imgKey: imageMatch ? imageMatch[1] : detectedImgKey || null,
    };
  }
}
```

## Create Model Context Protocol and Agents
See https://docs.langchain.com/oss/javascript/langchain/mcp for more detailed configuration information.
```js
import { CrashAI, CreatedProtocol, CreatedAgent } from "crash-ai";

const ProtocolMysql = await new CreatedProtocol(
  "mysql_server", // Server name
  "npx", // Command to execute
  [
    "-y",
    "@berthojoris/mcp-mysql-server",
    "mysql://user:password@host:3307/databasename",
    "list,read,utility,create,update,ddl",
  ],
  ["list_tables", "describe_table", "execute_query", "query"],
).use();

const AgentTransaction = new CreatedAgent(
  "Agent Booking", // Agent name
  "gpt-3.5-turbo", // Model
  ProtocolMysql, // Tools
  OpenKey, // API key
);

const ResponseAgent = await AgentTransaction.run(
  question,
  history,
  "intent",
  `
PROMPT FOR YOUR AGENT
`,
);

console.log("Transaction Agent Full Response:", ResponseAgent);
console.log(
  "Transaction Agent Response:",
  ResponseAgent.messages[ResponseAgent.messages.length - 1].content,
);

// Standard return shape for an agent class (use this inside an agent's handle method)
return {
  answer: ResponseAgent.messages[ResponseAgent.messages.length - 1].content,
  imgKey: null,
};
```

## Example
```js
import { CrashAI, HistoryStore } from "crash-ai";
import { SupportAgent } from "../Utils/SupportAgent.js";
import { ReservationAgent } from "../Utils/ReservationAgent.js";

// Initialize the Redis-backed HistoryStore
await HistoryStore.initConnection("redis://127.0.0.1:6379");

const BotAi = new CrashAI({
  ModelName: "gpt-4o-mini",
  ApiKey: process.env.OpenKey,
  Temperature: 0,
  tablename: "cs_rag.agent_knowledge",
  pool: "Pool Database", // Your PostgreSQL connection pool
  intentData: [
    "transaction",
    "information",
    "complaint",
    "cancellation",
    "question",
    "other",
  ],
  agentMapping: {
    transaction: ReservationAgent,
    information: SupportAgent,
    question: SupportAgent,
    other: SupportAgent,
  },
});

const historyStore = new HistoryStore(id_redis, "Jhondoe");
let convHistory = await historyStore.getMessages();
const history = convHistory.slice(-6); // Keep the last six messages
console.log("Conversation History:", convHistory);

const question = "Where is the location?";
const { answer, imgKey } = await BotAi.GenerateAnswer(question, history);

if (answer) {
  await HistoryStore.addMessage(id_redis, "Jhondoe", question, "User");
  await HistoryStore.addMessage(id_redis, "Jhondoe", answer, "Ai");
  console.log(answer); // The agent's answer

  if (imgKey) {
    const result = await getImageBylabel(imgKey); // Fetch the label from agent_image
    if (result) {
      console.log(result.image_url); // Image URL
    }
  }
}
```

## Database Setup
This library requires PostgreSQL with the pgvector extension. Run the following SQL commands before starting:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
CREATE SCHEMA IF NOT EXISTS cs_rag;

CREATE TABLE cs_rag.agent_knowledge (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  content TEXT,
  metadata jsonb,
  embedding vector(1536)
);

CREATE TABLE cs_rag.agent_image (
  id SERIAL PRIMARY KEY,
  label VARCHAR(255) UNIQUE,
  keywords TEXT[],
  image_url TEXT
);
```
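To illustrate how these tables are used, here is a hedged example: seeding `agent_image` with sample data and running a nearest-neighbour search with pgvector's cosine-distance operator (`<=>`). The `'[...]'` placeholder stands for a 1536-dimension query vector; this mirrors what a typical pgvector retriever runs, and is an assumption, not the exact SQL crash-ai issues internally:

```sql
-- Seed an image label with its trigger keywords (example data)
INSERT INTO cs_rag.agent_image (label, keywords, image_url)
VALUES ('deluxe_room', ARRAY['deluxe', 'king bed'], 'https://example.com/deluxe.jpg');

-- Nearest-neighbour search over the embeddings; '[...]' is a 1536-dim query vector
SELECT content, metadata
FROM cs_rag.agent_knowledge
ORDER BY embedding <=> '[...]'::vector
LIMIT 4;
```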
## Additional Notes

You don't need to learn LangChain in depth to use this library, since the tooling is already provided. Still, it helps to understand a little about the LangChain lifecycle, and the basics of Redis and PostgreSQL, which serve here as the stores for conversation history and vector data.
