@m6d/cortex-server v1.7.0
Reusable AI agent chat server library for Hono + Bun
# @m6d/cortex-server

Multi-agent AI chat server built on Hono + Bun. Supports MSSQL persistence, MinIO file storage, Neo4j knowledge graphs, and AI streaming with tool execution.
## Usage

```ts
import { createCortex } from "@m6d/cortex-server";

const cortex = createCortex({
  database: {
    type: "mssql",
    connectionString: process.env.DB_CONNECTION_STRING!,
  },
  storage: {
    endPoint: "localhost",
    port: 9000,
    useSSL: false,
    accessKey: "minioadmin",
    secretKey: "minioadmin",
  },
  auth: {
    jwksUri: "https://your-auth-provider/.well-known/jwks.json",
    issuer: "https://your-auth-provider/",
  },
  model: {
    baseURL: "https://api.openai.com/v1",
    apiKey: process.env.MODEL_KEY!,
    modelName: "gpt-4o",
  },
  embedding: {
    baseURL: "https://api.openai.com/v1",
    apiKey: process.env.EMBEDDING_KEY!,
    modelName: "text-embedding-3-small",
    dimension: 1536,
  },
  neo4j: {
    url: "bolt://localhost:7687",
    user: "neo4j",
    password: "password",
  },
  agents: {
    assistant: {
      systemPrompt: "You are a helpful assistant.",
      tools: {},
    },
  },
});

const server = await cortex.serve();

export default {
  fetch: server.fetch,
  websocket: server.websocket,
};
```

Note that `cortex.serve()` is async; it runs database migrations on startup.
Each key in `agents` becomes a route prefix (e.g. `assistant` → `/agents/assistant/...`). Agents can define a per-agent `systemPrompt`, `tools`, `backendFetch`, `loadSessionData`, `resolveRequestContext`, and lifecycle hooks (`onToolCall`, `onStreamFinish`).
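As a rough sketch of a fuller agent definition: the option names (`systemPrompt`, `tools`, `onToolCall`, `onStreamFinish`) come from the list above, but the tool and hook signatures shown here are illustrative assumptions, not the library's confirmed API.

```typescript
// Hypothetical shapes; the real @m6d/cortex-server signatures may differ.
type ToolCall = { toolName: string; args: unknown };

const searchAgent = {
  systemPrompt: "You answer questions, using the search tool when needed.",
  tools: {
    // A tool is assumed here to be a named async function the model can invoke.
    search: async ({ query }: { query: string }) => {
      return { results: [`stub result for "${query}"`] };
    },
  },
  // Lifecycle hooks, named as in the docs above; signatures are assumptions.
  onToolCall: (call: ToolCall) => {
    console.log(`tool invoked: ${call.toolName}`);
  },
  onStreamFinish: () => {
    console.log("stream complete");
  },
};

// Each key under `agents` becomes a route prefix, so this agent
// would be served under /agents/search/...
const agents = { search: searchAgent };
```

Under this reading, adding a second key to `agents` would expose a second, independently configured agent under its own route prefix.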
## Requirements

- Bun >= 1.0.0
