DOOOR AI Toolkit
Guards, Evals & Observability for AI Applications
An all-in-one framework for securing, evaluating, and monitoring AI applications. Works seamlessly with LangChain/LangGraph.
Installation
npm install @dooor-ai/toolkit
Requires @langchain/core (0.3.x or 1.x) as a peer dependency.
What's New in v0.1.61
- Documented logTrace function: comprehensive docs for manual trace logging without LangChain; works with the OpenAI SDK, Anthropic SDK, Gemini SDK, or any custom HTTP LLM call
- Session support examples: how to group traces into conversations using sessionId
- TraceData reference table: full field-by-field documentation
Quick Start
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { dooorChatGuard } from "@dooor-ai/toolkit";
import { PromptInjectionGuard, ToxicityGuard } from "@dooor-ai/toolkit/guards";
import { LatencyEval, CostEval } from "@dooor-ai/toolkit/evals";
// 1. Create your LangChain provider normally
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
temperature: 0,
});
// 2. Instrument with DOOOR Toolkit
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_api_key@your-host:8000/my_evals",
providerName: "gemini",
guards: [
new PromptInjectionGuard({ threshold: 0.8 }),
new ToxicityGuard({ threshold: 0.7 })
],
evals: [
new LatencyEval({ threshold: 3000 }),
new CostEval({ budgetLimitUsd: 0.10 })
],
observability: true,
});
// 3. Use normally - Guards + Evals work automatically
const response = await llm.invoke("What is the capital of France?");
Features
Guards (Pre-execution)
Protect your LLM from malicious inputs before they reach the model:
- PromptInjectionGuard - Detects jailbreak attempts and prompt injection
- ToxicityGuard - Blocks toxic/offensive content via AI moderation
- PIIGuard - Detects and masks personal information
Evals (Post-execution)
Evaluate response quality automatically:
- LatencyEval - Track response time and alert on slow responses
- CostEval - Monitor costs and alert when budget limits are exceeded
- RelevanceEval - Measure answer relevance (coming soon)
- HallucinationEval - Detect hallucinations (coming soon)
Observability
Full visibility into your LLM calls:
- Automatic tracing with unique trace IDs
- Latency and token tracking
- Cost estimation per request
- Logging of guard and eval results
- CortexDB integration for persistent storage
Manual Trace Logging (logTrace)
If you're not using LangChain or need to log traces from custom LLM calls (raw HTTP, OpenAI SDK, Anthropic SDK, etc.), use the logTrace function to send traces directly to CortexDB Harbor.
Setup
import { configureCortexDBFromConnectionString, logTrace } from "@dooor-ai/toolkit";
// Configure once at app startup
configureCortexDBFromConnectionString(
"cortexdb://YOUR_API_KEY@YOUR_HOST:8000/YOUR_DATABASE"
);
Or use an environment variable:
configureCortexDBFromConnectionString(process.env.CORTEXDB_CONNECTION!);
Basic Usage
import { logTrace } from "@dooor-ai/toolkit";
// After any LLM call, log the trace:
const startedAt = Date.now();
const response = await callYourLLM(prompt);
const latency = Date.now() - startedAt;
await logTrace(
{
input: prompt,
output: response.text,
model: "gpt-4o",
latency,
tokens: {
prompt: response.usage.prompt_tokens,
completion: response.usage.completion_tokens,
total: response.usage.total_tokens,
},
timestamp: new Date(),
},
{
project: "my-app", // Shows up in Harbor dashboard
}
);
With OpenAI SDK
import OpenAI from "openai";
import { configureCortexDBFromConnectionString, logTrace } from "@dooor-ai/toolkit";
configureCortexDBFromConnectionString(process.env.CORTEXDB_CONNECTION!);
const openai = new OpenAI();
async function chat(userMessage: string) {
const startedAt = Date.now();
const completion = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: userMessage }],
});
const output = completion.choices[0]?.message?.content || "";
await logTrace(
{
input: userMessage,
output,
model: "gpt-4o",
latency: Date.now() - startedAt,
tokens: {
prompt: completion.usage?.prompt_tokens || 0,
completion: completion.usage?.completion_tokens || 0,
total: completion.usage?.total_tokens || 0,
},
timestamp: new Date(),
},
{ project: "my-openai-app" }
);
return output;
}
With Anthropic SDK
import Anthropic from "@anthropic-ai/sdk";
import { configureCortexDBFromConnectionString, logTrace } from "@dooor-ai/toolkit";
configureCortexDBFromConnectionString(process.env.CORTEXDB_CONNECTION!);
const anthropic = new Anthropic();
async function chat(userMessage: string) {
const startedAt = Date.now();
const message = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: userMessage }],
});
const output = message.content[0].type === "text" ? message.content[0].text : "";
await logTrace(
{
input: userMessage,
output,
model: "claude-sonnet-4-20250514",
latency: Date.now() - startedAt,
tokens: {
prompt: message.usage.input_tokens,
completion: message.usage.output_tokens,
total: message.usage.input_tokens + message.usage.output_tokens,
},
timestamp: new Date(),
},
{ project: "my-anthropic-app" }
);
return output;
}
With Google Gemini SDK
import { GoogleGenerativeAI } from "@google/generative-ai";
import { configureCortexDBFromConnectionString, logTrace } from "@dooor-ai/toolkit";
configureCortexDBFromConnectionString(process.env.CORTEXDB_CONNECTION!);
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
async function chat(userMessage: string) {
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });
const startedAt = Date.now();
const result = await model.generateContent(userMessage);
const output = result.response.text();
const usage = result.response.usageMetadata;
await logTrace(
{
input: userMessage,
output,
model: "gemini-2.0-flash",
latency: Date.now() - startedAt,
tokens: {
prompt: usage?.promptTokenCount || 0,
completion: usage?.candidatesTokenCount || 0,
total: usage?.totalTokenCount || 0,
},
timestamp: new Date(),
},
{ project: "my-gemini-app" }
);
return output;
}
With Sessions (Conversations)
Group related traces into a conversation using sessionId:
import { v4 as uuidv4 } from "uuid";
import { logTrace } from "@dooor-ai/toolkit";
const sessionId = uuidv4(); // One per conversation
// First message
await logTrace(
{
input: "Hello!",
output: "Hi there! How can I help?",
model: "gpt-4o",
latency: 230,
sessionId,
timestamp: new Date(),
},
{ project: "my-chatbot" }
);
// Second message (same session)
await logTrace(
{
input: "What's the weather?",
output: "I don't have access to real-time weather data.",
model: "gpt-4o",
latency: 180,
sessionId,
timestamp: new Date(),
},
{ project: "my-chatbot" }
);
TraceData Reference
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| input | string | Yes | The prompt/input sent to the LLM |
| output | string | No | The LLM response |
| model | string | Yes | Model name (e.g., "gpt-4o", "claude-sonnet-4-20250514") |
| timestamp | Date | Yes | When the call was made |
| latency | number | No | Response time in milliseconds |
| tokens | object | No | Token counts: { prompt, completion, total } |
| cost | number | No | Estimated cost in USD |
| traceId | string | No | Unique trace ID (auto-generated if omitted) |
| sessionId | string | No | Session/conversation ID for grouping traces |
| toolCalls | array | No | Tool/function calls made during execution |
LogTraceOptions
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| project | string | undefined | Project name shown in Harbor dashboard |
| enabled | boolean | true | Set to false to disable logging |
| backend | ObservabilityBackend | Auto-detected | Custom backend (CortexDB or Console) |
| collector | ObservabilityCollector | Auto-created | Reuse an existing collector instance |
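Taken together, the two tables correspond roughly to the shapes below (a sketch inferred from the tables; the package's exported types are authoritative, and the backend/collector fields are omitted here):

```typescript
// Sketch of the shapes implied by the TraceData and LogTraceOptions tables.
interface TokenUsage {
  prompt: number;
  completion: number;
  total: number;
}

interface TraceData {
  input: string;           // required
  model: string;           // required
  timestamp: Date;         // required
  output?: string;
  latency?: number;        // milliseconds
  tokens?: TokenUsage;
  cost?: number;           // USD
  traceId?: string;        // auto-generated if omitted
  sessionId?: string;      // groups traces into a conversation
  toolCalls?: unknown[];
}

interface LogTraceOptions {
  project?: string;        // shown in the Harbor dashboard
  enabled?: boolean;       // defaults to true
}

const example: TraceData = {
  input: "Hello!",
  model: "gpt-4o",
  timestamp: new Date(),
  latency: 230,
};
```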
Fallback Behavior
If the trace fails to send to CortexDB (network error, misconfiguration), logTrace automatically falls back to console logging so you never lose trace data.
Provider Support
Works with ANY LangChain provider:
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { dooorChatGuard } from "@dooor-ai/toolkit";
// OpenAI
const openai = dooorChatGuard(new ChatOpenAI({...}), toolkitConfig);
// Anthropic
const claude = dooorChatGuard(new ChatAnthropic({...}), toolkitConfig);
// Google
const gemini = dooorChatGuard(new ChatGoogleGenerativeAI({...}), toolkitConfig);
LangGraph Integration
import { StateGraph } from "@langchain/langgraph";
import { dooorChatGuard } from "@dooor-ai/toolkit";
const llm = dooorChatGuard(baseProvider, toolkitConfig);
const workflow = new StateGraph(...)
.addNode("agent", async (state) => {
const response = await llm.invoke(state.messages);
return { messages: [response] };
});
// Guards + Evals work automatically via callbacks
Configuration
Toolkit Config
interface ToolkitConfig {
// CortexDB connection string (optional, for AI proxy and observability)
apiKey?: string;
// AI Provider name from CortexDB Studio (optional, for AI-based guards)
providerName?: string;
// Guards to apply (run before LLM)
guards?: Guard[];
// Evals to apply (run after LLM)
evals?: Eval[];
// Output guards (validate LLM output)
outputGuards?: Guard[];
// Enable observability (default: true)
observability?: boolean;
// Eval execution mode: "async" | "sync" | "sample"
evalMode?: string;
// Sample rate for evals (0-1, default: 1.0)
evalSampleRate?: number;
// Guard failure mode: "throw" | "return_error" | "log_only"
guardFailureMode?: string;
// Project name for tracing
project?: string;
}
Guard Configuration
new PromptInjectionGuard({
threshold: 0.8,
blockOnDetection: true,
enabled: true,
})
new ToxicityGuard({
threshold: 0.7,
providerName: "gemini", // Optional: override global providerName
categories: ["hate", "violence", "sexual", "harassment"],
})
new PIIGuard({
detectTypes: ["email", "phone", "cpf", "credit_card", "ssn"],
action: "mask", // "mask" | "block" | "warn"
})
Eval Configuration
new LatencyEval({
threshold: 3000, // Alert if > 3s
})
new CostEval({
budgetLimitUsd: 0.10, // Alert if > $0.10
alertOnExceed: true,
})
CortexDB Integration
For the best experience, use with CortexDB:
Benefits:
- AI Provider Proxy (no API keys in your code for guards)
- Centralized AI Provider management
- Automatic trace storage in Postgres
- Dashboard UI for observability
- Self-hosted and open-source
Connection String Format:
cortexdb://api_key@host[:port]/database
If the port is omitted, the SDK defaults to HTTPS for non-local hosts.
Example:
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://[email protected]:8000/my_app",
providerName: "gemini", // Configured in CortexDB Studio
guards: [new ToxicityGuard()], // Uses Gemini via CortexDB proxy
});
RAG (Retrieval-Augmented Generation)
The toolkit provides ephemeral RAG capabilities via CortexDB for ad-hoc document retrieval without creating permanent collections.
Quick Start
import { RAGContext, RAGStrategy, buildRAGPrompt } from "@dooor-ai/toolkit/rag";
import { dooorChatGuard } from "@dooor-ai/toolkit";
// Create RAG context with documents
const ragContext = new RAGContext({
documents: [
{
content: "NestJS authentication guide: Use @nestjs/passport with JWT...",
metadata: { source: "docs" }
},
{
content: "Database setup: Install @prisma/client and configure...",
metadata: { source: "tutorial" }
}
],
embeddingProvider: "prod-gemini", // From CortexDB Studio
strategy: RAGStrategy.SIMPLE,
topK: 3,
chunkSize: 500,
chunkOverlap: 100,
});
// Build prompt with RAG context
const userQuery = "How to authenticate users in NestJS?";
const promptWithContext = await buildRAGPrompt(userQuery, ragContext);
// Use with your LLM
const llm = dooorChatGuard(baseProvider, {...});
const response = await llm.invoke(promptWithContext);
RAG Strategies
1. SIMPLE (Default - Fastest)
Direct semantic search using cosine similarity.
- ⚡ Speed: Fastest (1 embedding)
- 💰 Cost: Lowest
- 🎯 Best for: Direct questions, technical docs, FAQs
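For reference, the cosine-similarity ranking at the heart of SIMPLE can be sketched as follows (illustrative; the toolkit performs retrieval for you):

```typescript
// Cosine similarity: dot product of the vectors over the product of their norms.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunks by similarity to the query embedding and keep the topK.
function topKChunks(
  queryEmbedding: number[],
  chunks: { text: string; embedding: number[] }[],
  topK: number,
) {
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```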
strategy: RAGStrategy.SIMPLE
2. HYDE (Hypothetical Document Embeddings)
Generates a hypothetical answer first, then searches for similar chunks.
- ⚡ Speed: Medium (1 LLM call + 2 embeddings)
- 💰 Cost: Medium
- 🎯 Best for: Complex queries, conceptual questions
How it works:
- LLM generates hypothetical answer to your query
- Embeds the hypothetical answer (not the query!)
- Searches for chunks similar to the answer
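The three steps can be sketched with stand-in llm/embed/search functions (hypothetical signatures invented for this sketch; the toolkit performs these steps internally):

```typescript
// Sketch of the HyDE flow. Llm/Embed/Search are stand-in types, not toolkit APIs.
type Llm = (prompt: string) => string;
type Embed = (text: string) => number[];
type Search = (embedding: number[]) => string[];

function hydeRetrieve(query: string, llm: Llm, embed: Embed, search: Search): string[] {
  // 1. Generate a hypothetical answer to the query.
  const hypotheticalAnswer = llm(`Answer briefly: ${query}`);
  // 2. Embed the hypothetical answer, not the query itself.
  const vector = embed(hypotheticalAnswer);
  // 3. Retrieve chunks similar to that answer.
  return search(vector);
}
```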
strategy: RAGStrategy.HYDE
Example:
Query: "How to authenticate users?"
↓
LLM: "To authenticate users, use JWT tokens with passport..."
↓
Embed this answer → Search similar chunks
3. RERANK (LLM-based Re-ranking)
Retrieves more candidates, then uses LLM to rerank by relevance.
- ⚡ Speed: Slower (1 embedding + 1 LLM rerank)
- 💰 Cost: Medium
- 🎯 Best for: Maximum precision, ambiguous queries
How it works:
- Semantic search returns top_k × 3 candidates
- LLM analyzes all and ranks by relevance
- Returns top_k most relevant
strategy: RAGStrategy.RERANK
4. FUSION (SIMPLE + HYDE Combined)
Runs SIMPLE and HYDE in parallel, combines results using Reciprocal Rank Fusion.
- ⚡ Speed: Medium (parallel execution)
- 💰 Cost: Highest (combines both strategies)
- 🎯 Best for: Critical queries, maximum quality
How it works:
- Runs SIMPLE and HYDE simultaneously
- Combines using RRF:
score = 1/(rank_simple + 60) + 1/(rank_hyde + 60)
- Returns top_k with highest fusion scores
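The RRF combination described above can be sketched as a small scoring function (illustrative; the toolkit computes this server-side):

```typescript
// Reciprocal Rank Fusion: score = 1/(rank_simple + 60) + 1/(rank_hyde + 60),
// using 1-based ranks. A chunk absent from one ranking simply contributes
// nothing from that side.
function rrfFuse(simpleRanking: string[], hydeRanking: string[], topK: number, k = 60): string[] {
  const scores = new Map<string, number>();
  const addRanks = (ranking: string[]) => {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (i + 1 + k));
    });
  };
  addRanks(simpleRanking);
  addRanks(hydeRanking);
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id)
    .slice(0, topK);
}
```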
strategy: RAGStrategy.FUSION
Strategy Comparison
| Strategy | Speed | Cost | Precision | Best For |
|----------|-------|------|-----------|----------|
| SIMPLE | ⚡⚡⚡ | 💰 | ⭐⭐⭐ | Direct questions |
| HYDE | ⚡⚡ | 💰💰 | ⭐⭐⭐⭐ | Complex queries |
| RERANK | ⚡ | 💰💰 | ⭐⭐⭐⭐⭐ | Maximum precision |
| FUSION | ⚡⚡ | 💰💰💰 | ⭐⭐⭐⭐⭐ | Critical queries |
RAG with Files
const ragContext = new RAGContext({
files: [
{
name: "manual.pdf",
data: base64EncodedPDF,
type: "application/pdf"
}
],
embeddingProvider: "prod-gemini",
strategy: RAGStrategy.HYDE,
topK: 5,
});
Supported file types: PDF, DOCX, MD, TXT
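RAGContext's chunkSize and chunkOverlap options (see the RAG Quick Start config) control how document and file text is split before embedding. A character-based splitter consistent with those options might look like this (a sketch; the toolkit's actual splitter may differ):

```typescript
// Illustrative character-based chunker: consecutive chunks of chunkSize
// characters, each starting chunkOverlap characters before the previous
// chunk ended.
function chunkText(text: string, chunkSize: number, chunkOverlap: number): string[] {
  if (chunkOverlap >= chunkSize) {
    throw new Error("chunkOverlap must be smaller than chunkSize");
  }
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```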
RAG Observability
All RAG calls are automatically logged to CortexDB:
- Embedding tokens used
- Chunks retrieved vs total
- Strategy used
- Timing breakdown (parse, embedding, search)
- Similarity scores
View in CortexDB Studio → Observability → RAG tab
Real-World Examples
NestJS Integration
Perfect for building AI-powered APIs with guards, evals, and RAG:
import { Injectable, Logger } from '@nestjs/common';
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import {
dooorChatGuard,
PromptInjectionGuard,
ToxicityGuard,
LatencyEval,
AnswerRelevancyEval,
RAGContext,
RAGStrategy,
buildRAGPrompt,
} from "@dooor-ai/toolkit";
@Injectable()
export class AIService {
private readonly logger = new Logger(AIService.name);
/**
* Simple LLM with Guards + Evals
*/
async askQuestion(question: string) {
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
temperature: 0,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
project: "my-api",
guards: [
new PromptInjectionGuard({ threshold: 0.8 }),
new ToxicityGuard({ threshold: 0.7 }),
],
evals: [
new LatencyEval({ threshold: 3000 }),
new AnswerRelevancyEval({ threshold: 0.7 }),
],
observability: true,
});
const result = await llm.invoke([
{ role: "user", content: question }
]);
return result;
}
/**
* LangGraph Agent with Tools
*/
async runAgent(userMessage: string) {
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
temperature: 0,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
project: "agent-api",
guards: [new PromptInjectionGuard()],
evals: [new LatencyEval({ threshold: 5000 })],
observability: true,
});
const agent = createReactAgent({
llm: llm,
tools: [], // Your tools here
prompt: `You are a helpful assistant.`,
});
const result = await agent.invoke({
messages: [{ role: "user", content: userMessage }]
});
return result;
}
/**
* RAG with Documents (No files needed)
*/
async ragWithDocuments(query: string) {
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
temperature: 0,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
project: "rag-api",
guards: [new PromptInjectionGuard()],
evals: [new LatencyEval({ threshold: 5000 })],
observability: true,
});
// Create RAG context with plain text documents
const ragContext = new RAGContext({
documents: [
{
content: `
NestJS Authentication Guide:
To authenticate users in NestJS:
1. Install @nestjs/passport and passport-jwt
2. Create an AuthModule with JwtStrategy
3. Use @UseGuards(JwtAuthGuard) on protected routes
4. Store JWT token in Authorization header as "Bearer <token>"
Example:
@Post('login')
async login(@Body() loginDto: LoginDto) {
const user = await this.authService.validateUser(loginDto);
return this.authService.generateJwt(user);
}
`,
metadata: { source: 'nestjs-auth-docs' }
},
{
content: `
Database Configuration in NestJS:
Use Prisma for type-safe database access:
1. Install @prisma/client
2. Define schema in prisma/schema.prisma
3. Run npx prisma migrate dev
4. Inject PrismaService in your services
`,
metadata: { source: 'nestjs-database-docs' }
},
],
embeddingProvider: "prod-gemini",
strategy: RAGStrategy.SIMPLE,
topK: 3,
chunkSize: 500,
chunkOverlap: 100,
});
// Build prompt with RAG context
const promptWithContext = await buildRAGPrompt(query, ragContext);
this.logger.log(`🔍 RAG Query: ${query}`);
this.logger.log(`📄 Processing ${ragContext.documents.length} documents`);
const result = await llm.invoke([
{ role: "user", content: promptWithContext }
]);
this.logger.log('✅ RAG Response received!');
return result;
}
/**
* RAG with PDF File
*/
async ragWithPdf(query: string, pdfPath: string) {
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
temperature: 0,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
project: "rag-pdf-api",
guards: [],
evals: [new LatencyEval({ threshold: 10000 })],
observability: true,
});
// Read PDF file
const fs = require('fs').promises;
const pdfBuffer = await fs.readFile(pdfPath);
this.logger.log(`📄 PDF loaded: ${pdfPath} (${pdfBuffer.length} bytes)`);
// Create RAG context with PDF
const ragContext = new RAGContext({
files: [
{
name: "manual.pdf",
data: pdfBuffer,
type: "application/pdf"
}
],
embeddingProvider: "prod-gemini",
strategy: RAGStrategy.HYDE, // HyDE for complex queries
topK: 5,
chunkSize: 1000,
chunkOverlap: 200,
});
const promptWithContext = await buildRAGPrompt(query, ragContext);
this.logger.log(`🔍 RAG Query: ${query}`);
this.logger.log('📊 Strategy: HyDE');
const result = await llm.invoke([
{ role: "user", content: promptWithContext }
]);
this.logger.log('✅ RAG with PDF completed!');
return result;
}
/**
* RAG with FUSION Strategy (Best Quality)
*/
async ragFusion(query: string) {
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
temperature: 0,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
project: "rag-fusion-api",
guards: [],
evals: [new LatencyEval({ threshold: 15000 })],
observability: true,
});
const ragContext = new RAGContext({
documents: [
{
content: "Microservices architecture divides applications into small, independent services.",
metadata: { source: "microservices-101" }
},
{
content: "Monolithic architecture keeps all application logic in a single codebase.",
metadata: { source: "monolith-101" }
},
{
content: "Service mesh provides observability and traffic management for microservices.",
metadata: { source: "service-mesh-guide" }
}
],
embeddingProvider: "prod-gemini",
strategy: RAGStrategy.FUSION, // FUSION = SIMPLE + HYDE combined
topK: 5,
});
const promptWithContext = await buildRAGPrompt(query, ragContext);
this.logger.log(`🔍 RAG Query: ${query}`);
this.logger.log('📊 Strategy: FUSION (SIMPLE + HYDE combined)');
const result = await llm.invoke([
{ role: "user", content: promptWithContext }
]);
this.logger.log('✅ RAG with FUSION strategy completed!');
return result;
}
}
Express.js API
import express from 'express';
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { dooorChatGuard, PromptInjectionGuard, LatencyEval } from "@dooor-ai/toolkit";
const app = express();
app.use(express.json());
app.post('/api/chat', async (req, res) => {
const { message } = req.body;
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
guards: [new PromptInjectionGuard()],
evals: [new LatencyEval({ threshold: 3000 })],
observability: true,
});
try {
const result = await llm.invoke([{ role: "user", content: message }]);
res.json({ response: result.content });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.listen(3000, () => console.log('Server running on port 3000'));
Standalone Script
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { dooorChatGuard, PromptInjectionGuard, LatencyEval } from "@dooor-ai/toolkit";
async function main() {
const baseProvider = new ChatGoogleGenerativeAI({
model: "gemini-2.0-flash-exp",
apiKey: process.env.GEMINI_API_KEY,
});
const llm = dooorChatGuard(baseProvider, {
apiKey: "cortexdb://your_key@host:8000/my_db",
providerName: "gemini",
guards: [new PromptInjectionGuard()],
evals: [new LatencyEval({ threshold: 3000 })],
observability: true,
});
const result = await llm.invoke([
{ role: "user", content: "What is the capital of France?" }
]);
console.log(result.content);
}
main();
Additional Examples
See examples/ directory in the repository:
- basic-usage.ts - Complete example with all features
- multi-provider.ts - Using different LangChain providers
- simple-usage.ts - Minimal setup
Development Status
Current: MVP (Phase 1) - Complete
Completed Features:
- Core callback handler with lifecycle hooks
- Provider-agnostic factory function
- PromptInjectionGuard, ToxicityGuard, PIIGuard
- LatencyEval, CostEval
- CortexDB integration
- Console and CortexDB observability backends
Roadmap (Phase 2):
- RelevanceEval (LLM-based quality scoring)
- HallucinationEval (detect false information)
- Dashboard UI in CortexDB Studio
- Python SDK
- Additional guards and evals
License
MIT
Links
- NPM: https://www.npmjs.com/package/@dooor-ai/toolkit
- GitHub: https://github.com/dooor-ai/toolkit
- Documentation: https://github.com/dooor-ai/toolkit/docs
- Issues: https://github.com/dooor-ai/toolkit/issues
