@quarry-systems/drift-ai-core
v0.3.0-alpha.3
AI agent core functionality for Drift (Managed Cyclic Graph) - build intelligent workflows with LLM integration.
Overview
Drift AI Core provides utilities and helpers for building AI-powered workflows using managed cyclic graphs. It includes LLM integration helpers, tool calling utilities, schema generation, and runtime type validation.
Installation
npm install @quarry-systems/drift-ai-core @quarry-systems/drift-core @quarry-systems/drift-contracts
Features
- ✅ LLM Integration: Helpers for working with language models
- ✅ Tool Calling: Define and execute tools for LLM function calling
- ✅ Schema Generation: Convert TypeScript types to JSON schemas
- ✅ Runtime Validation: Validate LLM outputs against schemas
- ✅ Streaming Support: Handle streaming LLM responses
- ✅ Message Formatting: Utilities for chat message construction
- ✅ Response Parsing: Extract structured data from LLM responses
- ✅ Error Handling: Robust error handling for AI operations
Quick Start
Basic LLM Integration
import { ManagedCyclicGraph } from '@quarry-systems/drift-core';
import { createLLMNode, formatMessages } from '@quarry-systems/drift-ai-core';
const graph = new ManagedCyclicGraph('ai-workflow')
  .node('chat', {
    label: 'Chat with LLM',
    execute: [async (ctx) => {
      const messages = formatMessages([
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: ctx.data.userQuery }
      ]);
      // Use your LLM adapter
      const response = await ctx.services.llm.chat(messages);
      return {
        ...ctx,
        data: {
          ...ctx.data,
          response: response.content
        }
      };
    }]
  })
  .build();
Tool Calling
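Before wiring tools into a graph, it helps to see the bare mechanics. The sketch below is a dependency-free stand-in for what a tool registry and dispatcher do; the `ToolSpec`/`ToolCall` shapes and `dispatchToolCall` are illustrative assumptions, not the package's internals (defineTool and executeToolCall are the real API):

```typescript
type ToolSpec = {
  name: string;
  required: string[];
  handler: (args: Record<string, unknown>) => Promise<unknown>;
};

type ToolCall = { name: string; arguments: Record<string, unknown> };

// Look up the tool by name, check required arguments, then run the handler.
async function dispatchToolCall(call: ToolCall, tools: ToolSpec[]): Promise<unknown> {
  const tool = tools.find((t) => t.name === call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  for (const key of tool.required) {
    if (!(key in call.arguments)) throw new Error(`Missing argument: ${key}`);
  }
  return tool.handler(call.arguments);
}

// Hypothetical example tool
const echoTool: ToolSpec = {
  name: 'echo',
  required: ['text'],
  handler: async ({ text }) => `echo: ${String(text)}`,
};
```

The real API adds JSON-schema parameter validation on top of this lookup-and-invoke core.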
import { defineTool, executeToolCall, formatMessages } from '@quarry-systems/drift-ai-core';
// Define tools
const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
      units: { type: 'string', enum: ['celsius', 'fahrenheit'] }
    },
    required: ['location']
  },
  handler: async ({ location, units = 'celsius' }) => {
    // Fetch weather data
    return { temp: 22, units, location };
  }
});
// Use in graph
const graph = new ManagedCyclicGraph('tool-calling')
  .node('llm-with-tools', {
    execute: [async (ctx) => {
      const response = await ctx.services.llm.chat(
        formatMessages([{ role: 'user', content: 'What\'s the weather in Paris?' }]),
        { tools: [weatherTool] }
      );
      // Execute tool calls
      if (response.toolCalls) {
        const results = await Promise.all(
          response.toolCalls.map(call => executeToolCall(call, [weatherTool]))
        );
        return { ...ctx, data: { ...ctx.data, toolResults: results } };
      }
      return ctx;
    }]
  })
  .build();
Schema Generation
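Structured output boils down to: ask the model for JSON that matches a schema, then check the JSON before trusting it. As a dependency-free illustration of that check, here is a validator for a tiny subset of JSON Schema (object type, primitive property types, required keys); it is a sketch, not the package's validateAgainstSchema implementation:

```typescript
type MiniSchema = {
  type: 'object';
  properties: Record<string, { type: string }>;
  required?: string[];
};

// Check an already-parsed value against a minimal object schema:
// required keys must be present, and declared types must match.
function checkAgainstSchema(
  value: unknown,
  schema: MiniSchema
): { valid: boolean; error?: string } {
  if (typeof value !== 'object' || value === null || Array.isArray(value)) {
    return { valid: false, error: 'expected an object' };
  }
  const obj = value as Record<string, unknown>;
  for (const key of schema.required ?? []) {
    if (!(key in obj)) return { valid: false, error: `missing required key: ${key}` };
  }
  for (const [key, spec] of Object.entries(schema.properties)) {
    if (key in obj) {
      const actual = Array.isArray(obj[key]) ? 'array' : typeof obj[key];
      if (actual !== spec.type) return { valid: false, error: `wrong type for ${key}` };
    }
  }
  return { valid: true };
}
```

A full validator also handles nested objects, formats, and numeric bounds, but the present/typed check above is the core of the idea.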
import { generateSchema, validateAgainstSchema } from '@quarry-systems/drift-ai-core';
// Define expected output structure
interface UserProfile {
  name: string;
  age: number;
  email: string;
  interests: string[];
}
// Generate JSON schema
const schema = generateSchema<UserProfile>({
  type: 'object',
  properties: {
    name: { type: 'string' },
    age: { type: 'number', minimum: 0 },
    email: { type: 'string', format: 'email' },
    interests: { type: 'array', items: { type: 'string' } }
  },
  required: ['name', 'age', 'email']
});
// Request structured output from LLM
const response = await llm.chat(messages, {
  responseFormat: { type: 'json_schema', schema }
});
// Validate response
const validation = validateAgainstSchema(response.content, schema);
if (validation.valid) {
  const profile: UserProfile = validation.data;
  // Use typed data
}
Core Utilities
Message Formatting
import { formatMessages, addSystemMessage, addUserMessage } from '@quarry-systems/drift-ai-core';
// Format message array
const messages = formatMessages([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi! How can I help?' },
  { role: 'user', content: 'Tell me about graphs.' }
]);
// Helper functions
const withSystem = addSystemMessage(messages, 'Be concise.');
const withUser = addUserMessage(messages, 'What is 2+2?');
Response Parsing
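Models often wrap JSON in a markdown code fence, so robust parsing strips the fence before calling JSON.parse. A minimal sketch of that kind of tolerant parsing (illustrative only; parseJSONResponse's actual behavior may differ):

```typescript
// Strip an optional markdown fence (three backticks, optionally tagged
// "json") and parse whatever remains as JSON.
function parseJSONLoose(text: string): unknown {
  const fenced = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const candidate = (fenced ? fenced[1] : text).trim();
  return JSON.parse(candidate);
}
```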
import { parseJSONResponse, extractCodeBlocks } from '@quarry-systems/drift-ai-core';
// Parse JSON from LLM response
const data = parseJSONResponse(response.content);
// Extract code blocks
const codeBlocks = extractCodeBlocks(response.content);
// Returns: [{ language: 'typescript', code: '...' }, ...]
Streaming Helpers
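The handler-object pattern used for streaming can be sketched without any dependencies: consume an async iterable of tokens, firing callbacks as you go. The `StreamHandlers` and `consumeStream` names below are illustrative assumptions, not the package API:

```typescript
type StreamHandlers = {
  onToken?: (token: string) => void;
  onComplete?: (fullText: string) => void;
  onError?: (error: unknown) => void;
};

// Accumulate tokens from an async iterable, invoking the optional
// callbacks, and resolve with the full text.
async function consumeStream(
  stream: AsyncIterable<string>,
  handlers: StreamHandlers
): Promise<string> {
  let full = '';
  try {
    for await (const token of stream) {
      full += token;
      handlers.onToken?.(token);
    }
    handlers.onComplete?.(full);
    return full;
  } catch (err) {
    handlers.onError?.(err);
    throw err;
  }
}

// Fake token stream for demonstration
async function* fakeStream() {
  yield 'Hel';
  yield 'lo';
}
```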
import { handleStreamingResponse } from '@quarry-systems/drift-ai-core';
const stream = await llm.chatStream(messages);
await handleStreamingResponse(stream, {
  onToken: (token) => console.log(token),
  onComplete: (fullText) => console.log('Complete:', fullText),
  onError: (error) => console.error('Error:', error)
});
Tool Execution
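Before executing a call, its arguments should be validated against the tool's parameter schema; conceptually that means checking required keys and enum constraints, which is the kind of check validateToolCall performs. A standalone sketch, with illustrative shapes:

```typescript
type ParamSpec = { enum?: string[] };
type ToolParams = { properties: Record<string, ParamSpec>; required?: string[] };

// Return true only if every required key is present and every enum-typed
// argument takes one of its allowed values.
function checkToolArgs(args: Record<string, unknown>, params: ToolParams): boolean {
  for (const key of params.required ?? []) {
    if (!(key in args)) return false;
  }
  for (const [key, spec] of Object.entries(params.properties)) {
    if (spec.enum && key in args && !spec.enum.includes(String(args[key]))) return false;
  }
  return true;
}
```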
import { defineTool, executeToolCall, validateToolCall } from '@quarry-systems/drift-ai-core';
// Define a tool
const calculator = defineTool({
  name: 'calculate',
  description: 'Perform basic math operations',
  parameters: {
    type: 'object',
    properties: {
      operation: { type: 'string', enum: ['add', 'subtract', 'multiply', 'divide'] },
      a: { type: 'number' },
      b: { type: 'number' }
    },
    required: ['operation', 'a', 'b']
  },
  handler: async ({ operation, a, b }) => {
    switch (operation) {
      case 'add': return a + b;
      case 'subtract': return a - b;
      case 'multiply': return a * b;
      case 'divide': return a / b;
      default: throw new Error('Invalid operation');
    }
  }
});
// Validate tool call
const isValid = validateToolCall(toolCall, calculator);
// Execute tool call
const result = await executeToolCall(toolCall, [calculator]);
Advanced Patterns
Multi-Turn Conversations
const graph = new ManagedCyclicGraph('conversation')
  .node('chat', {
    execute: [async (ctx) => {
      const history = ctx.data.messages || [];
      const newMessage = { role: 'user', content: ctx.data.userInput };
      const response = await ctx.services.llm.chat([...history, newMessage]);
      return {
        ...ctx,
        data: {
          ...ctx.data,
          messages: [
            ...history,
            newMessage,
            { role: 'assistant', content: response.content }
          ]
        }
      };
    }]
  })
  .build();
Agentic Workflows
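Stripped of the graph machinery, the agentic pattern is a loop: call the model, run any tools it requests, feed the results back, and stop when no tools are requested. A dependency-free sketch of that loop (the `llm` and `runTool` parameters are illustrative stand-ins for real adapters, not the package API):

```typescript
type AgentStep = {
  content: string;
  toolCalls?: { name: string; arguments: Record<string, unknown> }[];
};

// Repeatedly call the model; when it requests tools, run them and append
// their results to the history. Stop when the model answers directly.
async function runAgentLoop(
  llm: (history: string[]) => Promise<AgentStep>,
  runTool: (name: string, args: Record<string, unknown>) => Promise<string>,
  maxTurns = 5
): Promise<string> {
  const history: string[] = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const step = await llm(history);
    if (!step.toolCalls || step.toolCalls.length === 0) {
      return step.content; // final answer: no more tools requested
    }
    for (const call of step.toolCalls) {
      history.push(await runTool(call.name, call.arguments));
    }
  }
  throw new Error('agent did not converge');
}
```

The graph version below expresses the same loop as a cyclic think -> think edge, which is what the managed cyclic graph model is for: the cycle is explicit and bounded by graph conditions rather than a counter.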
import { defineTool, executeToolCall } from '@quarry-systems/drift-ai-core';
const graph = new ManagedCyclicGraph('agent')
  .node('think', {
    label: 'Agent Reasoning',
    execute: [async (ctx) => {
      const response = await ctx.services.llm.chat(
        ctx.data.messages,
        { tools: ctx.data.availableTools }
      );
      if (response.toolCalls) {
        // Execute tools and continue
        const results = await Promise.all(
          response.toolCalls.map(call =>
            executeToolCall(call, ctx.data.availableTools)
          )
        );
        return {
          ...ctx,
          data: {
            ...ctx.data,
            toolResults: results,
            shouldContinue: true
          }
        };
      }
      // Final answer
      return {
        ...ctx,
        data: {
          ...ctx.data,
          finalAnswer: response.content,
          shouldContinue: false
        }
      };
    }]
  })
  .edge('think', 'think', (ctx) => ctx.data.shouldContinue === true)
  .edge('think', 'end', (ctx) => ctx.data.shouldContinue === false)
  .build();
Structured Data Extraction
import { generateSchema, validateAgainstSchema, formatMessages } from '@quarry-systems/drift-ai-core';
const extractionSchema = generateSchema({
  type: 'object',
  properties: {
    entities: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          type: { type: 'string' },
          confidence: { type: 'number', minimum: 0, maximum: 1 }
        }
      }
    },
    sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] }
  }
});
const graph = new ManagedCyclicGraph('extraction')
  .node('extract', {
    execute: [async (ctx) => {
      const response = await ctx.services.llm.chat(
        formatMessages([
          { role: 'system', content: 'Extract entities and sentiment from text.' },
          { role: 'user', content: ctx.data.text }
        ]),
        { responseFormat: { type: 'json_schema', schema: extractionSchema } }
      );
      const validation = validateAgainstSchema(response.content, extractionSchema);
      return {
        ...ctx,
        data: {
          ...ctx.data,
          extracted: validation.valid ? validation.data : null,
          error: validation.valid ? null : validation.error
        }
      };
    }]
  })
  .build();
API Reference
Message Helpers
- formatMessages(messages) - Format a message array
- addSystemMessage(messages, content) - Add a system message
- addUserMessage(messages, content) - Add a user message
- addAssistantMessage(messages, content) - Add an assistant message
Tool Helpers
- defineTool(config) - Define a tool with schema and handler
- executeToolCall(call, tools) - Execute a tool call
- validateToolCall(call, tool) - Validate tool call parameters
Schema Helpers
- generateSchema(definition) - Generate a JSON schema
- validateAgainstSchema(data, schema) - Validate data against a schema
Response Helpers
- parseJSONResponse(text) - Parse JSON from a response
- extractCodeBlocks(text) - Extract code blocks
- handleStreamingResponse(stream, handlers) - Handle a streaming response
Runtime Validation
- isValidToolCall(call) - Check whether a tool call is valid
- isValidMessage(message) - Check whether a message is valid
- isValidSchema(schema) - Check whether a schema is valid
Integration with LLM Adapters
Drift AI Core works with any LLM adapter that implements the LLMAdapter interface from @quarry-systems/drift-contracts:
import type { LLMAdapter } from '@quarry-systems/drift-contracts';
// Your custom adapter
const myLLMAdapter: LLMAdapter = {
  chat: async (messages, options) => {
    // Implementation
    return {
      content: 'Response',
      role: 'assistant',
      finishReason: 'stop'
    };
  },
  chatStream: async (messages, options) => {
    // Streaming implementation
  }
};
// Use with Drift
const manager = new Manager(graph, {
  services: {
    llm: { factory: () => myLLMAdapter }
  }
});
Related Packages
- @quarry-systems/drift-core - Core graph engine
- @quarry-systems/drift-contracts - Type definitions
- @quarry-systems/drift-openai - OpenAI adapter
Examples
See the examples directory for complete examples:
- Chat applications
- Tool calling agents
- Data extraction
- Multi-turn conversations
License
Dual-licensed under:
- AGPL-3.0 for open source projects
- Commercial License for proprietary use
See LICENSE.md for details.
For commercial licensing:
- Email: [email protected]
- Web: https://quarry-systems.com/license
