# MCG OpenAI Plugin

`@quarry-systems/drift-openai` v0.1.1-alpha.1 (OpenAI provider plugin for Drift AI)
OpenAI provider plugin for Managed Cyclic Graph (MCG). Enables LLM-powered nodes using GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo, and OpenAI embeddings.
## Features
- ✅ Chat Completions: GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo
- ✅ Embeddings: text-embedding-3-small, text-embedding-3-large
- ✅ Structured Output: JSON mode, JSON schema, function calling
- ✅ Template Variables: Dynamic prompts from context (`${data.field}`)
- ✅ Retry Logic: Configurable retries with exponential backoff
- ✅ Token Tracking: Usage metadata (prompt/completion tokens)
- ✅ Response Transformation: Extract/transform LLM responses
- ✅ Azure OpenAI: Custom base URL support
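Template variables such as `${data.field}` are resolved against the run context before the prompt is sent. The plugin's actual resolver is internal; as a rough mental model, it behaves like the sketch below (the `interpolate` and `getPath` helpers are illustrative only, not exported by the package):

```typescript
// Illustrative sketch of ${path} template resolution against a context object.
// The real plugin's resolver may differ (escaping, missing-path behavior, etc.).
type Ctx = Record<string, unknown>;

function getPath(obj: Ctx, path: string): unknown {
  // Walk dotted paths like 'data.userInput' through nested objects.
  return path.split('.').reduce<unknown>(
    (acc, key) => (acc != null && typeof acc === 'object' ? (acc as Ctx)[key] : undefined),
    obj
  );
}

function interpolate(template: string, ctx: Ctx): string {
  // Replace each ${a.b.c} with the value found at that path in ctx.
  return template.replace(/\$\{([^}]+)\}/g, (_, path) => String(getPath(ctx, path) ?? ''));
}

// Example: a prompt template pulling from ctx.data
const prompt = interpolate('Summarize: ${data.userInput}', {
  data: { userInput: 'quarterly sales report' }
});
```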
## Installation
```sh
npm install @quarry-systems/mcg-openai
```

## Quick Start
### Plugin-Based Approach
```js
import { ManagedCyclicGraph } from '@quarry-systems/managed-cyclic-graph';
import { mcgOpenAIPlugin, gpt4o } from '@quarry-systems/mcg-openai';

const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  .node('analyze', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'You are a helpful assistant.',
        userPromptPath: 'data.userInput'
      })
    }
  })
  .node('complete', { isEndpoint: true })
  .edge('analyze', 'complete', 'any')
  .start('analyze')
  .build();
```

### Action-Based Approach
```js
import { ManagedCyclicGraph } from '@quarry-systems/managed-cyclic-graph';
import { createLLMAction, gpt4oMini } from '@quarry-systems/mcg-openai';

const graph = new ManagedCyclicGraph()
  .node('processWithAI', {
    execute: [
      createLLMAction('processWithAI', gpt4oMini({
        systemPrompt: 'Analyze the following data.',
        userPromptPath: 'data.input'
      }))
    ]
  })
  .build();
```

## API Reference
### Model Helpers
```js
// GPT-4o (latest, most capable)
gpt4o({ systemPrompt: '...', userPrompt: '...' })

// GPT-4o-mini (fast, cost-effective)
gpt4oMini({ systemPrompt: '...', userPrompt: '...' })

// GPT-4-turbo
gpt4Turbo({ systemPrompt: '...', userPrompt: '...' })

// GPT-3.5-turbo
gpt35Turbo({ systemPrompt: '...', userPrompt: '...' })

// Custom model
openaiChat('gpt-4o-2024-08-06', { ... })
```

### Embedding Helpers
```js
// text-embedding-3-small (default, fast)
embeddingSmall({ dimensions: 512 })

// text-embedding-3-large (more accurate)
embeddingLarge({ dimensions: 1024 })
```

### Configuration Options
```js
gpt4o({
  // Prompts
  systemPrompt: 'You are a helpful assistant.',
  userPrompt: 'Hello!',
  userPromptPath: 'data.userInput', // OR pull from context
  messages: [{ role: 'user', content: 'Hi' }],

  // Generation parameters
  temperature: 0.7,
  maxTokens: 1000,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
  stop: ['\n\n'],
  seed: 42,

  // Structured output
  responseFormat: 'json_object',
  // OR with schema:
  // responseFormat: { type: 'json_schema', json_schema: { name: 'Response', schema: {...} } }

  // Tools/Functions
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' }
          },
          required: ['location']
        }
      }
    }
  ],
  toolChoice: 'auto',

  // Retry/Timeout
  retries: 3,
  retryDelayMs: 1000,
  timeoutMs: 30000,

  // Storage
  responseStorePath: 'data.customPath.response',

  // API configuration
  apiKey: 'sk-...',
  apiKeyPath: 'global.openaiKey', // Read from context
  baseUrl: 'https://my-azure-endpoint.openai.azure.com',

  // Callbacks
  onComplete: (response, ctx) => console.log('Done:', response.content)
})
```

### Configuration Modifiers
```js
import {
  gpt4o,
  withJsonSchema,
  withRetry,
  withTools,
  tool,
  creative,
  precise
} from '@quarry-systems/mcg-openai';

// Structured JSON output
withJsonSchema(
  gpt4o({ userPrompt: 'Extract user info' }),
  'UserInfo',
  {
    type: 'object',
    properties: {
      name: { type: 'string' },
      age: { type: 'number' }
    },
    required: ['name', 'age']
  }
)

// Function calling
withTools(
  gpt4o({ userPrompt: 'What is the weather in NYC?' }),
  [tool('get_weather', 'Get weather for a location', {
    type: 'object',
    properties: { location: { type: 'string' } },
    required: ['location']
  })]
)

// Temperature presets
creative(gpt4o({ ... })) // temperature: 1.0
precise(gpt4o({ ... }))  // temperature: 0.2

// Retry configuration: 5 retries, 2000ms base delay
withRetry(gpt4o({ ... }), 5, 2000)
```

## Examples
### AI-Driven Branching
```js
const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  .guard('isPositive', ctx =>
    ctx.data.llm?.classify?.response?.parsed?.sentiment === 'positive'
  )
  .guard('isNegative', ctx =>
    ctx.data.llm?.classify?.response?.parsed?.sentiment === 'negative'
  )
  .node('classify', {
    type: 'llmnode',
    meta: {
      llm: withJsonSchema(
        gpt4oMini({ userPromptPath: 'data.feedback' }),
        'Sentiment',
        {
          type: 'object',
          properties: { sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] } },
          required: ['sentiment']
        }
      )
    }
  })
  .node('handlePositive', { execute: [/* thank user */] })
  .node('handleNegative', { execute: [/* escalate */] })
  .node('handleNeutral', { execute: [/* default */] })
  .branch('classify', { when: 'isPositive', then: 'handlePositive' })
  .branch('classify', { when: 'isNegative', then: 'handleNegative' })
  .edge('classify', 'handleNeutral', 'any')
  .build();
```

### Iterative Refinement (MCG + LLM)
```js
const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  .guard('needsRefinement', ctx => {
    const quality = ctx.data.llm?.evaluate?.response?.parsed?.quality;
    // Default iterations to 0 so the guard can fire before the first refinement
    return quality < 8 && (ctx.data.iterations ?? 0) < 3;
  })
  .guard('isGoodEnough', ctx => {
    const quality = ctx.data.llm?.evaluate?.response?.parsed?.quality;
    return quality >= 8;
  })
  .node('generate', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'Generate a creative story based on the prompt.',
        userPromptPath: 'data.prompt'
      })
    }
  })
  .node('evaluate', {
    type: 'llmnode',
    meta: {
      llm: withJsonSchema(
        gpt4oMini({
          systemPrompt: 'Rate the story quality 1-10.',
          userPromptPath: 'data.llm.generate.response.content'
        }),
        'Evaluation',
        { type: 'object', properties: { quality: { type: 'number' } }, required: ['quality'] }
      )
    }
  })
  .node('refine', {
    type: 'llmnode',
    execute: [
      ctx => { ctx.data.iterations = (ctx.data.iterations || 0) + 1; return ctx; }
    ],
    meta: {
      llm: gpt4o({
        systemPrompt: 'Improve this story based on the feedback.',
        messagesPath: 'data.refinementMessages'
      })
    }
  })
  .node('complete', { isEndpoint: true })
  .edge('generate', 'evaluate', 'any')
  .branch('evaluate', { when: 'needsRefinement', then: 'refine' })
  .branch('evaluate', { when: 'isGoodEnough', then: 'complete' })
  .edge('refine', 'evaluate', 'any')
  .build();
```

### Multi-Agent Workflow
```js
const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  .node('researcher', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'You are a research analyst. Gather key facts about the topic.',
        userPromptPath: 'data.topic'
      })
    }
  })
  .node('writer', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'You are a content writer. Write an article based on the research.',
        userPromptPath: 'data.llm.researcher.response.content'
      })
    }
  })
  .node('editor', {
    type: 'llmnode',
    meta: {
      llm: gpt4oMini({
        systemPrompt: 'You are an editor. Polish the article for clarity and grammar.',
        userPromptPath: 'data.llm.writer.response.content'
      })
    }
  })
  .edge('researcher', 'writer', 'any')
  .edge('writer', 'editor', 'any')
  .build();
```

## Response Storage
Responses are stored at `data.llm.{nodeId}`:

```js
{
  data: {
    llm: {
      analyze: {
        request: {
          provider: 'openai',
          model: 'gpt-4o',
          messages: [...],
          runId: 'abc123',
          step: 1,
          ts: 1703123456789
        },
        response: {
          content: "Based on the analysis...",
          role: "assistant",
          finishReason: "stop",
          toolCalls: [...],
          parsed: { ... } // If JSON response
        },
        meta: {
          provider: "openai",
          model: "gpt-4o",
          usage: {
            promptTokens: 150,
            completionTokens: 89,
            totalTokens: 239
          },
          duration: 1234,
          success: true,
          attempts: 1,
          requestId: "chatcmpl-abc123"
        }
      }
    }
  }
}
```

## Environment Variables
```sh
# Set your OpenAI API key
export OPENAI_API_KEY=sk-...
```

Or provide via configuration:
```js
gpt4o({
  apiKey: 'sk-...',
  // OR read from context
  apiKeyPath: 'global.openaiKey'
})
```

## Azure OpenAI
```js
gpt4o({
  baseUrl: 'https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT',
  apiKey: 'your-azure-key',
  model: 'gpt-4o' // Your deployment name
})
```

## Error Handling
Errors are stored at `data.llm.{nodeId}.error`:

```js
{
  message: "OpenAI API error: Rate limit exceeded",
  code: "LLM_REQUEST_FAILED",
  provider: "openai",
  retryable: true
}
```

The plugin automatically retries on:
- Rate limits (429)
- Server errors (500, 502, 503)
- Timeouts
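Because the stored error object records `retryable`, a graph can also route on failures, for example sending permanently failed calls to a fallback node. A sketch of such a guard follows; the shapes mirror the error storage documented above, while the `analyze` node id and the guard name are illustrative:

```typescript
// Illustrative guard: detect that the 'analyze' node's LLM call failed with a
// non-retryable error. Field shapes follow the documented error storage.
interface LLMError { message: string; code: string; provider: string; retryable: boolean; }
interface Ctx { data: { llm?: Record<string, { error?: LLMError }> } }

const llmFailedPermanently = (ctx: Ctx): boolean => {
  const err = ctx.data.llm?.analyze?.error;
  return err !== undefined && !err.retryable;
};

// Example contexts
const rateLimited: Ctx = {
  data: { llm: { analyze: { error: {
    message: 'OpenAI API error: Rate limit exceeded',
    code: 'LLM_REQUEST_FAILED', provider: 'openai', retryable: true
  } } } }
};
const badRequest: Ctx = {
  data: { llm: { analyze: { error: {
    message: 'Invalid request',
    code: 'LLM_REQUEST_FAILED', provider: 'openai', retryable: false
  } } } }
};
```

In a graph this would back a `.guard('llmFailedPermanently', ...)` plus a `.branch(...)` to an error-handling node, in the same style as the branching example above.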
## Best Practices
- Use `gpt4oMini` for simple tasks - faster and cheaper
- Use `gpt4o` for complex reasoning - more capable
- Set appropriate timeouts - the default is 60s
- Use structured output - `withJsonSchema` for reliable parsing
- Secure API keys - use `apiKeyPath` or environment variables
- Add validation - use `withValidation` for critical workflows
- Monitor token usage - check `meta.usage` for cost tracking
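For the last point, per-node usage can be totalled by walking `data.llm`. A small sketch, based on the response-storage layout above (the `totalUsage` helper is illustrative, not a package API):

```typescript
// Sum prompt/completion tokens across every LLM node result stored under data.llm.
// The Usage shape mirrors the documented meta.usage object.
interface Usage { promptTokens: number; completionTokens: number; totalTokens: number; }
interface NodeResult { meta?: { usage?: Usage } }

function totalUsage(llm: Record<string, NodeResult>): Usage {
  const sum: Usage = { promptTokens: 0, completionTokens: 0, totalTokens: 0 };
  for (const result of Object.values(llm)) {
    const u = result.meta?.usage;
    if (!u) continue; // skip nodes that failed before producing usage metadata
    sum.promptTokens += u.promptTokens;
    sum.completionTokens += u.completionTokens;
    sum.totalTokens += u.totalTokens;
  }
  return sum;
}

// Example: totals over two nodes' stored results
const usage = totalUsage({
  analyze:   { meta: { usage: { promptTokens: 150, completionTokens: 89, totalTokens: 239 } } },
  summarize: { meta: { usage: { promptTokens: 60,  completionTokens: 40, totalTokens: 100 } } }
});
```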
## License
ISC
