
@quarry-systems/drift-openai

v0.1.1-alpha.1

OpenAI provider plugin for Drift AI

Downloads: 26

MCG OpenAI Plugin

OpenAI provider plugin for Managed Cyclic Graph (MCG). Enables LLM-powered nodes using GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo, and OpenAI embeddings.

Features

  • Chat Completions: GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo
  • Embeddings: text-embedding-3-small, text-embedding-3-large
  • Structured Output: JSON mode, JSON schema, function calling
  • Template Variables: Dynamic prompts from context (${data.field})
  • Retry Logic: Configurable retries with exponential backoff
  • Token Tracking: Usage metadata (prompt/completion tokens)
  • Response Transformation: Extract/transform LLM responses
  • Azure OpenAI: Custom base URL support
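To illustrate the template-variable feature above, here is a minimal sketch of how a `${data.field}` placeholder might be resolved against the workflow context before a prompt is sent. The `interpolate` helper is hypothetical, written for illustration only; it is not part of the plugin's API.

```javascript
// Hypothetical sketch: resolve ${data.field} placeholders against context.
function interpolate(template, context) {
  return template.replace(/\$\{([\w.]+)\}/g, (match, path) => {
    // Walk the dotted path (e.g. "data.user.name") through the context.
    const value = path.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]),
      context
    );
    // Leave the placeholder untouched if the path is missing.
    return value === undefined ? match : String(value);
  });
}

const context = { data: { user: { name: 'Ada' }, topic: 'graphs' } };
console.log(interpolate('Summarize ${data.topic} for ${data.user.name}.', context));
// → "Summarize graphs for Ada."
```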

Installation

npm install @quarry-systems/mcg-openai

Quick Start

Plugin-Based Approach

import { ManagedCyclicGraph } from '@quarry-systems/managed-cyclic-graph';
import { mcgOpenAIPlugin, gpt4o } from '@quarry-systems/mcg-openai';

const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  
  .node('analyze', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'You are a helpful assistant.',
        userPromptPath: 'data.userInput'
      })
    }
  })
  
  .node('complete', { isEndpoint: true })
  
  .edge('analyze', 'complete', 'any')
  .start('analyze')
  .build();

Action-Based Approach

import { ManagedCyclicGraph } from '@quarry-systems/managed-cyclic-graph';
import { createLLMAction, gpt4oMini } from '@quarry-systems/mcg-openai';

const graph = new ManagedCyclicGraph()
  .node('processWithAI', {
    execute: [
      createLLMAction('processWithAI', gpt4oMini({
        systemPrompt: 'Analyze the following data.',
        userPromptPath: 'data.input'
      }))
    ]
  })
  .build();

API Reference

Model Helpers

// GPT-4o (latest, most capable)
gpt4o({ systemPrompt: '...', userPrompt: '...' })

// GPT-4o-mini (fast, cost-effective)
gpt4oMini({ systemPrompt: '...', userPrompt: '...' })

// GPT-4-turbo
gpt4Turbo({ systemPrompt: '...', userPrompt: '...' })

// GPT-3.5-turbo
gpt35Turbo({ systemPrompt: '...', userPrompt: '...' })

// Custom model
openaiChat('gpt-4o-2024-08-06', { ... })

Embedding Helpers

// text-embedding-3-small (default, fast)
embeddingSmall({ dimensions: 512 })

// text-embedding-3-large (more accurate)
embeddingLarge({ dimensions: 1024 })

Configuration Options

gpt4o({
  // Prompts
  systemPrompt: 'You are a helpful assistant.',
  userPrompt: 'Hello!',
  userPromptPath: 'data.userInput',  // OR pull from context
  messages: [{ role: 'user', content: 'Hi' }],
  
  // Generation parameters
  temperature: 0.7,
  maxTokens: 1000,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
  stop: ['\n\n'],
  seed: 42,
  
  // Structured output
  responseFormat: 'json_object',
  // OR with schema:
  // responseFormat: { type: 'json_schema', json_schema: { name: 'Response', schema: {...} } }
  
  // Tools/Functions
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' }
          },
          required: ['location']
        }
      }
    }
  ],
  toolChoice: 'auto',
  
  // Retry/Timeout
  retries: 3,
  retryDelayMs: 1000,
  timeoutMs: 30000,
  
  // Storage
  responseStorePath: 'data.customPath.response',
  
  // API configuration
  apiKey: 'sk-...',
  apiKeyPath: 'global.openaiKey',  // Read from context
  baseUrl: 'https://my-azure-endpoint.openai.azure.com',
  
  // Callbacks
  onComplete: (response, ctx) => console.log('Done:', response.content)
})

Configuration Modifiers

import { 
  gpt4o, 
  withJsonSchema, 
  withRetry, 
  withTools, 
  tool,
  creative,
  precise 
} from '@quarry-systems/mcg-openai';

// Structured JSON output
withJsonSchema(
  gpt4o({ userPrompt: 'Extract user info' }),
  'UserInfo',
  {
    type: 'object',
    properties: {
      name: { type: 'string' },
      age: { type: 'number' }
    },
    required: ['name', 'age']
  }
)

// Function calling
withTools(
  gpt4o({ userPrompt: 'What is the weather in NYC?' }),
  [tool('get_weather', 'Get weather for a location', {
    type: 'object',
    properties: { location: { type: 'string' } },
    required: ['location']
  })]
)

// Temperature presets
creative(gpt4o({ ... }))  // temperature: 1.0
precise(gpt4o({ ... }))   // temperature: 0.2

// Retry configuration
withRetry(gpt4o({ ... }), 5, 2000)

Examples

AI-Driven Branching

const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  
  .guard('isPositive', ctx => 
    ctx.data.llm?.classify?.response?.parsed?.sentiment === 'positive'
  )
  .guard('isNegative', ctx => 
    ctx.data.llm?.classify?.response?.parsed?.sentiment === 'negative'
  )
  
  .node('classify', {
    type: 'llmnode',
    meta: {
      llm: withJsonSchema(
        gpt4oMini({ userPromptPath: 'data.feedback' }),
        'Sentiment',
        {
          type: 'object',
          properties: { sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] } },
          required: ['sentiment']
        }
      )
    }
  })
  
  .node('handlePositive', { execute: [/* thank user */] })
  .node('handleNegative', { execute: [/* escalate */] })
  .node('handleNeutral', { execute: [/* default */] })
  
  .branch('classify', { when: 'isPositive', then: 'handlePositive' })
  .branch('classify', { when: 'isNegative', then: 'handleNegative' })
  .edge('classify', 'handleNeutral', 'any')
  
  .build();

Iterative Refinement (MCG + LLM)

const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  
  .guard('needsRefinement', ctx => {
    const quality = ctx.data.llm?.evaluate?.response?.parsed?.quality;
    return quality < 8 && ctx.data.iterations < 3;
  })
  .guard('isGoodEnough', ctx => {
    const quality = ctx.data.llm?.evaluate?.response?.parsed?.quality;
    return quality >= 8;
  })
  
  .node('generate', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'Generate a creative story based on the prompt.',
        userPromptPath: 'data.prompt'
      })
    }
  })
  
  .node('evaluate', {
    type: 'llmnode',
    meta: {
      llm: withJsonSchema(
        gpt4oMini({ 
          systemPrompt: 'Rate the story quality 1-10.',
          userPromptPath: 'data.llm.generate.response.content'
        }),
        'Evaluation',
        { type: 'object', properties: { quality: { type: 'number' } }, required: ['quality'] }
      )
    }
  })
  
  .node('refine', {
    type: 'llmnode',
    execute: [
      ctx => { ctx.data.iterations = (ctx.data.iterations || 0) + 1; return ctx; }
    ],
    meta: {
      llm: gpt4o({
        systemPrompt: 'Improve this story based on the feedback.',
        messagesPath: 'data.refinementMessages'
      })
    }
  })
  
  .node('complete', { isEndpoint: true })
  
  .edge('generate', 'evaluate', 'any')
  .branch('evaluate', { when: 'needsRefinement', then: 'refine' })
  .branch('evaluate', { when: 'isGoodEnough', then: 'complete' })
  .edge('refine', 'evaluate', 'any')
  
  .build();

Multi-Agent Workflow

const graph = new ManagedCyclicGraph()
  .use(mcgOpenAIPlugin)
  
  .node('researcher', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'You are a research analyst. Gather key facts about the topic.',
        userPromptPath: 'data.topic'
      })
    }
  })
  
  .node('writer', {
    type: 'llmnode',
    meta: {
      llm: gpt4o({
        systemPrompt: 'You are a content writer. Write an article based on the research.',
        userPromptPath: 'data.llm.researcher.response.content'
      })
    }
  })
  
  .node('editor', {
    type: 'llmnode',
    meta: {
      llm: gpt4oMini({
        systemPrompt: 'You are an editor. Polish the article for clarity and grammar.',
        userPromptPath: 'data.llm.writer.response.content'
      })
    }
  })
  
  .edge('researcher', 'writer', 'any')
  .edge('writer', 'editor', 'any')
  
  .build();

Response Storage

Responses are stored at data.llm.{nodeId}:

{
  data: {
    llm: {
      analyze: {
        request: {
          provider: 'openai',
          model: 'gpt-4o',
          messages: [...],
          runId: 'abc123',
          step: 1,
          ts: 1703123456789
        },
        response: {
          content: "Based on the analysis...",
          role: "assistant",
          finishReason: "stop",
          toolCalls: [...],
          parsed: { ... }  // If JSON response
        },
        meta: {
          provider: "openai",
          model: "gpt-4o",
          usage: {
            promptTokens: 150,
            completionTokens: 89,
            totalTokens: 239
          },
          duration: 1234,
          success: true,
          attempts: 1,
          requestId: "chatcmpl-abc123"
        }
      }
    }
  }
}
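Because responses land under `data.llm.{nodeId}`, guards and downstream nodes read them with dotted paths such as `data.llm.analyze.response.parsed.sentiment` (as in the branching example above). A minimal sketch of such a path lookup, assuming a hypothetical `getAtPath` helper rather than the plugin's internals:

```javascript
// Hypothetical helper: resolve a dotted path against the graph context.
function getAtPath(context, path) {
  return path.split('.').reduce(
    (obj, key) => (obj == null ? undefined : obj[key]),
    context
  );
}

const ctx = {
  data: {
    llm: {
      analyze: {
        response: { content: 'Looks good.', parsed: { sentiment: 'positive' } },
        meta: { usage: { totalTokens: 239 } }
      }
    }
  }
};

console.log(getAtPath(ctx, 'data.llm.analyze.response.parsed.sentiment'));
// → "positive"
console.log(getAtPath(ctx, 'data.llm.missing.response'));
// → undefined (missing nodes resolve safely to undefined)
```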

Environment Variables

# Set your OpenAI API key
export OPENAI_API_KEY=sk-...

Or provide via configuration:

gpt4o({
  apiKey: 'sk-...',
  // OR read from context
  apiKeyPath: 'global.openaiKey'
})

Azure OpenAI

gpt4o({
  baseUrl: 'https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT',
  apiKey: 'your-azure-key',
  model: 'gpt-4o'  // Your deployment name
})

Error Handling

Errors are stored at data.llm.{nodeId}.error:

{
  message: "OpenAI API error: Rate limit exceeded",
  code: "LLM_REQUEST_FAILED",
  provider: "openai",
  retryable: true
}

The plugin automatically retries on:

  • Rate limits (429)
  • Server errors (500, 502, 503)
  • Timeouts
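The retry behavior described above (configurable attempts, exponential backoff, retry only on transient failures) can be sketched roughly as follows. This is an illustrative sketch, not the plugin's actual implementation; the status-code set and error shape are assumptions.

```javascript
// Illustrative sketch of retries with exponential backoff on retryable
// failures (429 rate limits, 5xx server errors, timeouts).
const RETRYABLE_STATUS = new Set([429, 500, 502, 503]);

async function retryWithBackoff(fn, retries = 3, retryDelayMs = 1000) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retryable =
        RETRYABLE_STATUS.has(err.status) || err.code === 'ETIMEDOUT';
      if (!retryable || attempt > retries) throw err;
      // Double the delay each attempt: 1s, 2s, 4s, ...
      const delay = retryDelayMs * 2 ** (attempt - 1);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```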

Best Practices

  1. Use gpt4oMini for simple tasks - Faster and cheaper
  2. Use gpt4o for complex reasoning - More capable
  3. Set appropriate timeouts - Default is 60s
  4. Use structured output - withJsonSchema for reliable parsing
  5. Secure API keys - Use apiKeyPath or environment variables
  6. Add validation - Use withValidation for critical workflows
  7. Monitor token usage - Check meta.usage for cost tracking
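For point 7, the usage metadata stored under `meta.usage` can feed a simple cost estimate. The per-token prices below are placeholders for illustration; check OpenAI's current pricing page rather than relying on these numbers.

```javascript
// Hypothetical cost estimate from meta.usage. Prices (USD per 1M tokens)
// are placeholders; verify against OpenAI's current pricing.
const PRICE_PER_1M = {
  'gpt-4o':      { prompt: 2.50, completion: 10.00 },
  'gpt-4o-mini': { prompt: 0.15, completion: 0.60 }
};

function estimateCostUSD(model, usage) {
  const price = PRICE_PER_1M[model];
  if (!price) return undefined;
  return (
    (usage.promptTokens * price.prompt +
     usage.completionTokens * price.completion) / 1e6
  );
}

// Usage numbers taken from the Response Storage example above.
const usage = { promptTokens: 150, completionTokens: 89, totalTokens: 239 };
console.log(estimateCostUSD('gpt-4o', usage).toFixed(6));
```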

License

ISC