
@quarry-systems/drift-ai-core

AI agent core functionality for Drift (Managed Cyclic Graph) - build intelligent workflows with LLM integration.


Overview

Drift AI Core provides utilities and helpers for building AI-powered workflows using managed cyclic graphs. It includes LLM integration helpers, tool calling utilities, schema generation, and runtime type validation.

Installation

npm install @quarry-systems/drift-ai-core @quarry-systems/drift-core @quarry-systems/drift-contracts

Features

  • LLM Integration: Helpers for working with language models
  • Tool Calling: Define and execute tools for LLM function calling
  • Schema Generation: Convert TypeScript types to JSON schemas
  • Runtime Validation: Validate LLM outputs against schemas
  • Streaming Support: Handle streaming LLM responses
  • Message Formatting: Utilities for chat message construction
  • Response Parsing: Extract structured data from LLM responses
  • Error Handling: Robust error handling for AI operations

Quick Start

Basic LLM Integration

import { ManagedCyclicGraph } from '@quarry-systems/drift-core';
import { createLLMNode, formatMessages } from '@quarry-systems/drift-ai-core';

const graph = new ManagedCyclicGraph('ai-workflow')
  .node('chat', {
    label: 'Chat with LLM',
    execute: [async (ctx) => {
      const messages = formatMessages([
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: ctx.data.userQuery }
      ]);
      
      // Use your LLM adapter
      const response = await ctx.services.llm.chat(messages);
      
      return {
        ...ctx,
        data: {
          ...ctx.data,
          response: response.content
        }
      };
    }]
  })
  .build();

Tool Calling

import { ManagedCyclicGraph } from '@quarry-systems/drift-core';
import { defineTool, executeToolCall, formatMessages } from '@quarry-systems/drift-ai-core';

// Define tools
const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
      units: { type: 'string', enum: ['celsius', 'fahrenheit'] }
    },
    required: ['location']
  },
  handler: async ({ location, units = 'celsius' }) => {
    // Fetch weather data
    return { temp: 22, units, location };
  }
});

// Use in graph
const graph = new ManagedCyclicGraph('tool-calling')
  .node('llm-with-tools', {
    execute: [async (ctx) => {
      const response = await ctx.services.llm.chat(
        formatMessages([{ role: 'user', content: 'What\'s the weather in Paris?' }]),
        { tools: [weatherTool] }
      );
      
      // Execute tool calls
      if (response.toolCalls) {
        const results = await Promise.all(
          response.toolCalls.map(call => executeToolCall(call, [weatherTool]))
        );
        return { ...ctx, data: { ...ctx.data, toolResults: results } };
      }
      
      return ctx;
    }]
  })
  .build();

Schema Generation

import { generateSchema, validateAgainstSchema } from '@quarry-systems/drift-ai-core';

// Define expected output structure
interface UserProfile {
  name: string;
  age: number;
  email: string;
  interests: string[];
}

// Generate JSON schema
const schema = generateSchema<UserProfile>({
  type: 'object',
  properties: {
    name: { type: 'string' },
    age: { type: 'number', minimum: 0 },
    email: { type: 'string', format: 'email' },
    interests: { type: 'array', items: { type: 'string' } }
  },
  required: ['name', 'age', 'email']
});

// Request structured output from the LLM
// (llm is an LLMAdapter instance; see "Integration with LLM Adapters" below)
const response = await llm.chat(messages, {
  responseFormat: { type: 'json_schema', schema }
});

// Validate response
const validation = validateAgainstSchema(response.content, schema);
if (validation.valid) {
  const profile: UserProfile = validation.data;
  // Use typed data
}

Core Utilities

Message Formatting

import { formatMessages, addSystemMessage, addUserMessage } from '@quarry-systems/drift-ai-core';

// Format message array
const messages = formatMessages([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi! How can I help?' },
  { role: 'user', content: 'Tell me about graphs.' }
]);

// Helper functions
const withSystem = addSystemMessage(messages, 'Be concise.');
const withUser = addUserMessage(messages, 'What is 2+2?');

Response Parsing

import { parseJSONResponse, extractCodeBlocks } from '@quarry-systems/drift-ai-core';

// Parse JSON from LLM response
const data = parseJSONResponse(response.content);

// Extract code blocks
const codeBlocks = extractCodeBlocks(response.content);
// Returns: [{ language: 'typescript', code: '...' }, ...]

Streaming Helpers

import { handleStreamingResponse } from '@quarry-systems/drift-ai-core';

const stream = await llm.chatStream(messages);

await handleStreamingResponse(stream, {
  onToken: (token) => console.log(token),
  onComplete: (fullText) => console.log('Complete:', fullText),
  onError: (error) => console.error('Error:', error)
});

Tool Execution

import { defineTool, executeToolCall, validateToolCall } from '@quarry-systems/drift-ai-core';

// Define a tool
const calculator = defineTool({
  name: 'calculate',
  description: 'Perform basic math operations',
  parameters: {
    type: 'object',
    properties: {
      operation: { type: 'string', enum: ['add', 'subtract', 'multiply', 'divide'] },
      a: { type: 'number' },
      b: { type: 'number' }
    },
    required: ['operation', 'a', 'b']
  },
  handler: async ({ operation, a, b }) => {
    switch (operation) {
      case 'add': return a + b;
      case 'subtract': return a - b;
      case 'multiply': return a * b;
      case 'divide': return a / b;
      default: throw new Error('Invalid operation');
    }
  }
});

// Validate a tool call (e.g. one parsed from an LLM response) against the tool's parameter schema
const isValid = validateToolCall(toolCall, calculator);

// Execute tool call
const result = await executeToolCall(toolCall, [calculator]);

Advanced Patterns

Multi-Turn Conversations

import { ManagedCyclicGraph } from '@quarry-systems/drift-core';

const graph = new ManagedCyclicGraph('conversation')
  .node('chat', {
    execute: [async (ctx) => {
      const history = ctx.data.messages || [];
      const newMessage = { role: 'user', content: ctx.data.userInput };
      
      const response = await ctx.services.llm.chat([...history, newMessage]);
      
      return {
        ...ctx,
        data: {
          ...ctx.data,
          messages: [
            ...history,
            newMessage,
            { role: 'assistant', content: response.content }
          ]
        }
      };
    }]
  })
  .build();

Agentic Workflows

import { ManagedCyclicGraph } from '@quarry-systems/drift-core';
import { defineTool, executeToolCall } from '@quarry-systems/drift-ai-core';

const graph = new ManagedCyclicGraph('agent')
  .node('think', {
    label: 'Agent Reasoning',
    execute: [async (ctx) => {
      const response = await ctx.services.llm.chat(
        ctx.data.messages,
        { tools: ctx.data.availableTools }
      );
      
      if (response.toolCalls) {
        // Execute tools and continue
        const results = await Promise.all(
          response.toolCalls.map(call => 
            executeToolCall(call, ctx.data.availableTools)
          )
        );
        
        return {
          ...ctx,
          data: {
            ...ctx.data,
            toolResults: results,
            shouldContinue: true
          }
        };
      }
      
      // Final answer
      return {
        ...ctx,
        data: {
          ...ctx.data,
          finalAnswer: response.content,
          shouldContinue: false
        }
      };
    }]
  })
  .edge('think', 'think', (ctx) => ctx.data.shouldContinue === true)
  .edge('think', 'end', (ctx) => ctx.data.shouldContinue === false)
  .build();

Structured Data Extraction

import { ManagedCyclicGraph } from '@quarry-systems/drift-core';
import { generateSchema, validateAgainstSchema, formatMessages } from '@quarry-systems/drift-ai-core';

const extractionSchema = generateSchema({
  type: 'object',
  properties: {
    entities: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          type: { type: 'string' },
          confidence: { type: 'number', minimum: 0, maximum: 1 }
        }
      }
    },
    sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] }
  }
});

const graph = new ManagedCyclicGraph('extraction')
  .node('extract', {
    execute: [async (ctx) => {
      const response = await ctx.services.llm.chat(
        formatMessages([
          { role: 'system', content: 'Extract entities and sentiment from text.' },
          { role: 'user', content: ctx.data.text }
        ]),
        { responseFormat: { type: 'json_schema', schema: extractionSchema } }
      );
      
      const validation = validateAgainstSchema(response.content, extractionSchema);
      
      return {
        ...ctx,
        data: {
          ...ctx.data,
          extracted: validation.valid ? validation.data : null,
          error: validation.valid ? null : validation.error
        }
      };
    }]
  })
  .build();

API Reference

Message Helpers

  • formatMessages(messages) - Format message array
  • addSystemMessage(messages, content) - Add system message
  • addUserMessage(messages, content) - Add user message
  • addAssistantMessage(messages, content) - Add assistant message
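
As a minimal sketch of how the add* helpers compose (assuming each returns a new message array rather than mutating its input):

import { formatMessages, addSystemMessage, addUserMessage, addAssistantMessage } from '@quarry-systems/drift-ai-core';

// Build a short transcript incrementally
let transcript = formatMessages([]);
transcript = addSystemMessage(transcript, 'You are a terse assistant.');
transcript = addUserMessage(transcript, 'Summarize managed cyclic graphs in one line.');
transcript = addAssistantMessage(transcript, 'Graphs whose cycles are controlled by explicit edge conditions.');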

Tool Helpers

  • defineTool(config) - Define a tool with schema and handler
  • executeToolCall(call, tools) - Execute a tool call
  • validateToolCall(call, tool) - Validate tool call parameters

Schema Helpers

  • generateSchema(definition) - Generate JSON schema
  • validateAgainstSchema(data, schema) - Validate data against schema

Response Helpers

  • parseJSONResponse(text) - Parse JSON from response
  • extractCodeBlocks(text) - Extract code blocks
  • handleStreamingResponse(stream, handlers) - Handle streaming

Runtime Validation

  • isValidToolCall(call) - Check if tool call is valid
  • isValidMessage(message) - Check if message is valid
  • isValidSchema(schema) - Check if schema is valid
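
A minimal sketch of using these guards on untrusted LLM output before acting on it (assuming each returns a boolean; toolCall, history, and extractionSchema are placeholders from the earlier examples):

import { isValidToolCall, isValidMessage, isValidSchema } from '@quarry-systems/drift-ai-core';

// Reject a malformed tool call before executing it
if (!isValidToolCall(toolCall)) {
  throw new Error('Malformed tool call in LLM response');
}

// Drop malformed entries from a conversation history before resending it
const safeHistory = history.filter(isValidMessage);

// Check a schema definition before requesting structured output
if (!isValidSchema(extractionSchema)) {
  throw new Error('Invalid JSON schema definition');
}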

Integration with LLM Adapters

Drift AI Core works with any LLM adapter that implements the LLMAdapter interface from @quarry-systems/drift-contracts:

import type { LLMAdapter } from '@quarry-systems/drift-contracts';
// Manager is assumed to be exported from @quarry-systems/drift-core; adjust the import to your setup
import { Manager } from '@quarry-systems/drift-core';

// Your custom adapter
const myLLMAdapter: LLMAdapter = {
  chat: async (messages, options) => {
    // Implementation
    return {
      content: 'Response',
      role: 'assistant',
      finishReason: 'stop'
    };
  },
  
  chatStream: async (messages, options) => {
    // Streaming implementation
  }
};

// Use with Drift
const manager = new Manager(graph, {
  services: {
    llm: { factory: () => myLLMAdapter }
  }
});

Related Packages

Examples

See the examples directory for complete examples:

  • Chat applications
  • Tool calling agents
  • Data extraction
  • Multi-turn conversations

License

Dual-licensed under:

  • AGPL-3.0 for open source projects
  • Commercial License for proprietary use

See LICENSE.md for details.

For commercial licensing:

Support