@webidoo-eng/webidoo-ai-core

v1.1.0

AI core library with OpenAI wrapper and Redis vector store for inference and RAG applications

webidoo-ai-core

A comprehensive TypeScript library for AI applications, providing seamless integration with OpenAI APIs and Redis-powered vector storage for advanced inference and RAG (Retrieval-Augmented Generation) workflows.

Features

  • InferenceModel: a wrapper around the OpenAI API with support for tool calls and streaming.
  • VectorStore: an interface for creating, populating, and querying a Redis vector index.
  • ConfigService: a service for managing configuration parameters.

Installation

npm install @webidoo-eng/webidoo-ai-core

Quick Start

import { InferenceModel, VectorStore, ConfigService } from '@webidoo-eng/webidoo-ai-core';

// Initialize with configuration
const config = new ConfigService({
  openai: { apiKey: 'your-api-key' },
  redis: { url: 'redis://localhost:6379' }
});

// Create AI model
const model = new InferenceModel(config);

// Create vector store (async factory function)
const vectorStore = await VectorStore({
  indexName: 'my_index',
  prefix: 'v:',
  configService: config,
});

Requirements

  • Node.js
  • Redis Stack with RediSearch support

Environment Variables (Optional)

  • OPENAI_API_KEY - Your OpenAI API key
  • OPENAI_ORG_ID - OpenAI organization ID (optional)
  • OPENAI_BASE_URL - Custom OpenAI API endpoint (optional)
  • REDIS_URL - Redis connection URL (default: redis://localhost:6379)
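The fallback behavior described above can be sketched as follows. This is a minimal illustration of the env-var pattern, not the library's actual implementation; field names are assumptions based on the configuration shapes shown later in this README.

```typescript
// Sketch of the documented environment-variable fallbacks.
// ConfigService is described as reading these; the real internals may differ.
const openaiConfig = {
  apiKey: process.env.OPENAI_API_KEY,   // required for API calls
  orgId: process.env.OPENAI_ORG_ID,     // optional
  baseURL: process.env.OPENAI_BASE_URL, // optional custom endpoint
};

const redisConfig = {
  // Falls back to the documented default when REDIS_URL is unset.
  url: process.env.REDIS_URL ?? 'redis://localhost:6379',
};

console.log(redisConfig.url);
```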

ConfigService

The main class for managing configuration.

Constructor

new ConfigService(config?: Partial<WebidooConfig>)
  • config: optional partial configuration that overrides default values

Methods

  • getConfig(): returns the full configuration
  • getOpenAIConfig(): returns only the OpenAI configuration
  • getRedisConfig(): returns only the Redis configuration
  • updateConfig(config: Partial<WebidooConfig>): updates the configuration
  • validate(): checks that required parameters are present

InferenceModel

Class for managing interactions with the OpenAI model.

Available Methods

stream({ model, messages, temperature })

Executes a streaming completion.

Parameters:

  • model: model name (e.g. gpt-4-0613)
  • messages: array of messages in TMessageInput format
  • temperature: optional

Returns:

  • ReadableStream of the streaming response
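Since stream() returns a ReadableStream, the caller is responsible for draining it. A minimal consumption sketch follows; the locally built stream stands in for the real `model.stream()` result, and the string-chunk format is an assumption.

```typescript
// Stand-in for the ReadableStream that stream() is documented to return.
// In real use: const stream = await model.stream({ model, messages }).
const stream = new ReadableStream<string>({
  start(controller) {
    for (const chunk of ['Hello', ', ', 'world']) controller.enqueue(chunk);
    controller.close();
  },
});

// Drain the stream, concatenating chunks into the final text.
async function drain(s: ReadableStream<string>): Promise<string> {
  const reader = s.getReader();
  let out = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value;
  }
  return out;
}

const text = await drain(stream);
console.log(text); // "Hello, world"
```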

invoke({ model, messages, tools, temperature, forceTool })

Executes a non-streaming completion with tool call support.

Parameters:

  • model: model name
  • messages: array of messages
  • tools: array of tools with handler
  • forceTool: if true, forces tool usage
  • temperature: optional

Returns:

  • Array of TMessage (assistant responses + tool responses)

VectorStore

Async factory function that initializes a Redis vector index.

Parameters

  • indexName: index name
  • prefix: Redis hash key prefix
  • vectorDim: vector dimension (optional, falls back to config)
  • tags: optional array of tag fields (used as filters)
  • configService: optional ConfigService instance

Returned Methods

insert({ id, vector, metadata })

Inserts a vector with metadata.

Parameters:

  • id: unique key
  • vector: number[] array of size vectorDim
  • metadata: optional, Record<string, string>

query({ vector, k, filter })

Performs a vector query with optional filtering.

Parameters:

  • vector: number[] array
  • k: number of results to return (default 5)
  • filter: optional tag filters

Returns:

  • Results from client.ft.search
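With node-redis, an ft.search reply is shaped roughly like `{ total, documents: [{ id, value }] }`. A minimal sketch of unpacking it; the reply object here is a hand-built stand-in, not a live Redis response.

```typescript
// Hand-built stand-in for a node-redis ft.search reply.
// In real use: const res = await store.query({ vector, k: 3 }).
const res = {
  total: 2,
  documents: [
    { id: 'v:item1', value: { type: 'doc', score: '0.12' } },
    { id: 'v:item2', value: { type: 'doc', score: '0.34' } },
  ],
};

// Flatten each document's id and metadata into one record per hit.
const hits = res.documents.map((d) => ({ id: d.id, ...d.value }));
console.log(hits.map((h) => h.id)); // → ['v:item1', 'v:item2']
```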

Usage Examples

Default configuration

// Uses environment variables for configuration
const configService = new ConfigService();

// Create InferenceModel with the configuration
const model = new InferenceModel(configService);
const response = await model.invoke({
  model: 'gpt-4',
  messages: [...],
});

// Create VectorStore with the same configuration
const store = await VectorStore({
  indexName: 'my_index',
  prefix: 'v:',
  configService,
  tags: ['type'],
});

await store.insert({
  id: 'item1',
  vector: [...],
  metadata: { type: 'doc' },
});

const result = await store.query({ vector: [...], k: 3 });

Custom configuration

const configService = new ConfigService({
  openai: {
    apiKey: 'your-api-key',
    baseURL: 'https://custom-openai-endpoint.com',
  },
  redis: {
    url: 'redis://custom-redis:6379',
    vectorDim: 768,
  }
});

const model = new InferenceModel(configService);
const store = await VectorStore({
  indexName: 'custom_index',
  prefix: 'custom:',
  configService,
});

Dynamic configuration update

const configService = new ConfigService();

configService.updateConfig({
  openai: {
    baseURL: 'https://updated-endpoint.com',
  }
});

const model = new InferenceModel(configService);

Full Example: InferenceModel with Tools

import { ConfigService, InferenceModel } from '@webidoo-eng/webidoo-ai-core';

const configService = new ConfigService();
const model = new InferenceModel(configService);

const tools = [
  {
    type: 'function',
    function: {
      name: 'get_time',
      description: 'Returns the current time in ISO format',
      parameters: {
        type: 'object',
        properties: {},
      },
    },
    handler: async ({ name, args }) => {
      return new Date().toISOString();
    },
  },
];

const messages = [
  {
    role: 'user',
    content: [{ type: 'text', text: 'What time is it?' }],
  },
];

const response = await model.invoke({
  model: 'gpt-4-1106-preview',
  messages,
  tools,
  forceTool: false,
});

console.log(response);

RAG Example: Retrieval as a Tool

In this example, retrieve_context is registered as a tool and queries a Redis vector store. The LLM invokes the tool with an embedding vector and receives relevant documents in return.

Store setup with ConfigService

const configService = new ConfigService({
  redis: {
    vectorDim: 1536
  }
});

const store = await VectorStore({
  indexName: 'rag_index',
  prefix: 'doc:',
  configService,
  tags: ['source'],
});

Retrieval tool

const ragTool = {
  type: 'function',
  function: {
    name: 'retrieve_context',
    description: 'Retrieves the most relevant documents from the knowledge base',
    parameters: {
      type: 'object',
      properties: {
        query_vector: {
          type: 'array',
          items: { type: 'number' },
        },
        k: { type: 'integer' },
      },
      required: ['query_vector'],
    },
  },
  handler: async ({ args }) => {
    const { query_vector, k = 3 } = args as {
      query_vector: number[];
      k?: number;
    };
    const res = await store.query({ vector: query_vector, k });
    return JSON.stringify(res.documents ?? []);
  },
};

Running with InferenceModel

const model = new InferenceModel(configService);

const messages = [
  {
    role: 'user',
    content: [{ type: 'text', text: 'What does the documentation say about configuring authentication?' }],
  },
];

const embedding = [...]; // query embedding from an external embedding model, to be supplied to retrieve_context

const response = await model.invoke({
  model: 'gpt-4-1106-preview',
  messages,
  tools: [ragTool],
  forceTool: true,
});

What happens

The LLM:

  1. Sends query_vector to the retrieve_context tool
  2. Receives relevant documents from the Redis vector store
  3. Generates the final response using the retrieved content
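The round trip above can be sketched as a self-contained loop. Here `fakeCompletion` stands in for the model's reply, and the message and tool-call shapes are assumptions based on the examples in this README, not the library's exact internal types.

```typescript
// Self-contained sketch of the tool-call round trip invoke() is described
// as performing. fakeCompletion simulates the model deciding to call a tool.
type ToolCall = { id: string; name: string; args: Record<string, unknown> };

const handlers: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  // Stand-in for the retrieve_context handler querying the vector store.
  retrieve_context: async (_args) =>
    JSON.stringify([{ id: 'doc:1', text: 'auth docs' }]),
};

// 1. The model requests a tool call (simulated here).
const fakeCompletion: { toolCalls: ToolCall[] } = {
  toolCalls: [
    { id: 'call_1', name: 'retrieve_context', args: { query_vector: [0.1, 0.2], k: 3 } },
  ],
};

// 2. Each tool call is dispatched to its handler...
const toolMessages = await Promise.all(
  fakeCompletion.toolCalls.map(async (c) => ({
    role: 'tool' as const,
    tool_call_id: c.id,
    content: await handlers[c.name](c.args),
  })),
);

// 3. ...and the tool results are sent back to the model for the final answer.
console.log(toolMessages[0].content);
```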