
@yamakasinge/openai

v0.1.27

OpenAI integration for WeWeb backend services

WeWeb OpenAI Integration

A WeWeb backend integration for OpenAI's API, providing access to AI models for chat completions, embeddings, content moderation, and image generation within WeWeb backend workflows.

Features

  • Simple integration with OpenAI API
  • Support for Chat Completions (GPT-3.5/GPT-4)
  • Text embeddings generation
  • Content moderation
  • Image generation with DALL-E

Installation

This package is designed to work with the WeWeb Supabase Backend Builder and runs on Deno. Once available in your project, import it alongside the backend core:

import { serve } from '@yamakasinge/backend-core';
import { createOpenAIIntegration } from '@yamakasinge/openai';

Usage

Basic Setup

import type { BackendConfig } from '@yamakasinge/backend-core';
import { serve } from '@yamakasinge/backend-core';
import OpenAI from '@yamakasinge/openai';

// Define your backend configuration
const config: BackendConfig = {
  workflows: [
    // Your workflows here
  ],
  integrations: [
    // Use the default OpenAI integration
    OpenAI,
    // Or add other integrations
  ],
  production: false,
};

// Start the server
const server = serve(config);

Custom Configuration

You can customize the OpenAI client by using the createOpenAIIntegration function:

import { createOpenAIIntegration } from '@yamakasinge/openai';

// Create a custom OpenAI integration
const customOpenAI = createOpenAIIntegration({
  apiKey: 'your-api-key', // Override environment variable
  organization: 'your-org-id', // Optional
  baseURL: 'https://your-custom-endpoint.com', // Optional
});

If these options are omitted, the integration falls back to the following environment variables:

  • OPENAI_API_KEY - Your OpenAI API key
  • OPENAI_ORGANIZATION - Your OpenAI organization ID (optional)
  • OPENAI_BASE_URL - Custom API endpoint URL (optional)
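The precedence described above (explicit options win, environment variables are the fallback) can be sketched as a pure function. This is an illustrative helper, not part of the package's public API; the actual resolution happens inside createOpenAIIntegration:

```typescript
interface OpenAIOptions {
  apiKey?: string;
  organization?: string;
  baseURL?: string;
}

// Hypothetical sketch of option/env-var precedence: explicit options
// override the environment; missing env vars stay undefined.
function resolveOpenAIOptions(
  options: OpenAIOptions,
  env: Record<string, string | undefined>,
): OpenAIOptions {
  return {
    apiKey: options.apiKey ?? env["OPENAI_API_KEY"],
    organization: options.organization ?? env["OPENAI_ORGANIZATION"],
    baseURL: options.baseURL ?? env["OPENAI_BASE_URL"],
  };
}
```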

Available Methods

Chat Completions

Generate responses from OpenAI's GPT models.

// Example workflow action
const config = {
  type: 'action',
  id: 'generate_response',
  actionId: 'openai.create_chat_completion',
  inputMapping: [
    {
      model: 'gpt-4',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: '$body.question' }
      ],
      temperature: 0.7,
      max_tokens: 500
    }
  ]
};
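The '$body.question' value above is a reference to the incoming request body. As a rough mental model, such references might resolve like the following sketch; the resolveRef function is hypothetical, and the workflow engine's actual resolution logic may differ:

```typescript
// Hypothetical resolver for '$body.*'-style references used in inputMapping.
// Plain values (model names, numbers) pass through unchanged.
function resolveRef(
  value: unknown,
  ctx: { body: Record<string, unknown> },
): unknown {
  if (typeof value === "string" && value.startsWith("$body.")) {
    const key = value.slice("$body.".length);
    return ctx.body[key];
  }
  return value;
}
```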

Embeddings

Generate vector embeddings from text for semantic search and similarity.

// Example workflow action
const config = {
  type: 'action',
  id: 'create_embedding',
  actionId: 'openai.create_embeddings',
  inputMapping: [
    {
      model: 'text-embedding-ada-002',
      input: '$body.text'
    }
  ]
};
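The embeddings returned by the API are arrays of floats, and similarity between two texts is conventionally measured with cosine similarity. A self-contained sketch, independent of this package:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have equal length");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```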

Content Moderation

Check content for potentially harmful or sensitive material.

// Example workflow action
const config = {
  type: 'action',
  id: 'moderate_content',
  actionId: 'openai.create_moderation',
  inputMapping: [
    {
      input: '$body.text',
      model: 'text-moderation-latest'
    }
  ]
};
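A moderation response contains a flagged boolean and per-category verdicts. A workflow step after the action might collect the flagged category names to decide whether to reject the content; this is a sketch against a simplified response shape, not the package's own API:

```typescript
// Simplified shape of a single OpenAI moderation result.
interface ModerationResult {
  flagged: boolean;
  categories: Record<string, boolean>;
}

// Collect the names of the categories that were flagged.
function flaggedCategories(result: ModerationResult): string[] {
  return Object.entries(result.categories)
    .filter(([, isFlagged]) => isFlagged)
    .map(([name]) => name);
}
```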

Image Generation

Generate images from text descriptions using DALL-E.

// Example workflow action
const config = {
  type: 'action',
  id: 'generate_image',
  actionId: 'openai.generate_image',
  inputMapping: [
    {
      prompt: '$body.description',
      model: 'dall-e-3',
      size: '1024x1024',
      quality: 'standard',
      style: 'vivid'
    }
  ]
};

Input and Output Schema

The OpenAI integration includes a detailed schema that defines all input parameters and output structures for each method. This schema is used for validation and documentation.

Chat Completion Inputs

  • messages: Array of message objects with role and content
  • model: ID of the model to use (e.g., gpt-4, gpt-3.5-turbo)
  • temperature: Controls randomness (0-2)
  • max_tokens: Maximum tokens to generate
  • top_p: Alternative to temperature for nucleus sampling
  • frequency_penalty: Decreases likelihood of repeating tokens
  • presence_penalty: Increases likelihood of new topics
  • stream: Whether to stream the response
  • stop: Sequences where the API will stop generating
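The integration's own schema validation is internal, but the constraints listed above can be illustrated with a small checker. The types and the validateChatInput helper below are assumptions for illustration, not the package's actual schema code:

```typescript
interface ChatMessage { role: string; content: string }

interface ChatCompletionInput {
  messages: ChatMessage[];
  model: string;
  temperature?: number;
  max_tokens?: number;
}

// Return a list of validation errors; an empty list means the input passes.
function validateChatInput(input: ChatCompletionInput): string[] {
  const errors: string[] = [];
  if (input.messages.length === 0) {
    errors.push("messages must not be empty");
  }
  if (
    input.temperature !== undefined &&
    (input.temperature < 0 || input.temperature > 2)
  ) {
    errors.push("temperature must be between 0 and 2");
  }
  return errors;
}
```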

Embedding Inputs

  • input: Text to embed (string or array of strings)
  • model: Model to use (e.g., text-embedding-ada-002)
  • encoding_format: Format for embeddings (float or base64)
  • dimensions: Number of dimensions for embeddings

Moderation Inputs

  • input: Text to moderate (string or array of strings)
  • model: Moderation model to use

Image Generation Inputs

  • prompt: Text description of the desired image
  • model: Model to use (e.g., dall-e-3)
  • n: Number of images to generate
  • size: Size of the generated images
  • quality: Quality of the generated images
  • style: Style of the generated images
  • response_format: Format to return the images (url or b64_json)
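Most of these image parameters are optional, so a request typically only needs a prompt. The sketch below fills in commonly used defaults; the default values shown are illustrative assumptions (consult the OpenAI API reference for authoritative defaults), and withImageDefaults is not part of this package:

```typescript
interface ImageGenInput {
  prompt: string;
  model?: string;
  n?: number;
  size?: string;
  quality?: string;
  style?: string;
  response_format?: "url" | "b64_json";
}

// Fill in common DALL-E 3 defaults; any value supplied by the caller wins.
function withImageDefaults(input: ImageGenInput): ImageGenInput {
  return {
    model: "dall-e-3",
    n: 1,
    size: "1024x1024",
    quality: "standard",
    style: "vivid",
    response_format: "url",
    ...input,
  };
}
```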

Development

Testing

Run the test suite with:

deno test

Code Quality

Format and lint your code:

deno fmt
deno lint

Example: Complete Chat Application

import type { BackendConfig } from '@yamakasinge/backend-core';
import { serve } from '@yamakasinge/backend-core';
import OpenAI from '@yamakasinge/openai';

const config: BackendConfig = {
  workflows: [
    {
      path: '/chat',
      methods: ['POST'],
      security: {
        accessRule: 'public',
      },
      inputsValidation: {
        body: {
          type: 'object',
          properties: {
            messages: {
              type: 'array',
              items: {
                type: 'object',
                properties: {
                  role: { type: 'string' },
                  content: { type: 'string' },
                },
                required: ['role', 'content'],
              },
            },
          },
          required: ['messages'],
        },
      },
      workflow: [
        {
          type: 'action',
          id: 'chat_completion',
          actionId: 'openai.create_chat_completion',
          inputMapping: [
            {
              messages: '$body.messages',
              model: 'gpt-3.5-turbo',
              temperature: 0.7,
              max_tokens: 1000,
            },
          ],
        },
      ],
    },
  ],
  integrations: [OpenAI],
  production: false,
};

console.log('Starting OpenAI chat server on http://localhost:8000/chat');
serve(config);