@venturialstd/chatgpt

v0.0.7

ChatGPT API integration for Venturial

A comprehensive ChatGPT integration package for Venturial applications, built with NestJS. This package provides a complete interface to interact with OpenAI's ChatGPT API, including chat completions, text completions, and embeddings.


Features

  • Chat Completions: Full support for ChatGPT chat completions with conversation history
  • Text Completions: Support for traditional text completions
  • Embeddings: Generate embeddings for text using OpenAI's embedding models
  • Streaming Support: Both chat and completion endpoints support streaming responses
  • Message Persistence: Automatically stores chat messages in a TypeORM-managed database
  • Cost Tracking: Tracks token usage and calculates costs per request
  • Dynamic Configuration: API key, model, and settings managed via Venturial's SettingsService
  • Fully Typed: Complete TypeScript support with proper types

Installation

npm install @venturialstd/chatgpt
# or
yarn add @venturialstd/chatgpt

Basic Usage

1. Import the Module

import { Module } from '@nestjs/common';
import { ChatGptModule } from '@venturialstd/chatgpt';

@Module({
  imports: [ChatGptModule],
  // ...
})
export class AppModule {}

2. Use the Chat Service

import { Injectable } from '@nestjs/common';
import { ChatGptChatService, ChatMessage } from '@venturialstd/chatgpt';

@Injectable()
export class MyService {
  constructor(private readonly chatGptService: ChatGptChatService) {}

  async askQuestion(question: string) {
    const messages: ChatMessage[] = [
      {
        role: 'system',
        content: 'You are a helpful assistant.',
      },
      {
        role: 'user',
        content: question,
      },
    ];

    const response = await this.chatGptService.createChatCompletion(messages);
    return response.choices[0]?.message?.content;
  }
}

3. Use the Completion Service

import { Injectable } from '@nestjs/common';
import { ChatGptCompletionService } from '@venturialstd/chatgpt';

@Injectable()
export class MyService {
  constructor(private readonly completionService: ChatGptCompletionService) {}

  async completeText(prompt: string) {
    const completion = await this.completionService.createCompletion(prompt);
    return completion.choices[0]?.text;
  }
}

4. Use the Embedding Service

import { Injectable } from '@nestjs/common';
import { ChatGptEmbeddingService } from '@venturialstd/chatgpt';

@Injectable()
export class MyService {
  constructor(private readonly embeddingService: ChatGptEmbeddingService) {}

  async getEmbedding(text: string) {
    const embedding = await this.embeddingService.createEmbedding(text);
    return embedding.data[0]?.embedding;
  }
}

API Reference

ChatGptChatService

createChatCompletion(messages, options?)

Creates a chat completion with conversation history.

Parameters:

  • messages: Array of ChatMessage objects with role and content
  • options: Optional configuration
    • model: Model to use (defaults to configured model)
    • temperature: Temperature for randomness (0-2)
    • maxTokens: Maximum tokens to generate
    • store: Whether to store the message (default: true)
    • metadata: Additional metadata to store

Returns: Promise<ChatCompletion>

createStreamingChatCompletion(messages, options?)

Creates a streaming chat completion.

Returns: Promise<AsyncIterable<ChatCompletion>>
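Since the method resolves to an `AsyncIterable`, the stream is consumed with a `for await` loop. A minimal sketch of that consumption pattern is below; the `StreamChunk` shape and the `fakeStream` generator are hypothetical stand-ins (the real chunk type comes from the underlying OpenAI SDK and may differ):

```typescript
// Hypothetical chunk shape; the actual streaming payload type is defined by
// the underlying OpenAI SDK and may differ.
interface StreamChunk {
  choices: { delta?: { content?: string } }[];
}

// Stub async generator standing in for the AsyncIterable that
// createStreamingChatCompletion() resolves to.
async function* fakeStream(): AsyncIterable<StreamChunk> {
  for (const piece of ['Hel', 'lo', ' world']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Consume the stream, accumulating each chunk's delta content.
async function collect(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

collect(fakeStream()).then((text) => console.log(text)); // prints "Hello world"
```

In a real service you would replace `fakeStream()` with `await this.chatGptService.createStreamingChatCompletion(messages)` and forward each delta to the client as it arrives.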

getMessageHistory(limit?)

Retrieves message history from the database.

Parameters:

  • limit: Maximum number of messages to retrieve (default: 50)

Returns: Promise<ChatGptMessage[]>

ChatGptCompletionService

createCompletion(prompt, options?)

Creates a text completion.

Parameters:

  • prompt: The text prompt
  • options: Optional configuration
    • model: Model to use
    • temperature: Temperature for randomness
    • maxTokens: Maximum tokens to generate
    • topP: Nucleus sampling parameter
    • frequencyPenalty: Frequency penalty
    • presencePenalty: Presence penalty
    • stop: Stop sequences

Returns: Promise<Completion>

createStreamingCompletion(prompt, options?)

Creates a streaming text completion.

Returns: Promise<AsyncIterable<Completion>>

ChatGptEmbeddingService

createEmbedding(input, options?)

Creates embeddings for text.

Parameters:

  • input: Single string or array of strings
  • options: Optional configuration
    • model: Embedding model (default: 'text-embedding-3-small')
    • dimensions: Number of dimensions for the embedding

Returns: Promise<CreateEmbeddingResponse>
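A common downstream use of the returned vectors is comparing them with cosine similarity (for semantic search or deduplication). This helper is not part of the package; it is a self-contained sketch of what you might do with two `createEmbedding` results:

```typescript
// Cosine similarity between two embedding vectors: dot product divided by
// the product of the vector norms. Returns a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // prints 1
```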


Configuration

The module uses Venturial's SettingsService for configuration. Configure the following settings:

  • GLOBAL:CHATGPT:GENERAL:API_KEY: Your OpenAI API key
  • GLOBAL:CHATGPT:GENERAL:MODEL: Default model (e.g., 'gpt-4o', 'gpt-4o-mini')
  • GLOBAL:CHATGPT:GENERAL:BASE_URL: API base URL (default: 'https://api.openai.com/v1')
  • GLOBAL:CHATGPT:GENERAL:MAX_TOKENS: Maximum tokens (default: 2000)
  • GLOBAL:CHATGPT:GENERAL:TEMPERATURE: Temperature (default: 0.7)

Supported Models

  • gpt-4o: Latest GPT-4 optimized model
  • gpt-4o-mini: Smaller, faster GPT-4 model
  • gpt-4-turbo-preview: GPT-4 Turbo
  • gpt-4: Standard GPT-4
  • gpt-3.5-turbo: GPT-3.5 Turbo

Cost Tracking

The service automatically calculates costs based on:

  • Model pricing (per 1M tokens)
  • Token usage (input + output)
  • Estimated 70/30 split for input/output tokens

Costs are stored in the ChatGptMessage entity for each request.
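The calculation above can be sketched as follows. The price table is illustrative only (real per-1M-token prices change and are not part of this package's documented API), and `estimateCost` is a hypothetical helper, not the package's internal function:

```typescript
// Illustrative USD prices per 1M tokens; not authoritative.
const PRICE_PER_1M: Record<string, { input: number; output: number }> = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
};

// Estimate the cost of a request from its total token count, applying the
// 70/30 input/output split described above.
function estimateCost(model: string, totalTokens: number): number {
  const price = PRICE_PER_1M[model];
  if (!price) throw new Error(`no pricing for ${model}`);
  const inputTokens = totalTokens * 0.7;
  const outputTokens = totalTokens * 0.3;
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}

console.log(estimateCost('gpt-4o-mini', 1_000_000)); // prints 0.285
```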


License

This package is part of the Venturial organization and follows the same license as the core packages.