# @venturialstd/chatgpt

v0.0.7

ChatGPT API integration for Venturial.
A comprehensive ChatGPT integration package for Venturial applications, built with NestJS. This package provides a complete interface to interact with OpenAI's ChatGPT API, including chat completions, text completions, and embeddings.
## Features

- Chat Completions: Full support for ChatGPT chat completions with conversation history
- Text Completions: Support for traditional text completions
- Embeddings: Generate embeddings for text using OpenAI's embedding models
- Streaming Support: Both chat and completion endpoints support streaming responses
- Message Persistence: Automatically stores chat messages in a TypeORM-managed database
- Cost Tracking: Tracks token usage and calculates costs per request
- Dynamic Configuration: API key, model, and settings managed via Venturial's SettingsService
- Fully Typed: Complete TypeScript support with proper types
## Installation

```bash
npm install @venturialstd/chatgpt
# or
yarn add @venturialstd/chatgpt
```

## Basic Usage
### 1. Import the Module

```typescript
import { Module } from '@nestjs/common';
import { ChatGptModule } from '@venturialstd/chatgpt';

@Module({
  imports: [ChatGptModule],
  // ...
})
export class AppModule {}
```

### 2. Use the Chat Service
```typescript
import { Injectable } from '@nestjs/common';
import { ChatGptChatService, ChatMessage } from '@venturialstd/chatgpt';

@Injectable()
export class MyService {
  constructor(private readonly chatGptService: ChatGptChatService) {}

  async askQuestion(question: string) {
    const messages: ChatMessage[] = [
      {
        role: 'system',
        content: 'You are a helpful assistant.',
      },
      {
        role: 'user',
        content: question,
      },
    ];

    const response = await this.chatGptService.createChatCompletion(messages);
    return response.choices[0]?.message?.content;
  }
}
```

### 3. Use the Completion Service
```typescript
import { Injectable } from '@nestjs/common';
import { ChatGptCompletionService } from '@venturialstd/chatgpt';

@Injectable()
export class MyService {
  constructor(private readonly completionService: ChatGptCompletionService) {}

  async completeText(prompt: string) {
    const completion = await this.completionService.createCompletion(prompt);
    return completion.choices[0]?.text;
  }
}
```

### 4. Use the Embedding Service
```typescript
import { Injectable } from '@nestjs/common';
import { ChatGptEmbeddingService } from '@venturialstd/chatgpt';

@Injectable()
export class MyService {
  constructor(private readonly embeddingService: ChatGptEmbeddingService) {}

  async getEmbedding(text: string) {
    const embedding = await this.embeddingService.createEmbedding(text);
    return embedding.data[0]?.embedding;
  }
}
```

## API Reference
### ChatGptChatService

#### `createChatCompletion(messages, options?)`

Creates a chat completion with conversation history.

Parameters:

- `messages`: Array of `ChatMessage` objects with `role` and `content`
- `options`: Optional configuration
  - `model`: Model to use (defaults to the configured model)
  - `temperature`: Temperature for randomness (0-2)
  - `maxTokens`: Maximum tokens to generate
  - `store`: Whether to store the message (default: `true`)
  - `metadata`: Additional metadata to store

Returns: `Promise<ChatCompletion>`
#### `createStreamingChatCompletion(messages, options?)`

Creates a streaming chat completion.

Returns: `Promise<AsyncIterable<ChatCompletion>>`
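Consuming the returned async iterable might look like the sketch below. The chunk shape used here (`choices[].delta.content`, mirroring OpenAI's streaming deltas) is an assumption, as is the mock generator standing in for the service call; check the package's `ChatCompletion` typings for the actual shape.

```typescript
// Sketch: accumulate a streamed chat completion into the full reply text.
// The StreamChunk shape is an assumption modeled on OpenAI streaming deltas.
interface StreamChunk {
  choices: { delta?: { content?: string } }[];
}

async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    // Each chunk carries an incremental fragment of the assistant's reply.
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Mock stream standing in for a real createStreamingChatCompletion(...) result.
async function* mockStream(): AsyncIterable<StreamChunk> {
  for (const piece of ['Hel', 'lo', '!']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}
```

In a real handler you would replace `mockStream()` with `await this.chatGptService.createStreamingChatCompletion(messages)` and forward each fragment to the client as it arrives.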
#### `getMessageHistory(limit?)`

Retrieves message history from the database.

Parameters:

- `limit`: Maximum number of messages to retrieve (default: 50)

Returns: `Promise<ChatGptMessage[]>`
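A common follow-up is replaying stored history as the prompt for the next turn. The sketch below assumes the stored entity exposes `role` and `content` fields; that shape is a guess, so verify it against the package's `ChatGptMessage` typings.

```typescript
// Sketch: turn persisted history plus a new question into a ChatMessage-style
// prompt array. StoredMessage is a hypothetical stand-in for ChatGptMessage.
interface StoredMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function toPrompt(history: StoredMessage[], nextQuestion: string) {
  return [
    // Replay prior turns in order so the model sees the conversation so far.
    ...history.map((m) => ({ role: m.role, content: m.content })),
    // Append the new user turn last.
    { role: 'user' as const, content: nextQuestion },
  ];
}
```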
### ChatGptCompletionService

#### `createCompletion(prompt, options?)`

Creates a text completion.

Parameters:

- `prompt`: The text prompt
- `options`: Optional configuration
  - `model`: Model to use
  - `temperature`: Temperature for randomness
  - `maxTokens`: Maximum tokens to generate
  - `topP`: Nucleus sampling parameter
  - `frequencyPenalty`: Frequency penalty
  - `presencePenalty`: Presence penalty
  - `stop`: Stop sequences

Returns: `Promise<Completion>`
#### `createStreamingCompletion(prompt, options?)`

Creates a streaming text completion.

Returns: `Promise<AsyncIterable<Completion>>`
### ChatGptEmbeddingService

#### `createEmbedding(input, options?)`

Creates embeddings for text.

Parameters:

- `input`: Single string or array of strings
- `options`: Optional configuration
  - `model`: Embedding model (default: `'text-embedding-3-small'`)
  - `dimensions`: Number of dimensions for the embedding

Returns: `Promise<CreateEmbeddingResponse>`
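Embeddings are typically compared with cosine similarity. This helper is pure math and assumes nothing about the package's API beyond embeddings being `number[]` vectors:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

To rank documents against a query, embed each text with `createEmbedding`, then score the query vector against each document vector with this function.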
## Configuration

The module uses Venturial's SettingsService for configuration. Configure the following settings:

- `GLOBAL:CHATGPT:GENERAL:API_KEY`: Your OpenAI API key
- `GLOBAL:CHATGPT:GENERAL:MODEL`: Default model (e.g., `'gpt-4o'`, `'gpt-4o-mini'`)
- `GLOBAL:CHATGPT:GENERAL:BASE_URL`: API base URL (default: `'https://api.openai.com/v1'`)
- `GLOBAL:CHATGPT:GENERAL:MAX_TOKENS`: Maximum tokens (default: 2000)
- `GLOBAL:CHATGPT:GENERAL:TEMPERATURE`: Temperature (default: 0.7)
## Supported Models

- `gpt-4o`: Latest GPT-4 optimized model
- `gpt-4o-mini`: Smaller, faster GPT-4 model
- `gpt-4-turbo-preview`: GPT-4 Turbo
- `gpt-4`: Standard GPT-4
- `gpt-3.5-turbo`: GPT-3.5 Turbo
## Cost Tracking

The service automatically calculates costs based on:

- Model pricing (per 1M tokens)
- Token usage (input + output)
- An estimated 70/30 split for input/output tokens

Costs are stored in the `ChatGptMessage` entity for each request.
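Under the 70/30 split described above, the estimate presumably resembles this sketch. The per-million-token rates and the function name are illustrative placeholders, not the package's actual pricing table or API:

```typescript
// Sketch of the cost formula: total tokens are split 70% input / 30% output
// and each side is priced per 1M tokens. Rates are illustrative only.
function estimateCost(
  totalTokens: number,
  inputPricePerM: number, // USD per 1M input tokens (e.g. 2.5)
  outputPricePerM: number, // USD per 1M output tokens (e.g. 10.0)
): number {
  const inputTokens = totalTokens * 0.7;
  const outputTokens = totalTokens * 0.3;
  return (inputTokens * inputPricePerM + outputTokens * outputPricePerM) / 1_000_000;
}
```

For example, 1M total tokens at $2.50/1M input and $10.00/1M output comes to 0.7 × 2.50 + 0.3 × 10.00 = $4.75.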
## License

This package is part of the Venturial organization and follows the same license as the core packages.
