
llm_response

v0.0.33

Published

Simple library to get response from OpenAI, Claude, Ollama. With optional RAG feature.

Downloads

206

Readme

LLM response library

A collection of simplified server-side tools for using LLMs, with or without additional context (a RAG system).

Capabilities of the tools in this library:

  • Generate responses for any prompt through the OpenAI API

  • Create a model with a loaded context file (Ollama, OpenAI, or Anthropic)

  • Generate voiceovers through OpenAI

  • Generate images through OpenAI DALL·E

  • Generate music through Replicate

When setting up models, you can specify their name, temperature, and the voice model used for voice generation. You can also turn voice generation off if it is not needed.

Install the library

Via npm

Run the following command:

npm i llm_response

Then add the initial setup code to index.ts:

// Assumed import from the package; adjust if the actual export names differ.
import { setupOpenAIKey, createRagChain, modelTypes } from 'llm_response';

let mainChain;
setupOpenAIKey(process.env.OPEN_API_KEY); // insert your OpenAI API key here
setTimeout(async () => {
    // specify the desired model type through the modelTypes enum,
    // and the context file with its extension (txt, pdf)
    mainChain = await createRagChain(modelTypes.openAI, { src: "<path_to_the_file>.txt", type: 'txt' });
}, 10);

You can use a global variable to store the created RAG chain.

Using different model types

If you use the Anthropic model type for the RAG system, add the following before calling setMainChain():

setupAnthropicKey(process.env.ANTHROPIC_API_KEY); // insert your Anthropic API key here

If you don't use OpenAI for the RAG system, you can skip setupOpenAIKey(), but then basic prompt responses and voice generation will not work.

To use Ollama, just run an Ollama model on the same machine (it uses 'Mistral' by default, but you can change it).

Arguments for tuning the RAG system in the setMainChain() function (see the sketch after this list):

  • model type, via the modelTypes enum (Ollama, OpenAI, Anthropic)
  • file, consisting of src (source file path) and type ('txt' or 'pdf')
  • modelName, to specify a particular model name
  • temperature, set to 0.2 by default
  • baseUrl for Ollama, set to "http://localhost:11434" by default
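
For example, a minimal sketch of an Ollama chain using these options. The readme names both setMainChain() and createRagChain() for this step; the sketch reuses createRagChain() from the install section and assumes the extra options go in the same options object and that the enum member is modelTypes.ollama.

// Sketch only: option placement and the modelTypes.ollama member are assumptions.
const ollamaChain = await createRagChain(modelTypes.ollama, {
    src: "./context/notes.txt",        // hypothetical local context file
    type: 'txt',
    modelName: 'mistral',              // override the default Ollama model if desired
    temperature: 0.2,                  // default value, shown explicitly
    baseUrl: "http://localhost:11434"  // default Ollama endpoint
});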

Getting a prompt response without RAG

Call the async getLLMText() function; it has the following arguments:

  • systemMessage, system prompt
  • prompt, user prompt

It returns the response as a string.
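
A minimal usage sketch, assuming the two arguments are passed positionally as (systemMessage, prompt):

const reply = await getLLMText(
    "You are a helpful assistant.",   // systemMessage
    "Explain RAG in one sentence."    // prompt
);
console.log(reply); // plain string response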

Using RAG

Use a previously created class instance (from the createRagChain() function) and call its getRagAnswer(prompt: string) method; it returns the answer as a string.
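
A minimal usage sketch, reusing the mainChain instance created in the install section:

const answer = await mainChain.getRagAnswer("What does the loaded context say about pricing?");
console.log(answer); // answer string grounded in the provided context file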

RAG and using files

When using RAG, you provide a TXT or PDF file as context. Place the desired files locally and, when calling createRagChain, put the local path to the file in the src field (see the Via npm section for the function format).
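
For instance, a minimal sketch with a PDF context file (the path is a hypothetical placeholder):

mainChain = await createRagChain(modelTypes.openAI, { src: "./docs/manual.pdf", type: 'pdf' });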

Voice generation

If you want to use voice generation, you need to:

  1. Set up the OpenAI API key as described at the start of this readme
  2. Use the Express module to pass the app into the prompt functions

Example of getting the app from Express:

import express from 'express';

// app config, aside from all other needed configs
let appReadyPromiseResolve!: (app: express.Express) => void;
const appReadyPromise = new Promise<express.Express>((resolve) => {
    appReadyPromiseResolve = resolve;
});
config({
    initializeExpress: (app) => {
        appReadyPromiseResolve(app);
    },
});

// getting the app needed for functions of this module
const app = await appReadyPromise;

When ready, call the async getLLMTextAndVoice() or getRagAnswerAndVoice() functions just like their base functions described in the previous sections, but add the following arguments:

  • app, the app obtained from the Express module above
  • voiceModel, set to 'alloy' by default

It returns an object where response is the generated answer and exposedURL is the full path to the generated voice file. It will look like this:

// app comes from the Express example above; voiceModel is optional and defaults to 'alloy'
const { response, exposedURL } = await mainChain.getRagAnswerAndVoice(promptText, app);
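
A minimal sketch of the non-RAG variant; the positions of app and voiceModel after the base (systemMessage, prompt) arguments are an assumption:

// Sketch only: argument order for app and voiceModel is assumed.
const voiced = await getLLMTextAndVoice(
    "You are a helpful assistant.",  // systemMessage
    "Give me a one-line greeting.",  // prompt
    app,                             // Express app from the example above
    'alloy'                          // optional voiceModel, 'alloy' by default
);
console.log(voiced.response, voiced.exposedURL);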

You can switch voice generation from OpenAI to Xenova by turning it on with

setOSVoiceGeneration(true);

This will make all voice generation in the app use Xenova transformers. In that case you may skip the OpenAI key setup (for voice generation purposes).

Image generation

To generate an image based on a prompt, use

const imageLocalUrl = await generateAndSaveImage(prompt, app);

prompt is the string prompt to base the generation on. app should be provided from the Express module; see the Voice generation section for an example. The result of this function is the full URL where the image is located.

You can also use inpainting by setting up the inpaint URL and calling image generation:

setupInpaintUrl("url");
const inpaintImageUrl = await inpaintImage("prompt");

This returns the local URL of the generated image.

Music generation

Music generation is not tied to OpenAI or any other LLM. There are two ways of generating music: Replicate and Xenova transformers.

For Replicate, set up the Replicate API key:

setupReplicateKey(process.env.REPLICATE_API_TOKEN);

And then call

await generateMusic("your prompt");

It returns a URL to the generated music file hosted online.

For Xenova transformers, use

await generateMusicOS("your prompt", app);

It returns a local URL to the generated music. The app is passed into this function just like in the Voice generation section.

Configured AI responses

You can use the following OpenAI methods to use a config object as the system message. They work identically to the original methods, but take a config object instead of the systemMessage parameter.

await getLLmTextConfigured();
await getLLmTextAndVoiceConfigured();
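
A minimal sketch, assuming the config object is passed where systemMessage used to go; the fields of the config object are hypothetical placeholders:

// Sketch only: the shape of the config object is an assumption for illustration.
const assistantConfig = { role: "support assistant", tone: "concise" };
const text = await getLLmTextConfigured(assistantConfig, "How do I reset my password?");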