
baby-prompts

v2.2.4


Providing basic prompt techniques and chains for OpenAI's response API


Baby Prompts


A NodeJS library providing super basic prompt techniques and chains for OpenAI's response API.


Overview

👉 Baby Prompts is a NodeJS library that lets you easily apply different prompt techniques (see below) and choose the output format. It also supports streaming and conversational history.

A full list of examples is available here, and there is a tutorial article covering the setup and explaining the different prompting techniques.

Installation

Install the library by typing

npm install baby-prompts

To run the examples, you also need to have an account with OpenAI and sufficient credits in your account to run the models.

You then need to create a .env file in your project's root folder, with the variable OPENAI_API_KEY set to your own OpenAI API key. You can find an API key here.

⚠️ This is a library for NodeJS and relies on having a .env file with the OPENAI_API_KEY in it. It won't work directly from the browser.
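For reference, a minimal .env file looks like this (the value shown is a placeholder, not a real key):

```shell
# .env — placed in the project's root folder
OPENAI_API_KEY=your-api-key-here
```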

Prompt techniques examples

Before you can invoke any prompt, you need to configure your prompt by choosing a model.

// Import the necessary features
import {
  getPrompt,
  promptChain,
  invoke,
  outputText,
  user,
  assistant,
  developer,
  tap,
  json,
  withJsonFormatter,
  withOptions,
  withPreviousResponse,
} from 'baby-prompts';

// Get the prompt function with default settings...
const prompt = getPrompt();

// ...or select custom options instead:
// const prompt = getPrompt({
//   model: 'gpt-5',
//   reasoning: { effort: 'low' },
//   stream: false,
// });

The default model is gpt-4.1-mini.

Follow the OpenAI documentation for choosing models and options.

Here are some examples of how to use different prompting techniques:

  • Zero-shot prompting
  • Few-shot prompting
  • Prompt chains

1. Zero-shot prompting

Here is a simple example of invoking a prompt.

prompt(developer('Be a funny assistant'), 'Tell me a joke') // setup the prompt
  .pipe(invoke) // execute it
  .pipe(outputText) // extract the output_text from the response
  .pipe(tap); // print it

Note that the pipe method is essentially a Promise's then call. If you prefer async/await, here is the same code.

const result = await invoke(
  prompt(developer('Be a funny assistant'), 'Tell me a joke')
);
console.log(outputText(result)); // or result.output_text

You can find a full working example here.
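The pipe-as-then equivalence can be sketched with plain Promises; fakeInvoke and fakeOutputText below are hypothetical stand-ins for the library's invoke and outputText, not part of its API:

```javascript
// A minimal sketch of chaining transformations on a Promise, where each
// step receives the previous step's resolved value.
const fakeInvoke = (promptText) =>
  Promise.resolve({ output_text: `echo: ${promptText}` });
const fakeOutputText = (response) => response.output_text;

fakeInvoke('Tell me a joke')
  .then(fakeOutputText) // same role as .pipe(outputText)
  .then(console.log); // prints "echo: Tell me a joke"
```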

2. Few-shot prompting

Multiple messages can be combined before invocation, using the user (default), developer, or assistant roles. Note that this is still a single prompt: invoke is called only once.

prompt(
  developer('Ask a question following this style'),
  user('How are you?'),
  assistant('How are you, human?'),
  user('What time is it?'),
  assistant('What time is it, human?'),
  user('Where are you from?'),
  assistant('Where are you from, human?'),
  user('What is your age?') // expected: "What is your age, human?"
)
  .pipe(invoke)
  .pipe(outputText)
  .pipe(console.log);

You can find a full working example here.

3. Prompt chaining

With prompt chaining, you can chain the output of a prompt directly into the input of the next one.

promptChain(
  prompt(user('What is 1+1?')),
  prompt(user('Say that without using numbers.')),
  prompt(user('Add an emoji at the end.'))
)
  .pipe(outputText)
  .pipe(console.log);

Note that when using chains you do not need to call the invoke method manually; the promptChain function calls it for you.

For more complex examples, involving the usage of the tap function and formatted output, look at this.

Streaming

You can enable streaming by passing the stream: true option when you first create a prompt, or per prompt with withOptions.

const stream = await prompt(user('Write a paragraph about the ocean'))
  .pipe(withOptions({ stream: true })) // require to have a streamed response
  .pipe(invoke);

for await (const event of stream) {
  if (event.type == 'response.output_text.delta')
    process.stdout.write(event.delta);
}

You can find a full working example here.

Structured output

You can structure the output of a prompt just before invocation. For that, you need to use the zod library, which is already included as a dependency.

import { z } from 'zod';

const Person = z.object({
  name: z.string(),
  age: z.number(),
});

const PeopleList = z.object({
  people: z.array(Person).length(10), // exactly 10 people
});

// prompt
prompt(
  developer('You are a helpful assistant'), //
  'Write a list of 10 people with name and age'
)
  .pipe(withJsonFormatter(PeopleList)) // format the output
  .pipe(invoke)
  .pipe(json)
  .pipe(console.log);

Here is a full working example.

Conversational history

You can preserve information across multiple messages or turns in a conversation by passing a response as an input to the next prompt using the withPreviousResponse function.

Here are a couple of examples.

// Get the first prompt
const res = await prompt('My name is Jon Snow.').pipe(invoke);

// Using a single prompt
prompt('What is my name?')
  .then(withPreviousResponse(res)) // pass in the previous response
  .then(invoke)
  .then(outputText)
  .then(console.log); // "Jon Snow"

// Using a chain
promptChain(
  prompt('What is my name?').then(withPreviousResponse(res)), // pass in the previous response
  prompt('Add an emoji to my name.')
)
  .then(outputText)
  .then(console.log); // "Jon Snow 🐺"

Here is the full example.

Notes for version 2.2.x

There are a few breaking changes from version 2.2.x onward.

The function getPrompt now takes only the default options for tweaking the model as its parameter.

The function jsonFormatter has been renamed to withJsonFormatter for consistency with other similar functions.

Credits

Developed by MAKinteract with ♥️.