
Promptgun

The simplest, most advanced LLM prompting and agent client library for OpenAI:

  • Type-safe structured output, even when streaming
  • Function calling / tools with inline or reusable definitions
  • Feedback loops: iterate on responses based on validation checks
  • Conversations: trivially run multiple prompts while retaining context
  • Strict JSON mode: a prompting superpower for guaranteed schema compliance
  • Image generation: easy to use, well documented, type-safe
  • Live model docs right in your IDE, updated hourly from the OpenAI website
  • Future: multi-provider support (drop me a message if you need it)

For example:

const restaurants = ai
  .chat("Recommend 3 restaurants in Paris perfect for today's weather")
  .tool('get-weather', s => s.string(), async loc => {
    // your logic, e.g.:
    const res = await fetch(`https://api.weather.com/v1/current?location=${loc}`)
    return res.json()
  })
  .getArray(s => s.object(o => o
    .hasString('name')
    .hasString('cuisine')
    .hasString('weatherReason', 'Why this restaurant fits today\'s weather')
  ))

for await (const restaurant of restaurants) {
  // do stuff with a type-safe restaurant object,
  // each one streamed as soon as it arrives
}

How to use

Data output – streamed

Stream structured data as it arrives:

const restaurantStream = ai
  .chat('Give 5 top bars in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .hasString('address', /* optional hint */ 'Street and address only!')
    )
  )
for await (const restaurant /* type safe */ of restaurantStream) {
  console.log(restaurant)
}

You will see each restaurant logged as soon as the LLM outputs it. Behind the scenes, Promptgun

  • told the LLM what shape its output data should be,
  • parsed that data to JS types,
  • reorganized the stream so that each "event" is a complete element of the requested output array,
  • correctly TypeScript-typed each of those output elements and
  • passed your optional hints

If the output type is not an array, streaming simply yields the accumulated partial parsed JSON ("what has come in so far") as it arrives. The incomplete data is parsed for you as best as possible, even while the underlying JSON is still incomplete.

const parsedPartialJsonStream = ai
  .chat('What is the best bar in Paris?')
  .getObject(o => o
    .hasString('name')
    .hasString('address')
  )
for await (const parsedPartialJson of parsedPartialJsonStream) {
  // do stuff with parsed partial json
}

Data output – single

Simply put await in front of your prompt to make it a single output rather than a streamed one:

const restaurants = await ai
  .chat('Give 5 top restaurants in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .canHaveString('description', 'A 50 character description')
    )
  )
// do stuff with array of type: {name: string, description?: string}[]

Text output – streamed

const stream = ai.chat('How to make bread?')
let recipe = ''
for await (const chunk of stream) {
  recipe += chunk
}

Text output – single

Again, just add await:

const fruit = await ai.chat('What company makes the iPhone?')

Image generation

Promptgun has a simple, type-safe image generation API:

await ai
  .image('A black hole')
  .imageSize('1024x1024') // optional
  .model('gpt-image-1') // optional, default: gpt-image-1
  .toFile('blackhole.png')

This writes the file, but it also returns a reference to that file:

const file = await ai
  .image('A black hole')
  .toFile('blackhole.png')

You can also avoid writing a file altogether and get the byte array directly:

const byteArray = await ai.image('A black hole')
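
For instance, you can persist or post-process the bytes yourself. A minimal sketch, assuming the returned value is a Node-compatible Uint8Array:

import { writeFile } from 'node:fs/promises'

const byteArray = await ai.image('A black hole')
await writeFile('blackhole.png', byteArray) // write the raw bytes yourself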

System and User Prompts

By default, .chat() with a string creates a user message. To add system instructions, pass message objects:

const response = await ai
  .chat(
    {system: 'You are a helpful assistant that speaks like a pirate'},
    {user: 'Tell me about the weather'}
  )

This works with structured output too:

const response = await ai
  .chat(
    {system: 'You are a JSON API that returns restaurant data'},
    {user: 'I want Italian restaurants in Rome'}
  )
  .getObject(o => o
    .hasString('name')
    .hasString('address')
  )

Conversations

With ai.createConversation() you create a dedicated client for a single conversation. Every prompt you run is appended to the message history, so you keep the context without having to re-add earlier messages each time you make a new prompt.

const conversation = ai.createConversation()
await conversation
  .chat('Give 5 top bars in London')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .hasString('address', /* optional hint */ 'Street and address only!')
    )
  )
await conversation
  .chat('Give 5 more')
  .getArray(d => d
    .object(obj => obj
      .hasString('name')
      .hasString('address', /* optional hint */ 'Street and address only!')
    )
  )
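
Because the history is retained, a follow-up prompt can refer back to earlier answers, for example as a plain text response:

const cheapest = await conversation.chat('Which of those bars is the cheapest?')
// "those" resolves against the bars returned by the earlier prompts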

Response Iteration with .check()

Iterate on AI responses by providing feedback for corrections. The AI will automatically retry with your feedback until validation passes or max attempts are reached.

Basic Example

const password = await ai
  .chat("Generate a secure password")
  .getObject(o => o.hasString('pwd'))
  .check(({pwd}) => {
    if (pwd.length < 8) return 'Password must be at least 8 characters'
    if (!/[A-Z]/.test(pwd)) return 'Password must contain uppercase letter'
    if (!/[0-9]/.test(pwd)) return 'Password must contain a number'
    // Return nothing/null/undefined when validation passes
  }, 5) // Allow 5 attempts (default: 10)

Multiple Checks

You can chain multiple .check() calls; each maintains its own attempt counter:

const user = await ai
  .chat("Get user info for [email protected]")
  .getObject(o => o
    .hasString('email')
    .hasNumber('age')
    .hasString('country')
  )
  .check(({email}) => {
    if (!email.includes('@')) {
      return 'Email must be valid'
    }
  }, 3)
  .check(({age}) => {
    if (age < 0 || age > 150) {
      return 'Age must be between 0 and 150'
    }
  }, 5)

Error Handling

You can return errors as strings or throw them:

const result = await ai
  .chat("Calculate the answer")
  .getObject(o => o.hasNumber('result'))
  .check(({result}) => {
    // Option 1: Return error string - triggers feedback loop with AI
    if (result < 0) return 'Result must be positive'

    // Option 2: Throw error - IMMEDIATELY propagates without retry
    if (result > 1000) {
      throw new Error('Result too large - this error propagates immediately')
    }

    // Option 3: Return array of errors - triggers feedback loop with AI
    const errors = []
    if (result === 0) errors.push('Result cannot be zero')
    return errors
  })

Important Notes

  • Exception propagation: any exception thrown inside the check callback propagates unaltered and immediately, bypassing the retry mechanism (attemptsPerCall). Use this for critical validation failures that shouldn't be retried.
  • Exhausted check attempts: when the maximum number of check attempts is reached, a CheckFailedTooManyTimes error is thrown immediately, without triggering the attemptsPerCall retry mechanism (see the sketch after this list)
  • Independent counters: each .check() has its own attempt counter, separate from attemptsPerCall
  • Text responses: .check() works with both JSON (.getObject(), .getArray()) and text responses
  • Feedback loop: failed validations (returned as strings or arrays) are sent back to the AI as feedback, allowing it to correct itself
  • Error vs feedback: throwing an exception exits immediately; returning a string gives the AI feedback to retry with
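
For completeness, here is a minimal sketch of handling an exhausted check. The CheckFailedTooManyTimes name comes from the notes above; the import path is an assumption:

import { CheckFailedTooManyTimes } from 'promptgun' // assumed export

try {
  const answer = await ai
    .chat('Calculate the answer')
    .getObject(o => o.hasNumber('result'))
    .check(({result}) => {
      if (result < 0) return 'Result must be positive'
    }, 3)
} catch (err) {
  if (err instanceof CheckFailedTooManyTimes) {
    // all 3 check attempts produced an invalid result
  } else {
    throw err // exceptions thrown inside the callback arrive here unaltered
  }
}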

Tools

Inline Tool Definitions

Tools can be added directly using .tool():

const result = await ai
  .chat("What's the weather in Paris?")
  .tool('get-weather', s => s.string(), async location => {
    const res = await fetch(`https://api.weather.com/v1/current?location=${location}`)
    return res.json()
  })

Reusable Tool Definitions

For tools used across multiple prompts, define them once with aiTool():

const getWeather = aiTool('get-weather', s => s.string(), async location => {
  const res = await fetch(`https://api.weather.com/v1/current?location=${location}`)
  return res.json()
})

// Use in multiple prompts
const parisWeather = await ai
  .chat("What's the weather in Paris?")
  .tools(getWeather)

const londonWeather = await ai
  .chat("What's the weather in London?")
  .tools(getWeather)

You can pass multiple tools at once using .tools() (getFlights and getHotels here would be defined with aiTool(), like getWeather above):

await ai
  .chat("Plan a trip")
  .tools(getWeather, getFlights, getHotels)

Or add them one by one with .tool():

await ai
  .chat("Plan a trip")
  .tool(getWeather)
  .tool(getFlights)
  .tool(getHotels)
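
The parameter schema uses the same builder as structured output, so tools are not limited to string parameters. A sketch with an object parameter; the endpoint is hypothetical, and it is an assumption that the tool builder accepts the same object schema shown for .getObject():

const convertCurrency = aiTool(
  'convert-currency',
  s => s.object(o => o
    .hasString('from')
    .hasString('to')
    .hasNumber('amount')
  ),
  async ({from, to, amount}) => {
    // hypothetical endpoint, replace with a real service
    const res = await fetch(`https://api.example.com/convert?from=${from}&to=${to}&amount=${amount}`)
    return res.json()
  }
)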

Strict JSON Mode

Enable OpenAI's strict JSON mode for guaranteed type-safe responses using .strict():

const user = await ai
  .chat("Get user info")
  .getObject(o => o
    .hasString('name')
    .hasNumber('age')
  )
  .strict()

Important: Nullable vs Optional Properties

In strict mode, you cannot use optional properties (TypeScript's ?). Instead, use nullable types:

// ❌ Not allowed in strict mode - optional properties
const user = await ai
  .chat("Get user info")
  .getObject(o => o
    .hasString('name')
    .canHaveString('nickname')  // ❌ This is {nickname?: string}
  )
  .strict()

// ✅ Allowed in strict mode - nullable properties
const user = await ai
  .chat("Get user info")
  .getObject(o => o
    .hasString('name')
    .has('nickname', s => s.string().orNull())  // ✅ This is {nickname: string | null}
  )
  .strict()

The difference:

  • Optional (canHaveString): Property may or may not exist → {nickname?: string} → Not allowed in strict mode
  • Nullable (string().orNull()): Property always exists but value can be null → {nickname: string | null} → Allowed in strict mode

Model Selection

By default, Promptgun uses GPT-5. Switch models using either a string:

// Using a string
const result = await ai
  .chat('Hello')
  .model('gpt-5')

or the AiModel enum:

// Using the enum - get in-place documentation!
const result = await ai
  .chat('Hello')
  .model(AiModel.GPT_5)

Why use the enum? The AiModel enum provides in-place documentation for each model, updated hourly from the OpenAI website. This gives you real-time information about capabilities, context windows, and pricing right in your IDE.

Other options

Using the flex tier

OpenAI's flex service tier trades slower responses for lower cost; enable it per prompt with .flex():

await ai
  .chat('Describe a swine')
  .flex()

Setup

Before running any prompts, call:

setupAI({
  promptGridApiKey: '<your PromptGrid API key>', // optional
  apiKeys: {
    openai: '<Your OpenAI API key>', // optional
    // etc
  },
  attemptsPerCall: 3 // optional, default 1
})

Get your PromptGrid API key for free at PromptGrid.ai.

Feedback and help

Post at our feedback and help board. We love to hear from you 👌☀️❤️.

Terms of use

If you supply promptGridApiKey (which is completely optional), some metadata about your prompt code is saved to your PromptGrid account, including the code of the callback you provide to the "completeChat" clause of a Promptgun call and the locations in your code where you call your prompts. The content of individual prompt calls is not stored in PromptGrid unless you opt in at promptgrid.ai/prompts. You can delete any data stored on PromptGrid at any time.