
@bedrockio/ai

v0.9.3

Published

Bedrock wrapper for common AI chatbots.

Downloads

1,056


@bedrockio/ai

This package provides a thin wrapper for common AI chatbots. It standardizes usage to allow different platforms to be swapped easily and allows templated usage.

Install

yarn add @bedrockio/ai

Usage

import yd from '@bedrockio/yada';
import { createClient } from '@bedrockio/ai';

const client = createClient({
  // Directory containing templates
  templates: './test/templates',
  // Platform: openai|gpt|anthropic|claude
  platform: 'openai',
  // Your API key
  apiKey: 'my-api-key',
});

// Get a one time response.
const response = await client.prompt({
  // The template to use. If no template is found will
  // use this string as the template.
  template: 'classify-fruits',
  // The form of output. May be raw|text|messages|json.
  // Default is "text".
  output: 'json',

  // A yada schema (or any JSON schema) may be passed
  // here to define structured output.
  schema: yd.object({
    name: yd.string(),
  }),

  // All other variables will be
  // interpolated into the template.
  text: 'a long yellow fruit',
  fruit: 'banana, apple, pear',
});

Streaming

Responses may be streamed:

// Stream the results
const stream = await client.stream({
  template: 'classify-fruits',

  // See below.
  extractMessages: 'text',
});

// Will return an AsyncIterator
for await (const event of stream) {
  console.info(event.text);
}

Event types:

  • start - Response has been initiated. This event also contains an id field that can be passed back in as prevResponseId (OpenAI/Grok only).
  • stop - Response has finished. Contains the id field and usage data.
  • delta - Main text delta event, emitted when a new token is output.
  • done - Text has stopped.
  • extract:delta - Used with extractMessages (see below).
  • extract:done - Used with extractMessages (see below).
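To make the event types concrete, here is a minimal sketch of a handler that dispatches on them. The exact event payload shapes (`event.id`, `event.text`, `event.usage`) are assumptions based on the descriptions above, not the package's documented interfaces:

```javascript
// Hypothetical event shapes, inferred from the event list above.
function handleEvent(event, state) {
  switch (event.type) {
    case 'start':
      // Keep the id so it can be passed back as prevResponseId later.
      state.responseId = event.id;
      break;
    case 'delta':
      // Accumulate the main text output token by token.
      state.text += event.text;
      break;
    case 'done':
      state.textDone = true;
      break;
    case 'stop':
      state.usage = event.usage;
      break;
  }
  return state;
}

// Replay a small, hand-written sequence of events.
const state = { text: '', textDone: false };
for (const event of [
  { type: 'start', id: 'resp_1' },
  { type: 'delta', text: 'ba' },
  { type: 'delta', text: 'nana' },
  { type: 'done' },
  { type: 'stop', id: 'resp_1', usage: { tokens: 12 } },
]) {
  handleEvent(event, state);
}
console.log(state.text); // 'banana'
```

In real usage the same handler body would sit inside the `for await` loop over the stream.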

Streaming Structured Data

Often you want prompt responses to be structured JSON, however you still want to stream the user-facing message. In this case use the extractMessages option to define the key of the structured output you want to stream. When this is defined you receive additional extract:delta and extract:done events. These will stream even as the partial JSON data comes in.
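To illustrate what the extract events enable, here is a rough sketch of pulling a single string field out of incomplete JSON as chunks arrive. This is illustrative only; the package's actual extraction logic is not shown here, and the key name `message` is a made-up example:

```javascript
// Naive illustration: read the partial value of a string key
// out of an incomplete JSON buffer.
function extractPartial(json, key) {
  const marker = `"${key}": "`;
  const start = json.indexOf(marker);
  if (start === -1) return '';
  const rest = json.slice(start + marker.length);
  // The closing quote may not have arrived yet.
  const end = rest.indexOf('"');
  return end === -1 ? rest : rest.slice(0, end);
}

let buffer = '';
let emitted = '';
for (const chunk of ['{"type": "fruit", "mess', 'age": "A ban', 'ana is yellow."}']) {
  buffer += chunk;
  const value = extractPartial(buffer, 'message');
  // Emit only the new portion, analogous to an extract:delta event.
  if (value.length > emitted.length) {
    console.log(value.slice(emitted.length));
    emitted = value;
  }
}
```

The user-facing message streams out incrementally even though the surrounding JSON object is still incomplete, which is the behavior extractMessages provides.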

Streaming Notes

Note that in addition to streaming partial data as above, there are two other valid approaches:

  1. Send two prompts, one for the message and one for the extracted data. This works, however there are edge cases where the responses need to correlate. For example, when asking the user a "next question" in text but extracting the type of question in data, the results may not match depending on the LLM's temperament. This will also increase token usage.

  2. Use function calls, ie "tools". This approach seems more appropriate as function calls stream separately from text output and can easily be multiplexed, however at the time of this writing there seem to be issues with ensuring that the LLM actually uses the correct tools, and results have been flaky. Depending on the approach this may also increase token usage.

For the reasons above, the most reliable approach to streaming structured data is currently using extractMessages to stream the partial JSON response.

Templates

Template files must be markdown (.md) and live in your templates directory. They will be passed as instructions, or the equivalent of the developer role. For example, a classify-fruits.md template might look like:

Which fruit do you think the following input most closely resembles?

Please provide your response as a JSON object containing:

- "name" {string} - The name of the fruit.
- "reason" {string} - The reason you believe it matches.
- "certainty" {number} - Your confidence in your answer from 0 to 1.

```
{{text}}
```
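The `{{text}}` placeholder above suggests simple variable interpolation. A minimal sketch of how placeholders could be filled is below; the package's actual template engine may differ, and `interpolate` is a hypothetical helper, not part of the API:

```javascript
// Minimal placeholder interpolation, assuming {{name}} syntax.
function interpolate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => {
    return name in vars ? String(vars[name]) : match;
  });
}

const template = 'Which fruit does this resemble?\n\n{{text}}';
console.log(interpolate(template, { text: 'a long yellow fruit' }));
```

Unknown placeholders are left intact here, which makes missing variables easy to spot in the rendered prompt.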

Platforms

Currently supported platforms:

  • OpenAI (ChatGPT)
  • Anthropic (Claude)
  • xAI (Grok)

Models

Available models can be listed with:

const models = await client.models();