
prompt-api-polyfill

v1.0.1


Polyfill for the Prompt API (`LanguageModel`) backed by Firebase AI Logic, Gemini API, OpenAI API, or Transformers.js.


Prompt API Polyfill

This package provides a browser polyfill for the Prompt API's LanguageModel, with support for dynamically selected backends:

  • Firebase AI Logic (cloud)
  • Google Gemini API (cloud)
  • OpenAI API (cloud)
  • Transformers.js (local after initial model download)

When loaded in the browser, it defines a global:

window.LanguageModel;

so you can use the Prompt API shape even in environments where it is not yet natively available.

Supported Backends

Firebase AI Logic (cloud)

  • Uses: firebase/ai SDK.
  • Select by setting: window.FIREBASE_CONFIG.
  • Model: Uses default if not specified (see backends/defaults.js).

Google Gemini API (cloud)

  • Uses: @google/generative-ai SDK.
  • Select by setting: window.GEMINI_CONFIG.
  • Model: Uses default if not specified (see backends/defaults.js).

OpenAI API (cloud)

  • Uses: openai SDK.
  • Select by setting: window.OPENAI_CONFIG.
  • Model: Uses default if not specified (see backends/defaults.js).

Transformers.js (local after initial model download)

  • Uses: @huggingface/transformers SDK.
  • Select by setting: window.TRANSFORMERS_CONFIG.
  • Model: Uses default if not specified (see backends/defaults.js).

Installation

Install from npm:

npm install prompt-api-polyfill

Quick start

Backed by Firebase AI Logic (cloud)

  1. Create a Firebase project with Generative AI enabled.
  2. Provide your Firebase config on window.FIREBASE_CONFIG.
  3. Import the polyfill.
<script type="module">
  import firebaseConfig from './.env.json' with { type: 'json' };

  // Set FIREBASE_CONFIG to select the Firebase backend
  window.FIREBASE_CONFIG = firebaseConfig;

  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const session = await LanguageModel.create();
</script>

Backed by Gemini API (cloud)

  1. Get a Gemini API Key from Google AI Studio.
  2. Provide your API Key on window.GEMINI_CONFIG.
  3. Import the polyfill.
<script type="module">
  // NOTE: Do not expose real keys in production source code!
  // Set GEMINI_CONFIG to select the Gemini backend
  window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };

  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const session = await LanguageModel.create();
</script>

Backed by OpenAI API (cloud)

  1. Get an OpenAI API Key from the OpenAI Platform.
  2. Provide your API Key on window.OPENAI_CONFIG.
  3. Import the polyfill.
<script type="module">
  // NOTE: Do not expose real keys in production source code!
  // Set OPENAI_CONFIG to select the OpenAI backend
  window.OPENAI_CONFIG = { apiKey: 'YOUR_OPENAI_API_KEY' };

  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const session = await LanguageModel.create();
</script>

Backed by Transformers.js (local after initial model download)

  1. No real API key is needed (the model runs locally in the browser), but the loader still requires a dummy value.
  2. Provide configuration on window.TRANSFORMERS_CONFIG.
  3. Import the polyfill.
<script type="module">
  // Set TRANSFORMERS_CONFIG to select the Transformers.js backend
  window.TRANSFORMERS_CONFIG = {
    apiKey: 'dummy', // Required for now by the loader
    device: 'webgpu', // 'webgpu' or 'cpu'
    dtype: 'q4f16', // Quantization level
  };

  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const session = await LanguageModel.create();
</script>

Configuration

Example (using a JSON config file)

Create a .env.json file (see Configuring dot_env.json / .env.json) and then use it from a browser entry point.

Example based on index.html in this repo

The included index.html demonstrates the full surface area of the polyfill, including:

  • LanguageModel.create() with options
  • prompt() and promptStreaming()
  • Multimodal inputs (text, image, audio)
  • append() and measureInputUsage()
  • Quota handling via onquotaoverflow
  • clone() and destroy()

A simplified version of how it is wired up:

<script type="module">
  // Set GEMINI_CONFIG to select the Gemini backend
  window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };

  // Load the polyfill only when necessary
  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const controller = new AbortController();
  const session = await LanguageModel.create();

  try {
    const stream = session.promptStreaming('Write me a very long poem', {
      signal: controller.signal,
    });

    for await (const chunk of stream) {
      console.log(chunk);
    }
  } catch (error) {
    console.error(error);
  }
</script>

Configuring dot_env.json / .env.json

This repo ships with a template file:

// dot_env.json
{
  // For Firebase AI Logic:
  "projectId": "",
  "appId": "",
  "modelName": "",

  // For Firebase AI Logic OR Gemini OR OpenAI OR Transformers.js:
  "apiKey": "",

  // For Transformers.js:
  "device": "webgpu",
  "dtype": "q4f16",
}

Treat dot_env.json as a committed template and create a real .env.json for your secrets that is never committed. (The comments above are annotations only; a real .env.json must be valid JSON and cannot contain them.)

Create .env.json

Copy the template:

cp dot_env.json .env.json

Then open .env.json and fill in the values.

For Firebase AI Logic:

{
  "apiKey": "YOUR_FIREBASE_WEB_API_KEY",
  "projectId": "your-gcp-project-id",
  "appId": "YOUR_FIREBASE_APP_ID",
  "modelName": "choose-model-for-firebase"
}

For Gemini:

{
  "apiKey": "YOUR_GEMINI_CONFIG",
  "modelName": "choose-model-for-gemini"
}

For OpenAI:

{
  "apiKey": "YOUR_OPENAI_API_KEY",
  "modelName": "choose-model-for-openai"
}

For Transformers.js:

{
  "apiKey": "dummy",
  "modelName": "onnx-community/gemma-3-1b-it-ONNX-GQA",
  "device": "webgpu",
  "dtype": "q4f16"
}

Field-by-field explanation

  • apiKey:

    • Firebase AI Logic: Your Firebase Web API key.
    • Gemini: Your Gemini API Key.
    • OpenAI: Your OpenAI API Key.
    • Transformers.js: Use "dummy".
  • projectId / appId: Firebase AI Logic only.

  • device: Transformers.js only. Either "webgpu" or "cpu".

  • dtype: Transformers.js only. Quantization level (e.g., "q4f16").

  • modelName (optional): The model ID to use. If not provided, the polyfill uses the defaults defined in backends/defaults.js.

Important: Do not commit a real .env.json with production credentials to source control. Use dot_env.json as the committed template and keep .env.json local.
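As a summary of the field matrix above, here is a hypothetical helper (not part of the package) that reports which required fields are missing for a given backend; the backend names and field lists mirror this README, not the package's internals:

```javascript
// Hypothetical config checker; backend names and required fields follow
// the field-by-field explanation in this README.
function requiredFields(backend) {
  switch (backend) {
    case 'firebase': return ['apiKey', 'projectId', 'appId'];
    case 'gemini':
    case 'openai': return ['apiKey'];
    case 'transformers': return ['apiKey', 'device', 'dtype'];
    default: throw new Error(`Unknown backend: ${backend}`);
  }
}

// Returns the names of required fields that are absent or empty.
function missingFields(backend, config) {
  return requiredFields(backend).filter((field) => !config[field]);
}

console.log(missingFields('firebase', { apiKey: 'x' })); // ['projectId', 'appId']
```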

Wiring the config into the polyfill

Once .env.json is filled out, you can import it and expose it to the polyfill. See the Quick start examples above. For Transformers.js, ensure you set window.TRANSFORMERS_CONFIG.


API surface

Once the polyfill is loaded and window.LanguageModel is available, you can use it as described in the Prompt API documentation.

For a complete, end-to-end example, see the index.html file in this directory.
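If you want to feature-detect the session object before relying on it, a hypothetical shape check over the methods this README exercises can look like this (the method names follow the Prompt API shape; this helper is not part of the package):

```javascript
// Hypothetical duck-typing check: verifies an object exposes the session
// methods demonstrated in index.html (prompt, streaming, append, usage,
// clone, destroy).
const SESSION_METHODS = [
  'prompt',
  'promptStreaming',
  'append',
  'measureInputUsage',
  'clone',
  'destroy',
];

function looksLikeSession(candidate) {
  return (
    candidate != null &&
    SESSION_METHODS.every((name) => typeof candidate[name] === 'function')
  );
}
```

Once the polyfill is loaded, `looksLikeSession(await LanguageModel.create())` is expected to be true.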


Running the demo locally

  1. Install dependencies:

    npm install
  2. Copy and fill in your config:

    cp dot_env.json .env.json
  3. Serve index.html:

    npm start

You should see requests to the selected backend in the browser's DevTools network log.


Testing

The project includes a comprehensive test suite that runs in a headless browser.

Running Browser Tests

The browser tests use Playwright to run in a real Chromium instance. This is the recommended way to verify real-browser behavior and multimodal support.

npm run test:browser

To see the browser and DevTools while testing, you can modify vitest.browser.config.js to set headless: false.
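A sketch of what that override might look like, assuming the repo's config follows Vitest's standard browser-mode shape (the field names come from Vitest's documentation; the actual contents of vitest.browser.config.js may differ):

```javascript
// vitest.browser.config.js (illustrative); only `headless` changes.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    browser: {
      enabled: true,
      provider: 'playwright',
      name: 'chromium',
      headless: false, // show the Chromium window so you can open DevTools
    },
  },
});
```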


Create your own backend provider

If you want to add your own backend provider, these are the steps to follow.

Extend the base backend class

Create a new file in the backends/ directory, for example, backends/custom.js. You need to extend the PolyfillBackend class and implement the core methods that satisfy the expected interface.

import PolyfillBackend from './base.js';
import { DEFAULT_MODELS } from './defaults.js';

export default class CustomBackend extends PolyfillBackend {
  constructor(config) {
    // config typically comes from a window global (e.g., window.CUSTOM_CONFIG)
    super(config.modelName || DEFAULT_MODELS.custom.modelName);
  }

  // Check whether the backend is configured (e.g., an API key is present),
  // whether the given combination of modelName and options is supported,
  // or, for local models, whether the model is available.
  static availability(options) {
    return window.CUSTOM_CONFIG?.apiKey ? 'available' : 'unavailable';
  }

  // Initialize the underlying SDK or API client. With local models, use
  // monitorTarget to report model download progress to the polyfill.
  createSession(options, sessionParams, monitorTarget) {
    // Return the initialized session or client instance
  }

  // Non-streaming prompt execution
  async generateContent(contents) {
    // contents: Array of { role: 'user'|'model', parts: [{ text: string }] }
    // Return: { text: string, usage: number }
  }

  // Streaming prompt execution
  async generateContentStream(contents) {
    // Return: AsyncIterable yielding chunks
  }

  // Token counting for quota/usage tracking
  async countTokens(contents) {
    // Return: total token count (number)
  }
}

Register your backend

The polyfill uses a "First-Match Priority" strategy based on global configuration. You need to register your backend in the prompt-api-polyfill.js file by adding it to the static #backends array:

// prompt-api-polyfill.js
static #backends = [
  // ... existing backends
  {
    config: 'CUSTOM_CONFIG', // The global object to look for on `window`
    path: './backends/custom.js',
  },
];
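The first-match strategy can be sketched as a pure function: the first entry whose config global is defined wins. The entry order and file paths below are assumptions for illustration; consult the real #backends array in prompt-api-polyfill.js for the actual priority:

```javascript
// Illustrative first-match selection over config globals. The ordering
// and paths here are assumptions; the real priority lives in the
// `#backends` array in prompt-api-polyfill.js.
const BACKENDS = [
  { config: 'FIREBASE_CONFIG', path: './backends/firebase.js' },
  { config: 'GEMINI_CONFIG', path: './backends/gemini.js' },
  { config: 'OPENAI_CONFIG', path: './backends/openai.js' },
  { config: 'TRANSFORMERS_CONFIG', path: './backends/transformers.js' },
  { config: 'CUSTOM_CONFIG', path: './backends/custom.js' },
];

// `globals` stands in for `window`; the first defined config wins.
function selectBackend(globals) {
  return BACKENDS.find((b) => globals[b.config] !== undefined) ?? null;
}

// With both set, the earlier entry wins:
console.log(selectBackend({ FIREBASE_CONFIG: {}, GEMINI_CONFIG: {} }).config);
```

Because selection is first-match, setting two config globals at once silently ignores the lower-priority one; set exactly one per page.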

Set a default model

Define the fallback model identity in backends/defaults.js. This is used when a user initializes a session without specifying a modelName.

// backends/defaults.js
export const DEFAULT_MODELS = {
  // ...
  custom: { modelName: 'custom-model-pro-v1' },
};

Enable local development and testing

The project uses a discovery script (scripts/list-backends.js) to generate test matrices. To include your new backend in the test runner, create a .env-[name].json file (for example, .env-custom.json) in the root directory:

{
  "apiKey": "your-api-key-here",
  "modelName": "custom-model-pro-v1"
}

Verify via Web Platform Tests (WPT)

The final step is ensuring compliance. Because the polyfill is spec-driven, any new backend should pass the official (or tentative) Web Platform Tests:

npm run test:wpt

This verification step ensures that your backend handles things like AbortSignal, system prompts, and history formatting exactly as the Prompt API specification expects.


License

Apache 2.0