
language-model-polyfill v0.1.7

A Polyfill for the Prompt API (window.LanguageModel) based on Transformers.js

Downloads: 301

Language Model Polyfill

A polyfill for Chrome's Prompt API that provides window.LanguageModel support in browsers using Transformers.js and WebGPU.

⚠️ Early Preview: This is an early preview release. Only core features are currently implemented.

Features

  • Standards-compliant: Implements the Chrome Prompt API interface
  • On-device inference: Runs entirely in the browser using WebGPU
  • Streaming support: Real-time token generation with promptStreaming()
  • Conversation history: Maintains context across multiple prompts
  • Download progress: Monitor model download progress
  • Abort support: Cancel generation using AbortController
  • KV cache: Optimized token generation with key-value caching

Requirements

  • WebGPU support: Your browser must support WebGPU
  • Storage: ~2.6 GB for the default model (Qwen3-4B-ONNX)
  • Memory: Sufficient RAM/VRAM for model inference

Installation

npm install language-model-polyfill

Usage

Load the polyfill only when window.LanguageModel is not available:

import { LanguageModelPolyfill } from 'language-model-polyfill';

// Apply polyfill if native API is not available
if (!window.LanguageModel) {
  window.LanguageModel = LanguageModelPolyfill;
}

// Now use the standard Prompt API
const session = await window.LanguageModel.create();
const response = await session.prompt("Write a haiku about coding");
console.log(response);

Using a CDN

<script type="module">
  import { LanguageModelPolyfill } from 'https://cdn.jsdelivr.net/npm/language-model-polyfill/+esm';

  if (!('LanguageModel' in window)) {
    window.LanguageModel = LanguageModelPolyfill;
  }

  // Use the Prompt API as normal
  const session = await window.LanguageModel.create();
  // ...
</script>

Using a CDN conditionally

For better performance, load the polyfill only when needed using dynamic imports:

<script type="module">
  // Load polyfill only if native LanguageModel API is not available
  if (!('LanguageModel' in window)) {
    const { LanguageModelPolyfill } = await import('https://cdn.jsdelivr.net/npm/language-model-polyfill/+esm');
    window.LanguageModel = LanguageModelPolyfill;
  }

  // Now use the API (native or polyfilled)
  const session = await window.LanguageModel.create();
</script>

This approach ensures the polyfill is only downloaded and executed in browsers that don't have native support.

Automatic Polyfill

For convenience, you can automatically apply the polyfill:

import { LanguageModelPolyfill } from 'language-model-polyfill';

window.LanguageModel ??= LanguageModelPolyfill;

// Now just use window.LanguageModel
const availability = await window.LanguageModel.availability();
if (availability !== "unavailable") {
  const session = await window.LanguageModel.create();
  const result = await session.prompt("Hello!");
}

API Documentation

This polyfill implements the standard Chrome Prompt API. For complete API documentation, see:

Chrome Prompt API Documentation

Supported Features

  • ✅ LanguageModel.create()
  • ✅ LanguageModel.availability()
  • ✅ session.prompt()
  • ✅ session.promptStreaming()
  • ✅ Streaming with async iterators
  • ✅ Conversation history (initialPrompts)
  • ✅ Download progress monitoring
  • ✅ AbortSignal support
  • ✅ temperature and topK configuration

Not Yet Supported

  • session.append()
  • session.measureInputUsage()
  • Tool calls, structured output, and multimodal input (see Model Information below)

Example

See examples/simple-app for a complete working example.

Model Information

  • Model: onnx-community/Qwen3-4B-ONNX
  • Size: ~2.6 GB
  • Quantization: 4-bit floating point (q4f16)
  • Context Window: 40,000 tokens

Model information:
https://huggingface.co/Qwen/Qwen3-4B

ONNX conversion:
https://huggingface.co/onnx-community/Qwen3-4B-ONNX

In my opinion, Qwen3 is currently the best model for general tasks. However, it has a few limitations: it does not support tool calls, structured output, or multimodal input. For this reason, those features have not yet been implemented in the polyfill. I can well imagine relying on newer models in the future to implement them.

License

MIT © Nico Martin
