# Prompt API Polyfill
This package provides a browser polyfill for the Prompt API `LanguageModel`, supporting dynamic backends:
- Firebase AI Logic (cloud)
- Google Gemini API (cloud)
- OpenAI API (cloud)
- Transformers.js (local after initial model download)
When loaded in the browser, it defines a global `window.LanguageModel`, so you can use the Prompt API shape even in environments where it is not yet natively available.
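For example, a minimal sketch of feature-detecting the API and prompting through the polyfill (this assumes one of the backend configs described below has already been set):

```html
<script type="module">
  // A backend config global (see "Supported Backends" below) must be set first,
  // e.g. window.GEMINI_CONFIG = { apiKey: '...' };
  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const session = await LanguageModel.create();
  console.log(await session.prompt('Hello!'));
</script>
```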
## Supported Backends
### Firebase AI Logic (cloud)

- Uses the `firebase/ai` SDK.
- Select by setting `window.FIREBASE_CONFIG`.
- Model: uses the default if not specified (see `backends/defaults.js`).
### Google Gemini API (cloud)

- Uses the `@google/generative-ai` SDK.
- Select by setting `window.GEMINI_CONFIG`.
- Model: uses the default if not specified (see `backends/defaults.js`).
### OpenAI API (cloud)

- Uses the `openai` SDK.
- Select by setting `window.OPENAI_CONFIG`.
- Model: uses the default if not specified (see `backends/defaults.js`).
### Transformers.js (local after initial model download)

- Uses the `@huggingface/transformers` SDK.
- Select by setting `window.TRANSFORMERS_CONFIG`.
- Model: uses the default if not specified (see `backends/defaults.js`).
## Installation

Install from npm:

```bash
npm install prompt-api-polyfill
```

## Quick start
### Backed by Firebase AI Logic (cloud)
- Create a Firebase project with Generative AI enabled.
- Provide your Firebase config on `window.FIREBASE_CONFIG`.
- Import the polyfill.
<script type="module">
import firebaseConfig from './.env.json' with { type: 'json' };
// Set FIREBASE_CONFIG to select the Firebase backend
window.FIREBASE_CONFIG = firebaseConfig;
if (!('LanguageModel' in window)) {
await import('prompt-api-polyfill');
}
const session = await LanguageModel.create();
</script>Backed by Gemini API (cloud)
- Get a Gemini API Key from Google AI Studio.
- Provide your API Key on `window.GEMINI_CONFIG`.
- Import the polyfill.
<script type="module">
// NOTE: Do not expose real keys in production source code!
// Set GEMINI_CONFIG to select the Gemini backend
window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };
if (!('LanguageModel' in window)) {
await import('prompt-api-polyfill');
}
const session = await LanguageModel.create();
</script>Backed by OpenAI API (cloud)
- Get an OpenAI API Key from the OpenAI Platform.
- Provide your API Key on `window.OPENAI_CONFIG`.
- Import the polyfill.
<script type="module">
// NOTE: Do not expose real keys in production source code!
// Set OPENAI_CONFIG to select the OpenAI backend
window.OPENAI_CONFIG = { apiKey: 'YOUR_OPENAI_API_KEY' };
if (!('LanguageModel' in window)) {
await import('prompt-api-polyfill');
}
const session = await LanguageModel.create();
</script>Backed by Transformers.js (local after initial model download)
- Only a dummy API Key is required (the model runs locally in the browser).
- Provide configuration on `window.TRANSFORMERS_CONFIG`.
- Import the polyfill.
<script type="module">
// Set TRANSFORMERS_CONFIG to select the Transformers.js backend
window.TRANSFORMERS_CONFIG = {
apiKey: 'dummy', // Required for now by the loader
device: 'webgpu', // 'webgpu' or 'cpu'
dtype: 'q4f16', // Quantization level
};
if (!('LanguageModel' in window)) {
await import('prompt-api-polyfill');
}
const session = await LanguageModel.create();
</script>Configuration
### Example (using a JSON config file)
Create a `.env.json` file (see Configuring dot_env.json / .env.json below) and then use it from a browser entry point.
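A minimal sketch, assuming the `.env.json` holds a Gemini API key (see the Quick start above for the other backends):

```html
<script type="module">
  // Load the local config file as a JSON module
  import config from './.env.json' with { type: 'json' };

  // Expose it on the global the polyfill looks for (Gemini in this sketch)
  window.GEMINI_CONFIG = { apiKey: config.apiKey, modelName: config.modelName };

  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }

  const session = await LanguageModel.create();
</script>
```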
### Example based on `index.html` in this repo
The included `index.html` demonstrates the full surface area of the polyfill, including:

- `LanguageModel.create()` with options
- `prompt()` and `promptStreaming()`
- Multimodal inputs (text, image, audio)
- `append()` and `measureInputUsage()`
- Quota handling via `onquotaoverflow`
- `clone()` and `destroy()`

A simplified version of how it is wired up:
<script type="module">
// Set GEMINI_CONFIG to select the Gemini backend
window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };
// Load the polyfill only when necessary
if (!('LanguageModel' in window)) {
await import('prompt-api-polyfill');
}
const controller = new AbortController();
const session = await LanguageModel.create();
try {
const stream = session.promptStreaming('Write me a very long poem', {
signal: controller.signal,
});
for await (const chunk of stream) {
console.log(chunk);
}
} catch (error) {
console.error(error);
}
</script>Configuring dot_env.json / .env.json
This repo ships with a template file:
```jsonc
// dot_env.json
{
  // For Firebase AI Logic:
  "projectId": "",
  "appId": "",
  "modelName": "",

  // For Firebase AI Logic OR Gemini OR OpenAI OR Transformers.js:
  "apiKey": "",

  // For Transformers.js:
  "device": "webgpu",
  "dtype": "q4f16"
}
```

You should treat `dot_env.json` as a template and create a real `.env.json` for your secrets that is not committed to source control.
#### Create `.env.json`
Copy the template:

```bash
cp dot_env.json .env.json
```

Then open `.env.json` and fill in the values.
For Firebase AI Logic:

```json
{
  "apiKey": "YOUR_FIREBASE_WEB_API_KEY",
  "projectId": "your-gcp-project-id",
  "appId": "YOUR_FIREBASE_APP_ID",
  "modelName": "choose-model-for-firebase"
}
```

For Gemini:
```json
{
  "apiKey": "YOUR_GEMINI_API_KEY",
  "modelName": "choose-model-for-gemini"
}
```

For OpenAI:
```json
{
  "apiKey": "YOUR_OPENAI_API_KEY",
  "modelName": "choose-model-for-openai"
}
```

For Transformers.js:
```json
{
  "apiKey": "dummy",
  "modelName": "onnx-community/gemma-3-1b-it-ONNX-GQA",
  "device": "webgpu",
  "dtype": "q4f16"
}
```

#### Field-by-field explanation
- `apiKey`:
  - Firebase AI Logic: your Firebase Web API key.
  - Gemini: your Gemini API key.
  - OpenAI: your OpenAI API key.
  - Transformers.js: use `"dummy"`.
- `projectId` / `appId`: Firebase AI Logic only.
- `device`: Transformers.js only. Either `"webgpu"` or `"cpu"`.
- `dtype`: Transformers.js only. Quantization level (e.g., `"q4f16"`).
- `modelName` (optional): The model ID to use. If not provided, the polyfill uses the defaults defined in `backends/defaults.js`.
**Important:** Do not commit a real `.env.json` with production credentials to source control. Use `dot_env.json` as the committed template and keep `.env.json` local.
### Wiring the config into the polyfill
Once `.env.json` is filled out, you can import it and expose it to the polyfill. See the Quick start examples above. For Transformers.js, ensure you set `window.TRANSFORMERS_CONFIG`, as sketched below.
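A minimal sketch of that wiring for Transformers.js, assuming the `.env.json` fields shown above:

```html
<script type="module">
  import config from './.env.json' with { type: 'json' };

  // Map the .env.json fields onto the global the polyfill looks for
  window.TRANSFORMERS_CONFIG = {
    apiKey: config.apiKey, // "dummy" for Transformers.js
    modelName: config.modelName,
    device: config.device, // 'webgpu' or 'cpu'
    dtype: config.dtype, // quantization level, e.g. 'q4f16'
  };

  if (!('LanguageModel' in window)) {
    await import('prompt-api-polyfill');
  }
</script>
```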
## API surface
Once the polyfill is loaded and `window.LanguageModel` is available, you can use it as described in the Prompt API documentation.

For a complete, end-to-end example, see the `index.html` file in this directory.
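As a quick orientation, here is a hedged sketch of the calls listed above; names follow the Prompt API explainer, and exact behavior depends on the selected backend:

```js
// Sketch only; assumes the polyfill (or a native Prompt API) is already loaded.
const session = await LanguageModel.create({
  initialPrompts: [{ role: 'system', content: 'You are a concise assistant.' }],
});

// One-shot prompt
console.log(await session.prompt('Summarize the Prompt API in one sentence.'));

// Streaming prompt
for await (const chunk of session.promptStreaming('Write a haiku about polyfills.')) {
  console.log(chunk);
}

// Append history without generating, then check token usage against the quota
await session.append([{ role: 'user', content: 'My name is Ada.' }]);
console.log(await session.measureInputUsage('What is my name?'), session.inputQuota);

// Quota handling, cloning, and cleanup
session.onquotaoverflow = () => console.warn('Oldest context was evicted');
const fork = await session.clone();
session.destroy();
```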
## Running the demo locally
Install dependencies:

```bash
npm install
```

Copy and fill in your config:

```bash
cp dot_env.json .env.json
```

Serve `index.html`:

```bash
npm start
```
You should see network requests to the selected backend in the logs.
## Testing
The project includes a comprehensive test suite that runs in a headless browser.
### Running Browser Tests
Uses Playwright to run tests in a real Chromium instance. This is the recommended way to verify environmental fidelity and multimodal support.

```bash
npm run test:browser
```

To see the browser and DevTools while testing, you can modify `vitest.browser.config.js` to set `headless: false`.
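A minimal sketch of what that change could look like, assuming a standard Vitest browser-mode configuration with the Playwright provider:

```js
// vitest.browser.config.js (relevant excerpt, assumed shape)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    browser: {
      enabled: true,
      provider: 'playwright',
      name: 'chromium',
      headless: false, // show the browser window and allow opening DevTools
    },
  },
});
```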
## Create your own backend provider
If you want to add your own backend provider, these are the steps to follow.
### Extend the base backend class
Create a new file in the `backends/` directory, for example, `backends/custom.js`. You need to extend the `PolyfillBackend` class and implement the core methods that satisfy the expected interface.
```js
import PolyfillBackend from './base.js';
import { DEFAULT_MODELS } from './defaults.js';

export default class CustomBackend extends PolyfillBackend {
  constructor(config) {
    // config typically comes from a window global (e.g., window.CUSTOM_CONFIG)
    super(config.modelName || DEFAULT_MODELS.custom.modelName);
  }

  // Check if the backend is configured (e.g., API key is present), if given
  // combinations of modelName and options are supported, or, for local models,
  // if the model is available.
  static availability(options) {
    return window.CUSTOM_CONFIG?.apiKey ? 'available' : 'unavailable';
  }

  // Initialize the underlying SDK or API client. With local models, use
  // monitorTarget to report model download progress to the polyfill.
  createSession(options, sessionParams, monitorTarget) {
    // Return the initialized session or client instance
  }

  // Non-streaming prompt execution
  async generateContent(contents) {
    // contents: Array of { role: 'user'|'model', parts: [{ text: string }] }
    // Return: { text: string, usage: number }
  }

  // Streaming prompt execution
  async generateContentStream(contents) {
    // Return: AsyncIterable yielding chunks
  }

  // Token counting for quota/usage tracking
  async countTokens(contents) {
    // Return: total token count (number)
  }
}
```

### Register your backend
The polyfill uses a "First-Match Priority" strategy based on global
configuration. You need to register your backend in the prompt-api-polyfill.js
file by adding it to the static #backends array:
```js
// prompt-api-polyfill.js
static #backends = [
  // ... existing backends
  {
    config: 'CUSTOM_CONFIG', // The global object to look for on `window`
    path: './backends/custom.js',
  },
];
```

### Set a default model
Define the fallback model identity in `backends/defaults.js`. This is used when a user initializes a session without specifying a `modelName`.
```js
// backends/defaults.js
export const DEFAULT_MODELS = {
  // ...
  custom: { modelName: 'custom-model-pro-v1' },
};
```

### Enable local development and testing
The project uses a discovery script (`scripts/list-backends.js`) to generate test matrices. To include your new backend in the test runner, create a `.env-[name].json` file (for example, `.env-custom.json`) in the root directory:
```json
{
  "apiKey": "your-api-key-here",
  "modelName": "custom-model-pro-v1"
}
```

### Verify via Web Platform Tests (WPT)
The final step is ensuring compliance. Because the polyfill is spec-driven, any new backend should pass the official (or tentative) Web Platform Tests:
```bash
npm run test:wpt
```

This verification step ensures that your backend handles things like `AbortSignal`, system prompts, and history formatting exactly as the Prompt API specification expects.
## License
Apache 2.0
