# Inference Worker
A generic, extensible Web Worker for running HuggingFace Transformers inference in the browser. Works with any pretrained model, supports WebGPU acceleration, model load progress reporting, cancellation, and optional tool call parsing.
## Installation
```sh
npm install @p0u4a/inference-worker
```

## Quickstart

```ts
import { InferenceClient } from "@p0u4a/inference-worker";
const client = new InferenceClient({
worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
config: {
modelId: "Qwen/Qwen2.5-0.5B-Instruct",
dtype: "q4",
device: "webgpu",
},
onProgress: (p) => {
if (p.status === "progress") {
console.log(`Loading ${p.file}: ${Math.round(p.progress)}%`);
}
},
});
await client.init();
const result = await client.execute({
prompt: "Explain quantum computing in one sentence.",
generationParams: { maxNewTokens: 64 },
});
console.log(result.rawOutput);
client.dispose();
```

## With tool call parsing
The `toolCallParser` runs on the main thread after the worker returns raw output. Define any parsing logic you need:

```ts
interface ToolCall {
name: string;
arguments: Record<string, unknown>;
}
const client = new InferenceClient<ToolCall[]>({
worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
config: {
modelId: "Qwen/Qwen2.5-0.5B-Instruct",
dtype: "q4",
device: "webgpu",
},
toolCallParser: (raw) => {
// Implement your own parsing logic based on the model's output format
const match = raw.match(/\{.*\}/s);
return match ? [JSON.parse(match[0])] : [];
},
});
await client.init();
const result = await client.execute({
prompt: "What is the weather in London?",
systemPrompt: "You can call functions to get information.",
tools: [{ type: "function", function: { name: "get_weather", parameters: { location: { type: "string" } } } }],
generationParams: { maxNewTokens: 128 },
});
// result.toolCalls is typed as ToolCall[]
console.log(result.toolCalls);
```

## Cancellation

```ts
const resultPromise = client.execute({ prompt: "Write a long essay..." });
// Cancel mid-inference
client.cancel();
try {
await resultPromise;
} catch (e) {
if (e instanceof DOMException && e.name === "AbortError") {
console.log("Cancelled");
}
}
```

## Resource cleanup
`InferenceClient` implements `Disposable`, so you can use `using` for automatic cleanup:

```ts
using client = new InferenceClient({ ... });
await client.init();
const result = await client.execute({ prompt: "Hello" });
// Worker is terminated when `client` goes out of scope
```

Or call `dispose()` manually when done.
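If `using` is not available in your toolchain, a `try`/`finally` block gives the same guarantee with a manual `dispose()`. A minimal sketch using only the methods documented below:

```ts
import { InferenceClient } from "@p0u4a/inference-worker";

const client = new InferenceClient({
  worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
  config: { modelId: "Qwen/Qwen2.5-0.5B-Instruct", dtype: "q4", device: "webgpu" },
});

try {
  await client.init();
  const result = await client.execute({ prompt: "Hello" });
  console.log(result.rawOutput);
} finally {
  // Terminate the worker even if init() or execute() throws
  client.dispose();
}
```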
## API
### `InferenceClient<TToolCalls>`
| Member | Description |
| --- | --- |
| constructor(options) | Create a client. Accepts a Worker, URL, or string path for the worker. |
| init() | Load the model. Resolves when ready, rejects on error. |
| execute(options) | Run inference. Returns { rawOutput, toolCalls }. |
| cancel() | Abort the current inference. |
| dispose() | Terminate the worker and release resources. |
| status | Current worker status: "idle" \| "loading" \| "ready" \| "error". |
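For illustration, the three accepted `worker` forms are sketched below. The `{ type: "module" }` worker option and the string path are assumptions about your own setup, not files or options shipped by the package:

```ts
import { InferenceClient } from "@p0u4a/inference-worker";

const config = { modelId: "Qwen/Qwen2.5-0.5B-Instruct", dtype: "q4", device: "webgpu" };

// 1. Pass a Worker you constructed yourself
//    (assumes the worker entry is an ES module, hence { type: "module" })
const fromWorker = new InferenceClient({
  worker: new Worker(new URL("@p0u4a/inference-worker/worker", import.meta.url), { type: "module" }),
  config,
});

// 2. Pass a URL and let the client create the Worker (as in the Quickstart)
const fromUrl = new InferenceClient({
  worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
  config,
});

// 3. Pass a string path to a worker script your build emits (hypothetical path)
const fromPath = new InferenceClient({
  worker: "/workers/inference-worker.js",
  config,
});
```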
### `WorkerInitConfig`
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| modelId | string | required | HuggingFace model ID or path |
| modelClass | string | "AutoModelForCausalLM" | Auto model class name |
| tokenizerClass | string | "AutoTokenizer" | Tokenizer class name |
| dtype | string | - | Quantization type ("q4", "fp16", "q8") |
| device | string | - | Compute device ("webgpu", "wasm", "cpu") |
| maxRetryAttempts | number | 3 | Retry count for network errors |
| baseRetryDelayMs | number | 1000 | Base delay for exponential backoff |
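As a sketch, a fully spelled-out config looks like the following. Only `modelId` is required; the comments note which values are the documented defaults and which are example overrides:

```ts
import { InferenceClient } from "@p0u4a/inference-worker";

const client = new InferenceClient({
  worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
  config: {
    modelId: "Qwen/Qwen2.5-0.5B-Instruct", // required: HuggingFace model ID or path
    modelClass: "AutoModelForCausalLM",    // default
    tokenizerClass: "AutoTokenizer",       // default
    dtype: "q4",                           // "q4" | "fp16" | "q8"
    device: "webgpu",                      // "webgpu" | "wasm" | "cpu"
    maxRetryAttempts: 5,                   // override (default 3)
    baseRetryDelayMs: 500,                 // override (default 1000)
  },
});
```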
### `ExecuteOptions`
| Field | Type | Description |
| --- | --- | --- |
| prompt | string | The user prompt (appended as the final message) |
| messages | ChatMessage[] | Conversation history |
| systemPrompt | string | System instruction |
| tools | unknown[] | Tool schemas for the chat template |
| generationParams | GenerationParams | Generation config (maxNewTokens, temperature, etc.) |
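The Quickstart only uses `prompt`; for multi-turn use you can also pass `messages`. The `ChatMessage` shape is not spelled out in this README, so the `{ role, content }` objects below are an assumption to check against the package's type definitions:

```ts
import { InferenceClient } from "@p0u4a/inference-worker";

const client = new InferenceClient({
  worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
  config: { modelId: "Qwen/Qwen2.5-0.5B-Instruct", dtype: "q4", device: "webgpu" },
});
await client.init();

const result = await client.execute({
  // Assumed ChatMessage shape: { role, content }
  messages: [
    { role: "user", content: "What is WebGPU?" },
    { role: "assistant", content: "A browser API for GPU compute and rendering." },
  ],
  prompt: "How does that help with in-browser inference?", // appended as the final message
  systemPrompt: "Answer concisely.",
  generationParams: { maxNewTokens: 128, temperature: 0.7 },
});
console.log(result.rawOutput);
```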
### Callbacks
Pass these in `InferenceClientOptions`:

- `onStatus(status, error?)` - Status changes during init/execution
- `onProgress(progress)` - Model download progress (file name, bytes loaded/total)
- `onError(error)` - Error messages from the worker
- `onRetry(attempt, maxAttempts)` - Retry attempts on network failure
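A sketch wiring up all four callbacks in the constructor. Only the progress fields shown in the Quickstart (`status`, `file`, `progress`) are documented here; any other payload fields should be checked against the package's type definitions:

```ts
import { InferenceClient } from "@p0u4a/inference-worker";

const client = new InferenceClient({
  worker: new URL("@p0u4a/inference-worker/worker", import.meta.url),
  config: { modelId: "Qwen/Qwen2.5-0.5B-Instruct", dtype: "q4", device: "webgpu" },
  // Status transitions during init/execution ("idle" | "loading" | "ready" | "error")
  onStatus: (status, error) => console.log("status:", status, error ?? ""),
  // Model download progress (file name, percentage)
  onProgress: (p) => {
    if (p.status === "progress") {
      console.log(`Loading ${p.file}: ${Math.round(p.progress)}%`);
    }
  },
  // Error messages forwarded from the worker
  onError: (error) => console.error("worker error:", error),
  // Retry attempts on network failure
  onRetry: (attempt, maxAttempts) => console.warn(`retry ${attempt}/${maxAttempts}`),
});
```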
