# @verifyfetch/webllm

v1.1.1

Verified, resumable model loading for WebLLM. Integrity verification for AI models in the browser.
## The Problem
Loading a 4GB AI model in the browser. Network drops at 3.8GB. Start over.
This package adds:
- Integrity verification - Detect corrupted/tampered models before they run
- Resumable downloads - Network fails at 80%? Resume from 80%, not 0%
- Chunked verification - Detect corruption at first bad chunk, don't download everything
## Install

```sh
npm install @verifyfetch/webllm @mlc-ai/web-llm
```

## Quick Start
### Option 1: VerifiedMLCEngine (drop-in replacement)

```ts
import { VerifiedMLCEngine } from '@verifyfetch/webllm';

const engine = new VerifiedMLCEngine({
  verification: {
    manifestUrl: '/models/vf.manifest.json'
  },
  initProgressCallback: (report) => {
    // Shows: "Verifying Phi-3: 45% (resumed)" then "Loading Phi-3: 80%"
    console.log(report.text);
  }
});

// Load model - verification happens automatically
await engine.reload('Phi-3-mini-4k-instruct-q4f16_1-MLC');

// Use normally
const response = await engine.chat.completions.create({
  messages: [{ role: 'user', content: 'What is 2+2?' }]
});
```

### Option 2: Preloader (explicit control)
```ts
import { preloadVerifiedModel } from '@verifyfetch/webllm';
import { MLCEngine } from '@mlc-ai/web-llm';

// Pre-download with verification
await preloadVerifiedModel('Phi-3-mini-4k-instruct-q4f16_1-MLC', {
  manifestUrl: '/models/vf.manifest.json',
  onProgress: ({ file, percent, resumed }) => {
    console.log(`${file}: ${percent}%${resumed ? ' (resumed)' : ''}`);
  }
});

// Now use standard WebLLM - the model is already cached
const engine = new MLCEngine();
await engine.reload('Phi-3-mini-4k-instruct-q4f16_1-MLC');
```

## Model Manifest Format
Create a manifest with hashes for your model files:

```json
{
  "version": 2,
  "models": {
    "Phi-3-mini-4k-instruct-q4f16_1-MLC": {
      "baseUrl": "https://huggingface.co/mlc-ai/Phi-3-mini-4k-instruct-q4f16_1-MLC/resolve/main/",
      "files": {
        "mlc-chat-config.json": {
          "sri": "sha256-abc123..."
        },
        "params_shard_0.bin": {
          "sri": "sha256-full...",
          "size": 536870912,
          "chunked": {
            "root": "sha256-root...",
            "chunkSize": 1048576,
            "hashes": ["sha256-c0...", "sha256-c1...", "..."]
          }
        }
      }
    }
  }
}
```

Generate it with the CLI:
```sh
npx @verifyfetch/cli hash-model Phi-3-mini-4k-instruct-q4f16_1-MLC
```

## API
### VerifiedMLCEngine

Drop-in replacement for WebLLM's MLCEngine with verification.

```ts
new VerifiedMLCEngine({
  verification: {
    manifestUrl?: string,      // URL to fetch manifest
    manifest?: Manifest,       // Or inline manifest
    onFail?: 'block' | 'warn', // Default: 'block'
    resumable?: boolean,       // Default: true
  },
  initProgressCallback?: (report) => void,
})
```

### preloadVerifiedModel
Pre-download and verify a model before WebLLM loads it.

```ts
const result = await preloadVerifiedModel(modelId, {
  manifestUrl?: string,
  manifest?: Manifest,
  onProgress?: (progress) => void,
  onFail?: 'block' | 'warn',
  resumable?: boolean,
});
```

### Utilities
```ts
// Check if a model is already cached
const cached = await isModelCached(modelId, { manifest });

// Clear a cached model
await clearModelCache(modelId, { manifest });

// Get download progress for a partial download
const progress = await getPreloadProgress(modelId, { manifest });
```

## How It Works
1. Pre-download with verification - files are downloaded and verified against SRI hashes
2. Cache in web-llm's caches - verified files are stored in the `webllm/model`, `webllm/config`, and `webllm/wasm` caches
3. WebLLM finds cached files - when WebLLM loads the model, the files are already in cache
4. No re-download needed - WebLLM uses the pre-verified cached files
This means verification happens before WebLLM touches the data, and WebLLM benefits from cached files without modification.
## Why This Exists
WebLLM issue #761 requests integrity verification for model loading. This package provides that today, without waiting for upstream changes.
## Testing

The package ships a comprehensive test suite: unit tests, real-network integration tests, and browser-based WebGPU tests.
### Unit & Integration Tests

```sh
# Run all tests (85 tests)
pnpm test

# Run with verbose output
pnpm test -- --run
```

Test coverage includes:
- Manifest validation and model entry handling (30 tests)
- Preloader with cache operations (13 tests)
- VerifiedMLCEngine wrapper (12 tests)
- Real HuggingFace integration - downloads actual model files (18 tests)
- Real WebLLM model files - verifies tokenizer, config, ndarray-cache (12 tests)
### Browser Test (WebGPU)

For full end-to-end testing with actual WebLLM inference:

```sh
# Start the test server
pnpm test:browser

# Open in Chrome/Edge (WebGPU required)
# http://localhost:3000/browser-test.html
```

The browser test:
- Verifies WebGPU availability
- Downloads real model files from HuggingFace
- Computes and verifies SHA-256 hashes in-browser
- Tests Cache API storage
- (Optional) Loads full 2GB model and runs inference
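The first bullet, checking WebGPU availability, is worth doing before any bytes move; a small sketch using the standard `navigator.gpu` probe (not code from this package):

```typescript
// Probe WebGPU support; resolves false anywhere WebGPU is unavailable.
async function hasWebGPU(): Promise<boolean> {
  const gpu = (globalThis as any).navigator?.gpu;
  if (!gpu) return false;
  try {
    // The adapter can still be null (e.g. blocklisted GPU) even when the API exists.
    return (await gpu.requestAdapter()) !== null;
  } catch {
    return false;
  }
}
```

This makes a useful gate in front of preloading, so a multi-gigabyte download is never started on a machine that cannot run inference anyway.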
## Related
- verifyfetch - Core library
- @verifyfetch/cli - Generate manifests
- GitHub
## License
Apache-2.0
