# 🖼 latentimg
🚧 Currently in early testing phase. Expect breaking changes.
latentimg is a lightweight browser-side image encoder/decoder using ONNX + WebAssembly.
This library is designed to reduce reliance on traditional file servers and minimize network transmission overhead. Instead of transmitting entire image files, latentimg compresses images into compact latent vectors that can be easily stored or sent as strings. This makes it especially useful in contexts where lightweight transmission of visual content is desired, such as previews, prototypes, or generative applications.
⚠️ Note: Since this process involves lossy compression, the reconstructed image may not be identical to the original. It is not recommended for services that require pixel-perfect accuracy (e.g., ID verification, medical imaging, document scans).
## 🚀 Features
- 💡 In-browser encoding & decoding with ONNXRuntime Web
- ⚡ WebAssembly backend, CDN-based loading (no need to ship large models)
- 🔧 Supports custom model URLs
- 🔥 TypeScript-first with minimal API
## 📦 Installation

```bash
# with npm
npm install latentimg

# or with yarn
yarn add latentimg
```

⚠️ `onnxruntime-web` is a peer dependency. You must install it manually:

```bash
yarn add onnxruntime-web
```

## 🧪 Usage
```ts
import { encodeImage, decodeLatent } from "latentimg";

const imageDataUrl = await loadImageAsDataURL();

const latent = await encodeImage(imageDataUrl); // Float32Array
const reconstructed = await decodeLatent(latent); // base64 string

console.log(reconstructed); // can be used as <img src="..." />
```

## 🖼 Example (React)
```tsx
<input type="file" onChange={handleFileUpload} />
<img src={originalImage} />
<img src={decodedImage} />
```

Need help loading an image as base64? See the utils in `/examples`.
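Since the latent is a plain `Float32Array`, it can be turned into a compact string for storage or transmission, as described in the intro. Here is a minimal sketch of such a round trip; these helpers are not part of latentimg and the names are hypothetical:

```typescript
// Hypothetical helpers (not part of latentimg): serialize a latent
// Float32Array to a base64 string and back.

function latentToString(latent: Float32Array): string {
  // View the raw float bytes, then base64-encode them.
  const bytes = new Uint8Array(latent.buffer, latent.byteOffset, latent.byteLength);
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

function stringToLatent(s: string): Float32Array {
  // Decode the base64 string back into float bytes.
  const binary = atob(s);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return new Float32Array(bytes.buffer);
}
```

The resulting string can be stored or sent anywhere plain text is accepted, then fed to `stringToLatent` before calling `decodeLatent`.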
## ⚙️ Custom Model
```ts
await encodeImage(imageUrl, {
  encoderUrl: "https://your-cdn.com/encoder.onnx",
});

await decodeLatent(latent, {
  decoderUrl: "https://your-cdn.com/decoder.onnx",
});
```

By default, the encoder and decoder are loaded from:
- https://cdn.jsdelivr.net/gh/parkchangwoo1/latentimg-models@latest/encoder.onnx
- https://cdn.jsdelivr.net/gh/parkchangwoo1/latentimg-models@latest/decoder.onnx
You can override these via options.
## 📁 CDN Setup (if you need custom models)
- Upload your `.onnx` files to a public GitHub repo
- Access them via jsDelivr: `https://cdn.jsdelivr.net/gh/user/repo@version/path/to/model.onnx`
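The jsDelivr URL pattern above can be captured in a small helper. This is a hypothetical convenience function, not part of latentimg:

```typescript
// Hypothetical helper: build a jsDelivr URL for a model file hosted
// in a public GitHub repo, following the pattern shown above.
function jsDelivrModelUrl(
  user: string,
  repo: string,
  version: string,
  path: string
): string {
  return `https://cdn.jsdelivr.net/gh/${user}/${repo}@${version}/${path}`;
}

// jsDelivrModelUrl("user", "repo", "1.0.0", "encoder.onnx")
// → "https://cdn.jsdelivr.net/gh/user/repo@1.0.0/encoder.onnx"
```

The resulting URL can then be passed as `encoderUrl` or `decoderUrl` in the options shown in the Custom Model section.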
