# @ismail-kattakath/mediapipe-react

Production-ready React hooks for MediaPipe AI tasks — GenAI, Vision, and Audio.
## Features
- 🧊 **React-first API**: Clean, hooks-based interface
- 🚀 **Next.js Optimized**: Built-in SSR safety and App Router support
- 📦 **Tree-shakable Subpaths**: Only bundle what you use (e.g., `/genai`)
- 🛠️ **Fully Typed**: Written in TypeScript for excellent DX
- ⚡ **Web Worker Support**: Heavy inference runs in background threads
## Installation

```bash
pnpm add @ismail-kattakath/mediapipe-react
# or
npm install @ismail-kattakath/mediapipe-react
# or
yarn add @ismail-kattakath/mediapipe-react
```

> [!NOTE]
> You'll also need to install the specific MediaPipe task package for the features you use:
>
> ```bash
> pnpm add @mediapipe/tasks-genai # For GenAI features
> ```
## Quick Start

### Vite / Vanilla React

```tsx
// main.tsx
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import { MediaPipeProvider } from "@ismail-kattakath/mediapipe-react";
import App from "./App";

createRoot(document.getElementById("root")!).render(
  <StrictMode>
    <MediaPipeProvider>
      <App />
    </MediaPipeProvider>
  </StrictMode>,
);
```

```tsx
// App.tsx
import { useLlm } from "@ismail-kattakath/mediapipe-react/genai";

export default function App() {
  const { generate, output, isLoading } = useLlm({
    modelPath: "/models/gemma-2b-it-gpu-int4.bin",
  });

  return (
    <div>
      <button
        onClick={() => generate("Explain React hooks")}
        disabled={isLoading}
      >
        Generate
      </button>
      <pre>{output}</pre>
    </div>
  );
}
```

### Next.js (App Router)
```tsx
// app/layout.tsx
import { MediaPipeProvider } from "@ismail-kattakath/mediapipe-react";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <MediaPipeProvider>{children}</MediaPipeProvider>
      </body>
    </html>
  );
}
```

```tsx
// app/components/ChatBox.tsx
"use client";
import { useLlm } from "@ismail-kattakath/mediapipe-react/genai";

export default function ChatBox() {
  const { generate, output, isLoading } = useLlm({
    modelPath: "/models/gemma-2b-it-gpu-int4.bin",
  });

  return (
    <div>
      <button onClick={() => generate("Hello!")} disabled={isLoading}>
        Send
      </button>
      <p>{output}</p>
    </div>
  );
}
```

## Subpath Strategy
This library uses subpath exports to keep your bundle size minimal. Import only the features you need:
| Subpath | Purpose | Example |
| --- | --- | --- |
| `@ismail-kattakath/mediapipe-react` | Core provider and utilities | `import { MediaPipeProvider } from "@ismail-kattakath/mediapipe-react"` |
| `@ismail-kattakath/mediapipe-react/genai` | LLM inference and GenAI hooks | `import { useLlm } from "@ismail-kattakath/mediapipe-react/genai"` |
| `@ismail-kattakath/mediapipe-react/vision` | Vision tasks (planned) | `import { useHandTracking } from "@ismail-kattakath/mediapipe-react/vision"` |
| `@ismail-kattakath/mediapipe-react/audio` | Audio tasks (planned) | `import { useAudioClassifier } from "@ismail-kattakath/mediapipe-react/audio"` |
> [!TIP]
> Using subpaths ensures that importing `/genai` won't bundle vision or audio code, reducing your final bundle size.
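Under the hood, subpath exports like these are typically declared in the package manifest's `exports` field. The sketch below is illustrative only (assumed file layout, not this package's actual manifest) and shows why an unlisted deep path would fail to resolve:

```json
{
  "name": "@ismail-kattakath/mediapipe-react",
  "exports": {
    ".": "./dist/index.js",
    "./genai": "./dist/genai/index.js",
    "./vision": "./dist/vision/index.js",
    "./audio": "./dist/audio/index.js"
  }
}
```

Because each subpath maps to its own entry file, bundlers can resolve `/genai` without ever touching the vision or audio entries.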
## API Reference

### Core

#### `MediaPipeProvider`

The root provider that supplies configuration to all MediaPipe hooks.

**Props:**

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `wasmPath` | `string` (optional) | `undefined` | Custom path to MediaPipe WASM files |
| `modelPath` | `string` (optional) | `undefined` | Default model path for all hooks |
| `children` | `ReactNode` | — | Your app components |

**Example:**

```tsx
<MediaPipeProvider wasmPath="/wasm" modelPath="/models/default.bin">
  <App />
</MediaPipeProvider>
```

#### `useMediaPipeContext`
Access the MediaPipe context from any child component.
**Returns:**

```ts
{
  wasmPath?: string;
  modelPath?: string;
}
```

**Example:**

```tsx
import { useMediaPipeContext } from "@ismail-kattakath/mediapipe-react";

function MyComponent() {
  const { modelPath } = useMediaPipeContext();
  return <div>Model path: {modelPath}</div>;
}
```

### GenAI
#### `useLlm`

Hook for LLM inference with Web Worker orchestration.

**Parameters:**

```ts
interface UseLlmOptions {
  modelPath: string; // Path to .bin or .task model file
  maxTokens?: number; // Max tokens to generate (default: 512)
  temperature?: number; // Sampling temperature (default: 0.8)
  topK?: number; // Top-K sampling (default: 40)
  randomSeed?: number; // Random seed for reproducibility
}
```

**Returns:**

```ts
{
  generate: (prompt: string) => void;
  output: string; // Generated text
  isLoading: boolean; // Whether inference is running
  progress: number; // Progress percentage (0-100)
  error: string | null; // Error message if any
}
```

**Example:**
```tsx
import { useLlm } from "@ismail-kattakath/mediapipe-react/genai";

function ChatInterface() {
  const { generate, output, isLoading, error } = useLlm({
    modelPath: "/models/gemma-2b-it-gpu-int4.bin",
    maxTokens: 1024,
    temperature: 0.7,
  });

  const handleSubmit = (prompt: string) => {
    generate(prompt);
  };

  if (error) return <div>Error: {error}</div>;

  return (
    <div>
      <textarea onChange={(e) => handleSubmit(e.target.value)} />
      {isLoading && <p>Generating...</p>}
      <pre>{output}</pre>
    </div>
  );
}
```

### Vision
#### `useFaceLandmarker`

Hook for real-time face landmark detection.

**Parameters:**

```ts
interface UseFaceLandmarkerOptions {
  modelPath?: string; // Path to face_landmarker.task
  wasmPath?: string; // Path to wasm files
}
```

**Returns:**

```ts
{
  detect: (input: HTMLVideoElement | HTMLCanvasElement | ImageData, timestamp: number) => void;
  results: { faceLandmarks: { x: number; y: number; z: number }[][] } | null;
  isLoading: boolean;
  error: string | null;
}
```

**Example:**
```tsx
import {
  useFaceLandmarker,
  drawLandmarks,
} from "@ismail-kattakath/mediapipe-react/vision";
import { useEffect, useRef } from "react";

function FaceTracker() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const { detect, results } = useFaceLandmarker();

  // Start the detection loop once, and cancel it on unmount.
  useEffect(() => {
    let frameId: number;

    const processFrame = () => {
      if (videoRef.current && canvasRef.current) {
        detect(videoRef.current, performance.now());
        const ctx = canvasRef.current.getContext("2d");
        if (ctx) {
          ctx.clearRect(0, 0, canvasRef.current.width, canvasRef.current.height);
          drawLandmarks(canvasRef.current, results);
        }
      }
      frameId = requestAnimationFrame(processFrame);
    };

    frameId = requestAnimationFrame(processFrame);
    return () => cancelAnimationFrame(frameId);
  });

  return (
    <>
      <video ref={videoRef} autoPlay playsInline />
      <canvas ref={canvasRef} />
    </>
  );
}
```

#### `drawLandmarks`

Utility to draw face landmarks on a canvas.

```ts
drawLandmarks(canvas: HTMLCanvasElement, results: unknown): void
```

## Next.js / SSR Compatibility
This library is fully compatible with Next.js App Router and Server-Side Rendering.
### How It Works

All hooks include an `isBrowser()` guard that prevents MediaPipe initialization during server-side rendering:

```ts
// Internal implementation
function isBrowser(): boolean {
  return typeof window !== "undefined";
}
```

This means:

- ✅ No "window is not defined" errors
- ✅ No hydration mismatches
- ✅ Works with React Server Components (when used in Client Components)
### Best Practices

1. **Always use the `"use client"` directive** when using MediaPipe hooks:

   ```tsx
   "use client";
   import { useLlm } from "@ismail-kattakath/mediapipe-react/genai";

   export default function MyComponent() {
     // Your code here
   }
   ```

2. **Wrap your app with `MediaPipeProvider`** in a Client Component:

   ```tsx
   // app/providers.tsx
   "use client";
   import { MediaPipeProvider } from "@ismail-kattakath/mediapipe-react";

   export function Providers({ children }: { children: React.ReactNode }) {
     return <MediaPipeProvider>{children}</MediaPipeProvider>;
   }
   ```

   ```tsx
   // app/layout.tsx
   import { Providers } from "./providers";

   export default function RootLayout({
     children,
   }: {
     children: React.ReactNode;
   }) {
     return (
       <html>
         <body>
           <Providers>{children}</Providers>
         </body>
       </html>
     );
   }
   ```

3. **Serve model files from the `public/` directory:**

   ```
   public/
   └── models/
       ├── gemma-2b-it-gpu-int4.bin
       └── llama-3-8b.task
   ```

   Then reference them with absolute paths:

   ```tsx
   const { generate } = useLlm({
     modelPath: "/models/gemma-2b-it-gpu-int4.bin",
   });
   ```

## Advanced Usage
### Custom WASM Path

If you're hosting MediaPipe WASM files on a CDN:

```tsx
<MediaPipeProvider wasmPath="https://cdn.example.com/mediapipe/wasm">
  <App />
</MediaPipeProvider>
```

### Error Handling

```tsx
const { generate, error } = useLlm({ modelPath: "/models/model.bin" });

useEffect(() => {
  if (error) {
    console.error("LLM Error:", error);
    // Show a user-friendly error message
  }
}, [error]);
```

### Progress Tracking

```tsx
const { generate, progress, isLoading } = useLlm({
  modelPath: "/models/model.bin",
});

return <div>{isLoading && <progress value={progress} max={100} />}</div>;
```

## Troubleshooting
### "Failed to load model"

- Ensure the model file exists at the specified path
- Check that the file is served with the correct MIME type
- Verify the model format is compatible (`.bin` or `.task`)
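A quick sanity check on the path itself can rule out the most common mistake. The helper below is a generic sketch for illustration (it is not part of this library's API): it only verifies that the path ends in a supported extension.

```typescript
// Hypothetical helper (not part of @ismail-kattakath/mediapipe-react):
// checks that a model path uses a format the library accepts.
function isSupportedModelPath(path: string): boolean {
  return /\.(bin|task)$/.test(path);
}

console.log(isSupportedModelPath("/models/gemma-2b-it-gpu-int4.bin")); // true
console.log(isSupportedModelPath("/models/model.onnx")); // false
```

Checking that the file actually exists still requires loading it (e.g. watching the network tab for a 404 on the model URL).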
### "Worker initialization failed"

- Ensure your bundler supports Web Workers
- For Vite, no additional config is needed
- For Next.js, ensure you're using Next.js 13+ with the App Router
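When debugging, it can help to confirm at runtime that the environment exposes Web Workers at all. This is a generic feature check, assumed here for illustration rather than taken from this library:

```typescript
// Returns true in browsers that expose the Worker global;
// false during SSR or in environments without Web Worker support.
function workersAvailable(): boolean {
  return typeof Worker !== "undefined";
}
```

If this returns `false` in the browser, the problem is the environment; if it returns `true`, the failure is more likely in how the bundler packages the worker script.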
### Bundle size is too large

- Use subpath imports: `@ismail-kattakath/mediapipe-react/genai` instead of the root import
- Ensure tree-shaking is enabled in your bundler
- Check that you're not importing unused subpaths
## TypeScript Support

This library is written in TypeScript and includes full type definitions. No additional `@types` packages are needed.

```ts
import type { UseLlmOptions } from "@ismail-kattakath/mediapipe-react/genai";

const config: UseLlmOptions = {
  modelPath: "/models/model.bin",
  maxTokens: 512,
};
```

## Contributing
This package is part of a monorepo. For contribution guidelines, see the main repository README.
## License

MIT © Ismail Kattakath
