@mostajs/media
v1.12.0 — Full-stack media toolkit: capture, edit, subtitle, export, publish. Screen recorder + audio-only recorder → live preview (PiP webcam, swap, VU meter) → video editor (split, speed, stickers, subtitles, multi-track audio, logo watermark, ad slots) → server-side ffmpeg export (mp4/gif/webm) → project persistence via `@mostajs/orm` + SQLite → direct YouTube publish.
Image capture, video capture, screen recording (with webcam baked in via canvas compositing), audio-only recording, in-browser video editor with subtitles, sticker overlays, multi-track audio chain, logo / signature watermark, ad-slot reservations, server-side ffmpeg export, and direct YouTube upload — all in one package for React / Next.js.
What's new since v1.10
- v1.12.0 — Multi-camera support: `useScreenCapture()` exposes `cameras` / `selectedCameraId` / `refreshCameras` / `switchCamera`. Live in-recorder hot-swap (USB cam, OBS Virtual Cam, capture cards…) without restarting the `MediaRecorder`. `<RecorderPreview />` shows a camera `<select>` overlay when more than one device is detected. Runtime defaults via `STUDIO_CAMERA_DEVICE_ID` / `STUDIO_CAMERA_LABEL_HINT`. A `devicechange` listener picks up hotplug live.
- v1.11.24 — Logo / signature watermark: sidebar editor + per-export ffmpeg overlay, 4 corner positions, size / opacity / margin sliders, runtime defaults via `STUDIO_LOGO_*` env vars.
- v1.11.18 → v1.11.21 — Multi-track audio chain: add several tracks (e.g. présentation → nostalgie → action → conquête), per-track start / end timing, ffmpeg `amix` + `adelay` + `atrim` at export, live `<audio>` overlay synchronized with playback.
- v1.11.21 — `<YouTubePublishButton />`: OAuth2 implicit flow (Google Identity Services) + YouTube Data API v3 resumable upload, runs entirely client-side. Ad-slot reservations (preroll / midroll / postroll / banner) saved as a JSON sidecar — ready for AdMob / Facebook Ads / custom servers.
- v1.11.16 — Paired naming for separate audio + video: shared `baseId` (`rec-<ts>-<rand>`) → `<id>-video.webm` and `<id>-audio.webm` correlate at upload.
- v1.11.9 → v1.11.10 — Generic chunk storage for both audio and video, default IndexedDB (crash-safe, constant RAM). 4 strategies: `memory`, `indexeddb`, `filesystem` (FS Access API), `server` (POST chunks). `videoOnly` flag forces a true audio / video separation (no audio track in the video file).
- v1.11.7 → v1.11.8 — Audio-only mode (no screen capture) via `<AudioOnlyRecorder />`. Runtime configuration through the `@mostajs/config` server wrapper: edit `.env`, restart PM2, no Next.js rebuild required.
- v1.11.4 → v1.11.6 — Webcam baked into the recording via canvas compositing (no PiP overlay loss), camera ↔ screen swap toggle, real-time VU meter (`<AudioLevelMeter />`).
- v1.10.x — Production-ready scaffold under `examples/studio-app/`: Next.js 16 + Turbopack app, `start-studio.sh` / `deploy.sh` / `install.sh` / `update.sh`, Apache vhost template, PM2 ecosystem config, light theme aligned with the orm.amia.fr palette.
A live demo is deployed at https://media.amia.fr/.
Installation
npm install @mostajs/media lucide-react

Optional peer dependencies
| Package | When needed |
|---|---|
| @ffmpeg/ffmpeg @ffmpeg/util | Client-side export via ffmpeg.wasm (VideoEditor without server) |
| @mostajs/orm better-sqlite3 | Project persistence (server-side, SQLite) |
| next | Server routes (/api/compose, /api/projects) |
# Server-side export + project persistence (recommended):
npm install @mostajs/orm better-sqlite3

# Client-side export only (no server needed, but slower and COEP required):
npm install @ffmpeg/ffmpeg @ffmpeg/util

Install ffmpeg (required for server-side export)
The /api/compose route calls ffmpeg via child_process.spawn, so the ffmpeg binary must be on the system PATH.
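Since a missing binary only surfaces at request time, it can help to fail fast at startup. A minimal pre-flight sketch for your own server code (the `probeBinary` helper is illustrative, not part of `@mostajs/media`):

```typescript
import { spawnSync } from 'node:child_process'

// Returns the first line of `<cmd> <flag>` output when the binary is on PATH,
// or null when it cannot be spawned. Illustrative helper, not part of the package.
export function probeBinary(cmd: string, flag = '-version'): string | null {
  const r = spawnSync(cmd, [flag], { encoding: 'utf8' })
  if (r.error || r.status !== 0) return null
  return r.stdout.split('\n')[0].trim()
}

// Fail fast at server startup, e.g.:
// if (!probeBinary('ffmpeg')) throw new Error('ffmpeg not found on PATH')
```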
Ubuntu / Debian:

sudo apt update && sudo apt install -y ffmpeg
ffmpeg -version   # verify: ffmpeg version 6.x+

macOS (Homebrew):

brew install ffmpeg

Windows (Chocolatey):

choco install ffmpeg

Docker:

RUN apt-get update && apt-get install -y ffmpeg

Verify:

which ffmpeg && ffmpeg -version | head -1
# expected: /usr/bin/ffmpeg  ffmpeg version 6.0+

Getting started
Quickest path — built-in starter kit (recommended, v1.10.1+)
The package ships with a complete production-ready scaffold in examples/studio-app/ — Next.js app + dev/deploy/install/update scripts + Apache vhost template + PM2 ecosystem config.
# 1. Install package (cwd doesn't matter — temporary)
mkdir tmp && cd tmp && npm init -y && npm install @mostajs/media
# 2. Copy the scaffold to your project location
cp -r node_modules/@mostajs/media/examples/studio-app ~/my-media-studio
cd ~/my-media-studio
rm -rf ../tmp
# 3. Install ffmpeg (server-side export)
sudo apt install ffmpeg # or: brew install ffmpeg
# 4. Launch (auto-handles npm install)
./start-studio.sh
# → http://localhost:4499

The scaffold includes:
| Script | Purpose |
|---|---|
| start-studio.sh | Local dev launcher (port 4499, ffmpeg pre-flight checks) |
| deploy.sh | rsync source + ecosystem to remote server |
| install.sh | First-time remote setup (Apache vhost + Let's Encrypt + PM2) |
| update.sh | npm install + build + pm2 restart on remote |
Read examples/studio-app/README.md for full deploy instructions, browser requirements, and customization options (different page entries, server-only ffmpeg, NextAuth integration).
Manual setup (legacy, build your own)
If you'd rather assemble the files yourself instead of using the scaffold:
1. Install dependencies
mkdir my-media-studio && cd my-media-studio
npm init -y
npm install next react react-dom @mostajs/media @mostajs/orm better-sqlite3 lucide-react
npm install @ffmpeg/ffmpeg @ffmpeg/util
`@ffmpeg/ffmpeg` + `@ffmpeg/util` are required even with server-side export — the VideoEditor uses them for client-side preview features (duration probing, format detection).
2. Create the files (six small wrapper files)
package.json — add the dev script :
{
"type": "module",
"scripts": {
"dev": "next dev -p 4499"
}
}

next.config.mjs:

export default { reactStrictMode: true }

app/layout.tsx:
export default function Layout({ children }: { children: React.ReactNode }) {
return <html><body style={{ margin: 0, background: '#0f172a' }}>{children}</body></html>
}

app/page.tsx — the full capture + editor + project manager:

export { default } from '@mostajs/media/pages/CaptureEditorPage'

app/api/compose/route.ts — ffmpeg video assembly:
export { POST } from '@mostajs/media/server/compose-route'
export const runtime = 'nodejs'
export const maxDuration = 300

app/api/projects/route.ts — list + create projects:
export { GET, POST } from '@mostajs/media/server/projects-list-route'
export const runtime = 'nodejs'

app/api/projects/[id]/route.ts — get + update + delete:
export { GET, PUT, DELETE } from '@mostajs/media/server/projects-id-route'
export const runtime = 'nodejs'

3. Start
npm run dev
# → http://localhost:4499

4. Use
Open http://localhost:4499 in Chrome or Firefox :
- Record — click "Start recorder", share a window, record, stop
- Open file — load an existing .mp4 / .webm / .gif
- Create from images — select multiple screenshots → slideshow
- Edit — split, reorder, speed, insert images, add stickers (❤️🙂➡️⚠️🚫🔵✅), subtitles (multi-language)
- Export — MP4 (instant), GIF (optimized), WebM — server-side ffmpeg, no COEP hassle
- Save project — 💾 persisted in ./data/projects.sqlite via @mostajs/orm
- Reload later — saved projects appear on the home page, click to resume editing
Recommended export settings
| Usage | Duration | Format | Width | Target size |
|---|---|---|---|---|
| GitHub README | 30-45s | MP4 <video> | 800-1200 | 1-3 MB |
| Landing page | 45-90s | MP4 | 1200 | 2-5 MB |
| npm inline | 15-30s | GIF 800px | 800 | < 5 MB |
| YouTube | 2 min+ | MP4 | 1920 | no limit |
| Twitter / X | any | MP4 | 1200 | < 15 MB |
Tip: a GIF longer than 45s at 1200px means very slow exports and huge files. Use MP4 with a <video> tag — supported everywhere (GitHub, npm, all browsers).
How to use — Media (components & hooks)
Quick start — Screen capture + edit + export
'use client'
import { useScreenCapture } from '@mostajs/media/hooks/useScreenCapture'
import VideoEditor from '@mostajs/media/components/VideoEditor'
import { useState } from 'react'
export default function Page() {
const screen = useScreenCapture()
const [blob, setBlob] = useState<Blob | null>(null)
if (!blob) {
return <>
<video ref={screen.videoRef} autoPlay muted />
<button onClick={() => screen.startScreenShare()}>Share screen</button>
<button onClick={() => screen.startRecording()}>Record</button>
<button onClick={async () => {
const r = await screen.stopRecording()
if (r) setBlob(r.blob)
}}>Stop & edit</button>
</>
}
return <VideoEditor
source={blob}
defaultFormat="mp4"
exportUrl="/api/compose"
onExport={(r) => console.log('Exported:', r.filename)} />
}

<RecorderPreview /> (v1.11.4+)
Live preview during recording — screen feed with webcam picture-in-picture overlay (or swapped : webcam main + screen PiP). The webcam track is always mounted (display toggled via CSS) to avoid hydration race conditions.
import RecorderPreview from '@mostajs/media/components/RecorderPreview'
<RecorderPreview
screenStream={screen.screenStream}
webcamStream={screen.webcamStream}
swapped={screen.swapped}
onSwap={() => screen.toggleSwap()}
/>

The webcam is baked into the recording via canvas compositing: a requestAnimationFrame loop draws screen + webcam onto a hidden <canvas>, and canvas.captureStream(30) feeds the MediaRecorder. The exported video is a single track — no separate webcam file.
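The PiP placement itself is plain rectangle math. A sketch of how a bottom-right webcam rectangle could be computed (illustrative only: the `widthFrac` and `margin` defaults are assumptions, not the package's actual values):

```typescript
interface Rect { x: number; y: number; w: number; h: number }

// Compute a picture-in-picture rectangle anchored bottom-right.
// widthFrac: webcam width as a fraction of canvas width (assumed default 0.25).
export function pipRect(
  canvasW: number, canvasH: number,
  camW: number, camH: number,
  widthFrac = 0.25, margin = 16,
): Rect {
  const w = Math.round(canvasW * widthFrac)
  const h = Math.round(w * (camH / camW)) // preserve webcam aspect ratio
  return { x: canvasW - w - margin, y: canvasH - h - margin, w, h }
}

// The rAF loop then draws both layers each frame:
// ctx.drawImage(screenVideo, 0, 0, canvasW, canvasH)
// const r = pipRect(canvasW, canvasH, cam.videoWidth, cam.videoHeight)
// ctx.drawImage(camVideo, r.x, r.y, r.w, r.h)
```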
When more than one video input is detected, a small camera <select> is overlaid on the preview (top-left). Picking another option calls screen.switchCamera(deviceId) which replaces the video track in-place — the MediaRecorder does not restart, no chunks are dropped, the audio (mic) keeps streaming. Pass showCameraPicker={false} to hide the overlay if you want to manage the UI yourself.
<AudioLevelMeter /> (v1.11.5+)
32-bar VU meter rendered onto a <canvas> via requestAnimationFrame + Web Audio AnalyserNode. Useful during recording to confirm the mic is live.
import AudioLevelMeter from '@mostajs/media/components/AudioLevelMeter'
<AudioLevelMeter analyser={screen.audioAnalyser} bars={32} />

<AudioOnlyRecorder /> (v1.11.7+)
Standalone audio-only mode — bypasses screen capture entirely. Internal AudioContext + AnalyserNode + canvas VU meter, supports the same 4 storage strategies (memory / indexeddb / filesystem / server).
import AudioOnlyRecorder from '@mostajs/media/components/AudioOnlyRecorder'
<AudioOnlyRecorder
storage="indexeddb"
serverUrl="/api/record/audio"
onComplete={(r) => setAudioBlob(r.blob)}
/>

<YouTubePublishButton /> (v1.11.21+)
OAuth2 implicit flow (Google Identity Services) + YouTube Data API v3 resumable upload, fully client-side. Reads NEXT_PUBLIC_STUDIO_YOUTUBE_CLIENT_ID from .env. Disabled with tooltip when the client ID is empty.
import YouTubePublishButton from '@mostajs/media/components/YouTubePublishButton'
<YouTubePublishButton
blob={exportedBlob}
filename="my-video.mp4"
title="My Studio Recording"
description="Made with @mostajs/media"
privacy="unlisted" // 'public' | 'unlisted' | 'private'
/>

Setup:
- https://console.cloud.google.com → new project
- Enable YouTube Data API v3
- Credentials → OAuth client ID → Web application
- Authorized JavaScript origins: http://localhost:4499 + your prod domain
- Set NEXT_PUBLIC_STUDIO_YOUTUBE_CLIENT_ID=<client-id> in .env
<ImageCapture />
Webcam + file upload + screen capture (single frame).
import { ImageCapture } from '@mostajs/media'
<ImageCapture
photo={photo}
onCapture={(dataUrl) => setPhoto(dataUrl)}
onClear={() => setPhoto('')}
allowUpload maxWidth={800} maxHeight={800} quality={0.85}
/>

<VideoCapture />
Record video from webcam or screen with webcam overlay (picture-in-picture).
import { VideoCapture } from '@mostajs/media'
<VideoCapture
onCapture={(blob, url) => uploadVideo(blob)}
maxDuration={120}
/>

<ScreenRecorder />
Floating FAB widget — users trigger screen capture at any time for feedback/bug reports.
import { ScreenRecorder } from '@mostajs/media/components/ScreenRecorder'
<ScreenRecorder
onScreenshot={(img) => submitFeedback({ screenshot: img })}
onRecording={(blob) => uploadRecording(blob)}
uploadEndpoint="/api/feedback/media"
webcamOverlay position="bottom-right" maxDuration={300}
/>

<VideoEditor />
Post-recording video editor with timeline, sticker overlays, subtitles, and export.
import VideoEditor from '@mostajs/media/components/VideoEditor'
<VideoEditor
source={blob} // webm/mp4 Blob or URL
defaultFormat="mp4" // 'mp4' | 'webm' | 'gif'
defaultWidth={1200}
exportUrl="/api/compose" // server-side ffmpeg (recommended)
initialClips={savedProject?.clips}
initialSubtitles={savedProject?.subtitles}
initialBurnSubtitles={savedProject?.settings?.burnSubtitles}
initialBurnLang={savedProject?.settings?.burnLang}
onSaveRequested={async (project) => {
await fetch('/api/projects', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ name: 'My project', data: project }),
})
}}
onExport={(r) => console.log(r.filename, r.blob.size)}
/>

Features:
| Feature | Description |
|---|---|
| Timeline | Click-to-select, drag-to-reorder, × to delete clips |
| ✂ Split | Split at current playhead position |
| 🖼 Insert image | Upload image between clips, held for N seconds |
| 🎬 Create from images | Select N screenshots → each becomes a clip → slideshow → export |
| 📂 Open existing | Load .mp4 / .webm / .gif for editing |
| Sticker tray | ❤️ 🙂 ➡️ ⚠️ 🚫 🔵 ✅ — click to add, drag to move, double-click to remove |
| Per-overlay | Size (32–256px), start/end seconds, position |
| Speed | 0.25× · 0.5× · 1× · 1.5× · 2× · 3× · 4× per clip |
| Trim | srcStartSec / srcEndSec for video clips, duration for image clips |
| Subtitles | Multi-language (EN/FR/AR/ES/DE/PT/ZH/JA), per-subtitle text/timing/size/color/position |
| Import SRT | Load existing .srt/.vtt → parsed into subtitle entries |
| Export SRT/VTT | Download subtitles per language as .srt or .vtt |
| Burn subtitles | Checkbox → subtitles hardcoded into video via ffmpeg drawtext |
| 💾 Save project | Serialize clips + subtitles + settings to ORM/SQLite or JSON file |
| Export | MP4 (instant, recommended), WebM (VP8 re-encode), GIF (2-pass palette, ≤45s recommended) |
Headless variant — useVideoEditor() :
import { useVideoEditor } from '@mostajs/media/hooks/useVideoEditor'
const ed = useVideoEditor({ source: blob, exportUrl: '/api/compose' })
// Clips & timeline
ed.clips // Clip[]
ed.splitAtCurrent() // ✂ at playhead
ed.insertImage(file, 3) // 🖼 3s duration
ed.moveClip(from, to) // drag-reorder
ed.deleteClip(index)
ed.setSpeed(2) // 2× on selected clip
ed.trimSelected({ srcStartSec: 5, srcEndSec: 20 })
ed.setDuration(5) // image clip duration
// Overlays
ed.addOverlay('heart')
ed.updateOverlay(id, { xFrac: 0.3, yFrac: 0.7 })
ed.removeOverlay(id)
// Subtitles
ed.addSubtitle({ text: 'Hello', lang: 'en', startSec: 0, endSec: 3,
fontSize: 48, color: '#FFFFFF', bgColor: '#000000AA', position: 'bottom' })
ed.updateSubtitle(id, { text: 'Updated' })
ed.removeSubtitle(id)
ed.downloadSRT('en') // → subtitles-en.srt
ed.downloadVTT('fr') // → subtitles-fr.vtt
// Project persistence
const data = await ed.getProjectData({ format: 'mp4', width: 1200, burnSubtitles: true, burnLang: 'en' })
ed.setProjectData(savedProject)
ed.saveProjectJSON({ format: 'mp4', width: 1200 }) // → download .mostaproj.json
ed.loadProjectJSON(file)
// Export
const result = await ed.exportClips({ format: 'mp4', width: 1200, burnSubtitles: true, burnLang: 'en' })
// result = { blob, url, filename, mime, durationSec }

<ImageEditor />
Edit captured images : rotate, flip, brightness, contrast, crop.
import { ImageEditor } from '@mostajs/media'
<ImageEditor
src={photo}
onSave={(edited) => setPhoto(edited)}
onCancel={() => setEditing(false)}
tools={['crop', 'rotate', 'brightness', 'contrast', 'flip']}
/>

<MediaGallery />
Grid gallery with lightbox.
import { MediaGallery } from '@mostajs/media'
<MediaGallery
items={[
{ id: '1', url: '/photos/1.jpg', type: 'image', name: 'Photo 1' },
{ id: '2', url: '/videos/demo.webm', type: 'video', thumbnail: '/thumbs/demo.jpg' },
]}
columns={3} deletable
onSelect={(item) => console.log('Selected:', item)}
onDelete={(item) => deleteMedia(item.id)}
/>

Recording — separate audio / video, chunk storage
useScreenCapture() (v1.11.7+) supports recording the mixed audio as a separate file in addition to the video, and choosing where the chunks are stored.
const screen = useScreenCapture()
await screen.startRecording({
recordAudioSeparately: true, // also produce an audio-only file
audioStorage: 'indexeddb', // memory | indexeddb | filesystem | server
audioServerUrl: '/api/record/audio',
videoStorage: 'indexeddb', // (v1.11.9+) same 4 options for video
videoServerUrl: '/api/record/video',
videoOnly: false, // (v1.11.10+) strip audio track from the video file
})
const r = await screen.stopRecording()
// r.blob — main video (back-compat)
// r.video.blob — main video, dedicated handle (v1.11.16+)
// r.audio?.blob — separate audio file (when recordAudioSeparately = true)
// r.video.url — object URL for preview

Storage strategies
| Strategy | RAM | Crash-safe | Browser support | Notes |
|---|---|---|---|---|
| memory | linear (~60 MB / min @ 1080p) | no | all | Simple, default in legacy ≤1.11.8 |
| indexeddb | constant | yes | all | Default v1.11.9+ — chunks streamed to IndexedDB, reconstructed on stop |
| filesystem | constant | yes | Chrome / Edge | File System Access API picker → chunks streamed to disk |
| server | constant | yes | all | POST chunks to audioServerUrl / videoServerUrl |
Paired naming (v1.11.16+)
When recordAudioSeparately = true, both files share a baseId :
rec-1714356821-9f3c-video.webm
rec-1714356821-9f3c-audio.webm

This makes it trivial to correlate the two artefacts on upload or post-processing.
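The pairing can be sketched as a few pure helpers (the helper names are illustrative, not the package's internals):

```typescript
// Generate a shared baseId in the documented rec-<ts>-<rand> shape.
export function makeBaseId(now = Date.now()): string {
  const rand = Math.random().toString(16).slice(2, 6).padEnd(4, '0')
  return `rec-${Math.floor(now / 1000)}-${rand}`
}

export const videoName = (baseId: string) => `${baseId}-video.webm`
export const audioName = (baseId: string) => `${baseId}-audio.webm`

// On the upload side, strip the suffix to re-pair the two artefacts.
export function pairKey(filename: string): string {
  return filename.replace(/-(video|audio)\.webm$/, '')
}
```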
Chunk storage helpers
import {
reconstructFromIndexedDB,
listIndexedDBSessions,
deleteIndexedDBSession,
} from '@mostajs/media/lib/audio-storage'
const sessions = await listIndexedDBSessions('audio') // recoverable sessions
const blob = await reconstructFromIndexedDB('rec-..-9f3c', 'video')
await deleteIndexedDBSession('rec-..-9f3c', 'audio')

Server-side chunk endpoints
The scaffold under examples/studio-app/app/api/record/<audio|video>/route.ts provides the matching POST endpoints — one file appended chunk by chunk to ./uploads/.
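The server strategy boils down to append-by-sessionId. A sketch with an in-memory store (the actual scaffold appends each chunk to a single file under ./uploads/; this only shows the ordering logic):

```typescript
// Chunks arrive as POSTs keyed by sessionId; the endpoint appends them in order.
const store = new Map<string, Buffer[]>()

export function appendChunk(sessionId: string, chunk: Buffer): number {
  const list = store.get(sessionId) ?? []
  list.push(chunk)
  store.set(sessionId, list)
  return list.length // chunk count so far
}

export function finalize(sessionId: string): Buffer {
  const list = store.get(sessionId) ?? []
  store.delete(sessionId)
  return Buffer.concat(list) // reconstructed file
}
```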
Multi-camera + live switching (v1.12.0+)
useScreenCapture() enumerates every video input (built-in webcam, USB cam, OBS Virtual Cam, capture cards…) and lets you swap on the fly during recording without restarting the MediaRecorder.
const screen = useScreenCapture()
// Pre-select via deviceId or label hint
await screen.startScreenShare({
webcamOverlay: true,
cameraDeviceId: '<deviceId>', // exact match (preferred when known)
cameraLabelHint: 'Logitech', // fallback (case-insensitive substring)
})
// Or let the user pick at runtime — RecorderPreview already wires it
<RecorderPreview screen={screen} />
// Programmatic switch
await screen.switchCamera(screen.cameras[1].deviceId)
// Refresh after a hotplug (also done automatically via 'devicechange')
const list = await screen.refreshCameras()

Returned API:
| Field / method | Purpose |
|---|---|
| cameras: MediaDeviceInfo[] | All videoinput devices; labels populated once permission is granted |
| selectedCameraId: string \| null | deviceId of the active camera |
| refreshCameras() | Re-enumerate (auto-called on devicechange) |
| switchCamera(deviceId) | Hot-swap : new video track replaces the old one in the canvas compositor |
Runtime defaults (.env) :
STUDIO_CAMERA_DEVICE_ID=     # exact deviceId (read from the DevTools console: navigator.mediaDevices.enumerateDevices())
STUDIO_CAMERA_LABEL_HINT=    # robust fallback: 'Logitech', 'OBS', 'USB', …

The label hint survives a Chrome privacy reset (which rotates deviceIds) — pick the hint when in doubt. If both are set, STUDIO_CAMERA_DEVICE_ID wins and STUDIO_CAMERA_LABEL_HINT is the fallback.
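That precedence can be sketched as a pure selection function (illustrative: the hook's internal matching may differ):

```typescript
interface CamInfo { deviceId: string; label: string }

// Exact deviceId wins; otherwise first case-insensitive label substring match;
// otherwise fall back to the first available camera.
export function pickCamera(
  cameras: CamInfo[],
  deviceId?: string,
  labelHint?: string,
): CamInfo | undefined {
  if (deviceId) {
    const exact = cameras.find(c => c.deviceId === deviceId)
    if (exact) return exact
  }
  if (labelHint) {
    const hint = labelHint.toLowerCase()
    const byLabel = cameras.find(c => c.label.toLowerCase().includes(hint))
    if (byLabel) return byLabel
  }
  return cameras[0]
}
```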
Note — switchCamera only replaces the video track. The microphone tracks already in the audio mix keep streaming uninterrupted, so there is no glitch in the recorded audio.
Multi-track audio chain (v1.11.18 → v1.11.21)
Add several audio tracks to the editor (e.g. présentation → nostalgie → action → conquête), each with a per-track start / end. At export the server pipeline applies adelay + atrim + amix :
<VideoEditor
source={blob}
defaultFormat="mp4"
exportUrl="/api/compose"
/>

Inside the editor sidebar, Add audio track lets you upload .mp3 / .wav / .webm files and set per-track startSec / endSec. Tracks play live (an HTML <audio> element synchronized with the <video>) and are mixed into the final export.
Pixabay (royalty-free) is referenced in the UI as a recommended source for music tracks.
The export pipeline rejects video files that have no decodable video stream (HTTP 422) and silently injects an anullsrc if the source has no audio — both via ffprobe introspection (v1.11.22 / v1.11.23).
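How per-track timing maps onto the ffmpeg filters can be sketched with a simplified graph builder (the real compose route's graph is more involved, with resampling and anullsrc injection):

```typescript
interface AudioTrack { startSec: number; endSec?: number }

// Build a simplified amix filter graph: input 0 is the video's own audio,
// inputs 1..n are the extra tracks. atrim cuts each track, adelay shifts it.
export function buildAmixGraph(tracks: AudioTrack[]): string {
  const parts = tracks.map((t, i) => {
    const trim = t.endSec != null ? `atrim=0:${t.endSec - t.startSec},` : ''
    const ms = Math.round(t.startSec * 1000) // adelay takes milliseconds, per channel
    return `[${i + 1}:a]${trim}adelay=${ms}|${ms}[t${i}]`
  })
  const labels = ['[0:a]', ...tracks.map((_, i) => `[t${i}]`)].join('')
  parts.push(`${labels}amix=inputs=${tracks.length + 1}:duration=first[aout]`)
  return parts.join(';')
}
```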
Ad-slot reservations (v1.11.21+)
Reserve placements for advertising — preroll, midroll, postroll, banner — without committing to a vendor :
type AdSlotKind = 'preroll' | 'midroll' | 'postroll' | 'banner'
interface AdSlot {
id: string
kind: AdSlotKind
startSec: number // when the slot fires (midroll / banner)
endSec?: number // banner duration
label?: string // free-form note
}

Slots are persisted on the ProjectFile (adSlots?: AdSlot[]) and exported as a JSON sidecar — wire them later to AdMob, Facebook Ads, or your own ad server.
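Because the slots are plain data, downstream tooling can query the sidecar without the editor. A small sketch (the `nextMidroll` helper is hypothetical, not part of the package):

```typescript
type AdSlotKind = 'preroll' | 'midroll' | 'postroll' | 'banner'

interface AdSlot {
  id: string
  kind: AdSlotKind
  startSec: number
  endSec?: number
  label?: string
}

// Find the next midroll slot firing at or after a given playback position.
export function nextMidroll(slots: AdSlot[], afterSec: number): AdSlot | undefined {
  return slots
    .filter(s => s.kind === 'midroll' && s.startSec >= afterSec)
    .sort((a, b) => a.startSec - b.startSec)[0]
}
```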
Logo / signature watermark (v1.11.24+)
Apply a logo or signature on the exported video — overlaid via ffmpeg. Configurable from the sidebar editor or pre-set via .env.
Sidebar controls :
| Control | Range | Default |
|---|---|---|
| Logo file | PNG / JPG | none |
| Position | top-right · top-left · bottom-right · bottom-left | top-right |
| Size | 1 → 50 % of video width | 10 |
| Opacity | 0 → 1 | 0.85 |
| Margin | 0 → 20 % of video width | 2 |
Preview : a CSS overlay shows the logo at the requested corner before export (live).
Runtime defaults (.env) :
STUDIO_LOGO_URL=/logo.png
STUDIO_LOGO_POSITION=top-right
STUDIO_LOGO_SIZE_PCT=10
STUDIO_LOGO_OPACITY=0.85
STUDIO_LOGO_MARGIN_PCT=2

The server /api/compose route inserts a 3-step filter between subtitle burning and format conversion:
[1:v]scale=floor(${width}*${sizePct/100}):-1,
format=rgba,colorchannelmixer=aa=${opacity}[lg];
[0:v][lg]overlay=W-w-${mxExpr}:${myExpr}:format=auto[v]

Place your logo asset under public/logo.png (Next.js serves it). The user can override it from the UI on a per-project basis.
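The corner positions translate into overlay x/y expressions, where W/H are the base video size and w/h the scaled logo. A sketch of deriving them per corner (illustrative: the server's exact mxExpr/myExpr output may differ):

```typescript
type Corner = 'top-right' | 'top-left' | 'bottom-right' | 'bottom-left'

// ffmpeg overlay coordinates: W/H = base video size, w/h = logo size.
export function overlayExpr(position: Corner, marginPct: number): { x: string; y: string } {
  const m = `(W*${marginPct / 100})` // margin as a fraction of video width
  switch (position) {
    case 'top-left': return { x: m, y: m }
    case 'top-right': return { x: `W-w-${m}`, y: m }
    case 'bottom-left': return { x: m, y: `H-h-${m}` }
    case 'bottom-right': return { x: `W-w-${m}`, y: `H-h-${m}` }
  }
}
```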
Runtime configuration via @mostajs/config (v1.11.8+)
The bundled scaffold uses a server component wrapper (app/page.tsx) that resolves all STUDIO_* env keys at request time through @mostajs/config. You edit .env, restart PM2, and the studio picks up the new defaults — no Next.js rebuild.
// examples/studio-app/app/page.tsx
import { getEnv, getEnvBool, getEnvNumber } from '@mostajs/config'
import StudioPage from '@mostajs/media/pages/CaptureEditorPage'
export default function Page() {
return <StudioPage defaults={{
recordingMode: getEnv('STUDIO_RECORDING_MODE', 'screen-record'),
outputFormat: getEnv('STUDIO_OUTPUT_FORMAT', 'mp4'),
micEnabled: getEnvBool('STUDIO_MIC_ENABLED', true),
webcamEnabled: getEnvBool('STUDIO_WEBCAM_ENABLED', true),
systemAudioEnabled: getEnvBool('STUDIO_SYSTEM_AUDIO_ENABLED', true),
audioStorage: getEnv('STUDIO_AUDIO_STORAGE', 'memory'),
videoStorage: getEnv('STUDIO_VIDEO_STORAGE', 'indexeddb'),
videoOnly: getEnvBool('STUDIO_VIDEO_ONLY', false),
logo: getEnv('STUDIO_LOGO_URL') ? {
url: getEnv('STUDIO_LOGO_URL'),
position: getEnv('STUDIO_LOGO_POSITION', 'top-right'),
sizePct: getEnvNumber('STUDIO_LOGO_SIZE_PCT', 10),
opacity: getEnvNumber('STUDIO_LOGO_OPACITY', 0.85),
marginPct: getEnvNumber('STUDIO_LOGO_MARGIN_PCT', 2),
} : undefined,
/* …all other STUDIO_* keys… */
}} />
}

Profile cascade
@mostajs/config supports a profile prefix: MOSTA_ENV=DEV → DEV_STUDIO_AUDIO_STORAGE overrides STUDIO_AUDIO_STORAGE. Use it to keep DEV / STAGING / PROD configs in a single .env.
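The lookup order can be sketched as follows (illustrative re-implementation: use @mostajs/config's own getEnv in real code):

```typescript
// DEV_STUDIO_X overrides STUDIO_X when MOSTA_ENV=DEV — sketched lookup.
export function resolveEnv(
  env: Record<string, string | undefined>,
  key: string,
  fallback?: string,
): string | undefined {
  const profile = env.MOSTA_ENV
  if (profile) {
    const prefixed = env[`${profile}_${key}`]
    if (prefixed !== undefined) return prefixed // profile-specific value wins
  }
  return env[key] ?? fallback
}
```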
Legacy NEXT_PUBLIC_* fallback
If neither STUDIO_* nor the server wrapper are in place, the client page still honours the legacy build-time inlined NEXT_PUBLIC_* keys (NEXT_PUBLIC_RECORDING_MODE_DEFAULT, NEXT_PUBLIC_AUDIO_STORAGE_DEFAULT, …). Document them in .env.example for back-compat.
See examples/studio-app/.env.example for the full set of supported keys.
How to use — Server
The package ships server-side Next.js API route handlers for ffmpeg export and project persistence. Your Next.js host project just re-exports them.
1. Install server dependencies
npm install @mostajs/media @mostajs/orm better-sqlite3

Ensure ffmpeg is on the PATH (for the compose route).
2. Create the API routes (thin wrappers)
app/api/compose/route.ts — ffmpeg video assembly
export { POST } from '@mostajs/media/server/compose-route'
export const runtime = 'nodejs'
export const maxDuration = 300

app/api/projects/route.ts — list + create projects
export { GET, POST } from '@mostajs/media/server/projects-list-route'
export const runtime = 'nodejs'

app/api/projects/[id]/route.ts — get + update + delete project
export { GET, PUT, DELETE } from '@mostajs/media/server/projects-id-route'
export const runtime = 'nodejs'

3. Optional — Use the full capture/editor page
If you want the complete capture + editor UI (config form, recorder, editor, project list) as a single page :
// app/capture/page.tsx
export { default } from '@mostajs/media/pages/CaptureEditorPage'

This page provides:
- 📂 Open existing mp4/webm/gif for editing
- 💾 Saved projects list (via /api/projects + @mostajs/orm SQLite)
- 🖼 Create from images — multi-select screenshots → slideshow
- Screen recorder → editor → export
- Format recommendations table (duration × format → target size)
- Warnings per format (GIF > 45s slow, WebM re-encode, MP4 instant)
4. Cross-origin isolation (only if using client-side ffmpeg.wasm)
Only needed when exportUrl is NOT set (client-side fallback) :
// next.config.mjs
export default {
async headers() {
return [{
source: '/:path*',
headers: [
{ key: 'Cross-Origin-Opener-Policy', value: 'same-origin' },
{ key: 'Cross-Origin-Embedder-Policy', value: 'require-corp' },
],
}]
},
}

When exportUrl="/api/compose" is set, ffmpeg runs server-side — no COEP needed.
5. Server compose pipeline
The /api/compose route handles the full pipeline :
- Receives multipart body: manifest JSON + source blobs + sticker PNGs
- Remuxes webm sources (fixes MediaRecorder duration metadata)
- Per-clip: trim + speed + overlay stickers via filter_complex (libx264 ultrafast intermediate)
- Concat via ffmpeg concat demuxer
- Burn subtitles (if requested) via drawtext filter chain
- Post-process: MP4 (instant copy), GIF (2-pass palette + diff_mode=rectangle for slideshows), WebM (VP8 realtime)
- Returns the final blob
Performance for a 2-minute source:

| Format | Time | Notes |
|---|---|---|
| MP4 | ~20s | No re-encode (concat copy) |
| GIF (slideshow) | ~15s | 5fps + diff_mode for static images |
| GIF (video) | ~40s | 15fps, 2-pass palette |
| WebM | ~30s | VP8 realtime re-encode |
6. Project persistence schema
Projects are stored in ./data/projects.sqlite via @mostajs/orm. Schema :
{
name: 'Project',
collection: 'projects',
timestamps: true,
fields: {
name: { type: 'string', required: true },
data: { type: 'text', required: true }, // JSON: clips + subtitles + settings
},
}

Image clips are embedded as base64 inlineData in the JSON. Video sources are referenced by filename (the user re-selects them on reload).
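The inlineData embedding is a standard data-URL round trip. A sketch using node Buffers (the browser side would use FileReader / atob instead; the helper names are illustrative):

```typescript
// Encode raw image bytes as a data URL for embedding in the project JSON…
export function toInlineData(bytes: Uint8Array, mime = 'image/png'): string {
  return `data:${mime};base64,${Buffer.from(bytes).toString('base64')}`
}

// …and decode it back into bytes when the project is reloaded.
export function fromInlineData(dataUrl: string): Uint8Array {
  const b64 = dataUrl.slice(dataUrl.indexOf(',') + 1)
  return new Uint8Array(Buffer.from(b64, 'base64'))
}
```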
Hooks reference
| Hook | Purpose |
|---|---|
| useCamera() | Low-level webcam access (start, stop, capture frame, switch camera) |
| useVideoRecorder() | Webcam video recording with duration timer |
| useScreenCapture() | Screen sharing + recording, webcam baked in (canvas compositing), swap toggle, audio analyser, separate audio file, 4 chunk-storage strategies for both audio and video, paired baseId |
| useVideoEditor() | Full editor state : clips, overlays, subtitles, multi-track audio, logo, ad slots, project persistence, export |
Image utilities
import {
resizeImage, rotateImage, flipImage, cropImage,
adjustBrightness, adjustContrast,
dataUrlToBlob, fileToDataUrl,
} from '@mostajs/media/lib/image-utils'

Subtitle utilities
import { toSRT, toVTT, parseSRT, downloadText } from '@mostajs/media/lib/subtitle-format'
const srt = toSRT(subtitles, 'en') // → SRT string for English track
const vtt = toVTT(subtitles, 'fr') // → WebVTT string for French track
const imported = parseSRT(srtText, 'en') // → Subtitle[] from .srt file
downloadText(srt, 'subtitles-en.srt') // → browser file download

Integration in host project
This module exports React components, hooks, server route handlers, and menu contributions — but does NOT create Next.js pages. The host project creates thin wrapper pages that import from @mostajs/media.
Pages to create
| Route | Content |
|---|---|
| app/capture/page.tsx | export { default } from '@mostajs/media/pages/CaptureEditorPage' |
| app/dashboard/media/capture/page.tsx | <ImageCapture /> |
| app/dashboard/media/video/page.tsx | <VideoCapture /> |
| app/dashboard/media/gallery/page.tsx | <MediaGallery /> |
| app/dashboard/media/screen/page.tsx | <ScreenRecorder /> |
Menu contribution
import { mediaMenuContribution } from '@mostajs/media/lib/menu'
// Pass to buildMenuConfig() of @mostajs/menu

Changelog
- 1.12.0 — Multi-camera: useScreenCapture() returns cameras: MediaDeviceInfo[], selectedCameraId, refreshCameras(), switchCamera(deviceId). The switch replaces the webcam video track in-place in the canvas compositor — the MediaRecorder does not stop, the audio mix (mic) is preserved, and no chunks are dropped. startScreenShare({ cameraDeviceId, cameraLabelHint }) pre-selects a camera. <RecorderPreview /> overlays a <select> at the top-left when several cameras are detected (hidden otherwise). Automatic devicechange listener for hotplug. Runtime defaults: STUDIO_CAMERA_DEVICE_ID (stable deviceId) / STUDIO_CAMERA_LABEL_HINT (label fragment, robust against deviceId rotation after a privacy reset).
- 1.11.24 — Logo / signature watermark: sidebar editor (file picker, position, size %, opacity, margin %), live CSS preview overlay, ffmpeg [1:v]scale,format=rgba,colorchannelmixer=aa=opacity → overlay chain inserted between subtitle burn and format conversion. Runtime defaults via STUDIO_LOGO_URL / STUDIO_LOGO_POSITION / STUDIO_LOGO_SIZE_PCT / STUDIO_LOGO_OPACITY / STUDIO_LOGO_MARGIN_PCT.
- 1.11.23 — Source video without an audio stream no longer crashes export. compose-route runs ffprobe on each clip and injects an anullsrc for clips lacking audio so the multi-track amix filter graph stays valid.
- 1.11.22 — Reject MP3 (or any non-video) files used as a video clip — server ffprobe returns HTTP 422 with a clear message; the client also pre-validates on add. Fixes the previous "Stream specifier ':v' matches no streams" crash.
- 1.11.21 — <YouTubePublishButton /> (Google Identity Services OAuth2 implicit flow + YouTube Data API v3 resumable upload, fully client-side, reads NEXT_PUBLIC_STUDIO_YOUTUBE_CLIENT_ID). Ad-slot reservations on ProjectFile.adSlots (preroll / midroll / postroll / banner) — UI to declare them, JSON sidecar export.
- 1.11.20 — Multi-track audio: per-track startSec / endSec honoured at export via adelay + atrim in the server amix filter graph.
- 1.11.19 — Pixabay (royalty-free music) referenced in the audio-track sidebar as a recommended source.
- 1.11.18 — Multi-track audio chain: add several tracks (e.g. présentation → nostalgie → action → conquête), drag to reorder, mixed at export with amix.
- 1.11.17 — Hydration mismatch on the videoOnly checkbox fixed via a mounted state guard.
- 1.11.16 — Paired baseId (rec-<ts>-<rand>) shared between separate audio and video files → <id>-video.webm + <id>-audio.webm correlate automatically.
- 1.11.15 — Audio tracks now play live during preview: a synchronized <audio> element follows the <video> element (previously they were only mixed at export).
- 1.11.13 — Filesystem / server storage strategies returned an empty blob on stop (chunks not in memory). Fix: refetch via the object URL or fileHandle.getFile() before constructing the blob. Resolves MEDIA_ERR_SRC_NOT_SUPPORTED on T3 (server) / T4 (filesystem).
- 1.11.12 — Light theme migration aligned with the orm.amia.fr palette (#ffffff background, #0891b2 accent, #475569 soft text, Inter font). Demo source synchronized with the examples/studio-app/app/* scaffold (it was still on the old dark layout).
- 1.11.11 — Hydration mismatch on the filesystem option fixed: SSR-safe fsAccessSupported state via useEffect instead of typeof window during render.
- 1.11.10 — videoOnly flag: the recorded video file no longer embeds an audio track; the audio is forced into a separate file. Use case: pro editing / independent audio cleanup.
- 1.11.9 — Generic chunk-storage factory (lib/chunk-storage.ts) shared by audio AND video: 4 strategies (memory, indexeddb, filesystem, server), kind: 'audio' | 'video' parameter, baseId for a shared sessionId. Default video storage = indexeddb (crash-safe, constant RAM).
- 1.11.8 — Runtime configuration via @mostajs/config: a server-component wrapper (examples/studio-app/app/page.tsx) resolves STUDIO_* env keys at request time. Edit .env, restart PM2, no rebuild needed. Profile cascade: MOSTA_ENV=DEV → DEV_STUDIO_* keys override.
- 1.11.7 — <AudioOnlyRecorder />: standalone audio-only mode that bypasses screen capture, internal AudioContext + AnalyserNode + canvas VU meter, supports the same 4 chunk-storage strategies. Also a recordAudioSeparately option on useScreenCapture() to produce a separate audio file alongside the video.
- 1.11.6 — Sanitized examples/studio-app/ (no enterprise data leaked).
- 1.11.5 — <AudioLevelMeter />: 32-bar VU meter via <canvas> + requestAnimationFrame + Web Audio AnalyserNode. Camera ↔ screen swap toggle exposed by useScreenCapture().
- 1.11.4 — Webcam baked into the recording via canvas compositing: a requestAnimationFrame loop draws screen + webcam onto a hidden <canvas>, canvas.captureStream(30) feeds the MediaRecorder (no PiP loss). <RecorderPreview /> always mounts the webcam <video> (display toggle) to avoid the conditional-mount race condition. Record button enabled only after webcam getUserMedia resolves.
- 1.10.x — Production-ready scaffold under examples/studio-app/: Next.js 16 + Turbopack, start-studio.sh / deploy.sh / install.sh / update.sh, Apache vhost template, PM2 ecosystem config, Let's Encrypt automation. Live demo deployed at https://media.amia.fr/.
- 1.9.0 — Server integration: server/compose-route, server/projects-*-route, server/project-db (ORM + SQLite). pages/CaptureEditorPage (full capture + edit + save page). .npmignore + .gitignore for enterprise-private dirs.
- 1.8.0 — Project settings persist burnSubtitles + burnLang. Delete project with confirmation.
- 1.7.x — initialSubtitles prop, SRT import fix, blob reconstitution from inlineData on project load.
- 1.6.0 — saveProjectJSON / loadProjectJSON / getProjectData / setProjectData on useVideoEditor. onSaveRequested prop on VideoEditor.
- 1.5.x — Subtitle system: multi-language (8 langs), per-subtitle styling, SRT/VTT export, burn into video via drawtext, import .srt.
- 1.4.0 — initialClips + initialBlobMap props (multi-image slideshow builder).
- 1.3.x — Server-side export via the exportUrl prop (bypasses ffmpeg.wasm COEP issues). VP8 realtime for webm. probeBlobDuration for MediaRecorder webm. GIF diff_mode optimization for slideshows.
- 1.2.0 — VideoEditor + useVideoEditor, ffmpeg.wasm client-side, 7-icon sticker tray, webm/gif/mp4 export.
- 1.1.1 — ScreenRecorder + useScreenCapture, floating FAB widget.
- 1.0 — ImageCapture, VideoCapture, ImageEditor, MediaGallery, useCamera, useVideoRecorder, image utils.
License
AGPL-3.0-or-later + commercial license available — contact [email protected].
Author
Dr Hamid MADANI — [email protected]
