@itsuzef/openclaw-openmontage v1.0.3
# openclaw-openmontage

An OpenClaw plugin that adds AI video production capabilities via a local OpenMontage workspace.
## What it does
Injects skills that teach OpenClaw agents to delegate video production tasks — animated explainers, cinematic trailers, avatar videos, screen demos, motion graphics, and more — to your local OpenMontage installation.
The plugin contributes no runtime JS code. It is a skills package: structured markdown files that get injected into the agent's system prompt and teach it how to engage OpenMontage's instruction-driven pipeline system.
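The skills-package mechanism described above amounts to concatenating markdown files onto the agent's system prompt. A minimal sketch of that idea in Python (illustrative only — the function name and loading logic here are assumptions, not OpenClaw's actual implementation):

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, skills_dir: str) -> str:
    """Append each skill markdown file to the base system prompt.

    Illustrative sketch: OpenClaw's real injection mechanism may differ.
    """
    sections = [base_prompt]
    for skill in sorted(Path(skills_dir).glob("*.md")):
        sections.append(skill.read_text())
    return "\n\n".join(sections)
```

The point is that there is no runtime behavior to ship: the plugin's entire contribution is text the agent reads.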
## Requirements
- OpenClaw ≥ 2026.1.26
- OpenMontage cloned locally and set up (run `make setup` in the OpenMontage repo)
## Install
```shell
openclaw install @itsuzef/openclaw-openmontage
```

Or from a local checkout:
```json
{
  "plugins": {
    "installs": [
      { "source": "path", "sourcePath": "/path/to/openclaw-openmontage" }
    ]
  }
}
```

## Configure
```shell
openclaw config set plugins.entries.openmontage.enabled true
openclaw config set plugins.entries.openmontage.config.workspacePath /path/to/OpenMontage
```

Or in your openclaw config JSON directly:
```json
{
  "plugins": {
    "entries": {
      "openmontage": {
        "enabled": true,
        "config": {
          "workspacePath": "/Users/you/Documents/OpenMontage"
        }
      }
    }
  }
}
```

## Usage
Once configured, ask your OpenClaw agent anything video-related:
- "Make a 60-second animated explainer about how black holes form"
- "Create a Ghibli-style animated video of an underwater city at dusk"
- "Build a cinematic trailer for our product launch"
- "Make a screen demo showing how to install our CLI tool"
- "Turn this blog post into a 90-second video"
- "25-second kinetic typography launch reel using HyperFrames"
The agent will run preflight, present a production plan and cost estimate, and execute the pipeline with human approval gates at each creative stage.
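The stage-by-stage approval flow can be sketched as a simple loop (conceptual pseudologic only — stage names and the `approve` callback are hypothetical, not OpenMontage's actual code):

```python
def run_pipeline(stages, plan, approve):
    """Walk pipeline stages, pausing for a human approval gate before each one.

    `approve(stage, plan)` stands in for the human sign-off at each creative stage.
    """
    results = []
    for stage in stages:
        if not approve(stage, plan):
            # Rejection at any gate halts the whole pipeline before work runs.
            raise RuntimeError(f"Stage {stage!r} rejected; pipeline halted")
        results.append(f"{stage}: done")
    return results
```

The key property is that no creative stage executes without an explicit go-ahead, so cost is committed incrementally rather than up front.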
## Cost
Most pipelines have a zero-cost path using free tools (Piper TTS + stock media + Remotion + FFmpeg). Premium paths add AI image/video generation and ElevenLabs TTS.
| Path | Cost | Unlocks |
|---|---|---|
| Zero key | $0 | Pipelines with a free path (explainer, animation, screen-demo, hybrid, clip-factory) |
| FAL_KEY | ~$0.15–$1.50 | FLUX AI image generation |
| Full setup | ~$1–$3 | Video gen (Veo/Kling/Runway/Seedance) + ElevenLabs + music |
## Pipelines
| Pipeline | Best For | Status |
|---|---|---|
| animated-explainer | Explainers, educational content, data viz | production |
| cinematic | Trailers, teasers, hype edits | production |
| animation | Motion graphics, Ghibli/anime-style | production |
| screen-demo | CLI demos, developer walkthroughs | production |
| avatar-spokesperson | Presenter videos, announcements | production |
| hybrid | Source footage + AI support visuals | production |
| talking-head | Speaker videos | beta |
| clip-factory | Multi-clip from long source | beta |
| podcast-repurpose | Podcast highlights | beta |
| localization-dub | Translate and dub | beta |
| documentary-montage | Real-footage montage, tone-poem style | beta |
## How it works
OpenMontage is agent-driven: the AI agent reads pipeline manifests and stage director skills, makes creative decisions, and uses Python tools for actual generation. This plugin teaches OpenClaw agents:
- What OpenMontage can produce (`skills/openmontage/openmontage.md`)
- How to engage the workspace — preflight, pipeline selection, plan presentation (`skills/openmontage/delegate.md`)
- Which pipeline fits which brief (`skills/openmontage/pipelines.md`)
The agent navigates to your OpenMontage workspace using its existing file and shell tools, reads `AGENT_GUIDE.md` from that workspace, and follows the instruction-driven pipeline system from there.
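A preflight check along these lines would confirm the workspace is usable before delegation (an illustrative sketch — the real preflight logic lives in the OpenMontage workspace, and the exact checks here are assumptions):

```python
from pathlib import Path

def preflight(workspace_path: str) -> list[str]:
    """Return a list of problems blocking delegation; an empty list means ready."""
    problems = []
    ws = Path(workspace_path)
    if not ws.is_dir():
        problems.append(f"workspace not found: {workspace_path}")
    elif not (ws / "AGENT_GUIDE.md").is_file():
        # AGENT_GUIDE.md is the entry point the agent reads first.
        problems.append("AGENT_GUIDE.md missing; run `make setup` in the workspace")
    return problems
```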
## Contributing
Improvements to the skill content are the most valuable contributions. If OpenMontage adds a new pipeline or changes its capability surface, update the matching skill file.
- Fork and clone this repo
- Edit `skills/openmontage/*.md`
- Test by pointing your local plugin install at the fork
- Open a PR with a clear description of what changed and why
