# @basementstudio/shader-lab

`@basementstudio/shader-lab` is a portable React runtime for Shader Lab compositions exported from the editor.
It supports three main use cases:
- Render a composition directly in React
- Use a composition as a texture in your own scene
- Use Shader Lab effect layers as postprocessing over your own scene
## Install

```sh
npm install @basementstudio/shader-lab three
```

or

```sh
bun add @basementstudio/shader-lab three
```

### Peer Dependencies

`react`, `react-dom`, `three`
## Requirements

- The runtime requires WebGPU support in the browser
- `ShaderLabComposition` and the hooks in this package are client-side APIs
- Media layers expect accessible asset URLs in `layer.asset.src`
- Composition texture output is canvas-backed and can be consumed by WebGL or WebGPU host scenes
- Postprocessing must run on the same `WebGPURenderer` as the scene texture you pass in
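Because the runtime is WebGPU-only, it can be worth feature-detecting before mounting anything. A minimal sketch; `supportsWebGPU` is our own helper, not a package export:

```typescript
// Hypothetical helper (not exported by @basementstudio/shader-lab):
// returns true when the environment exposes the WebGPU entry point.
export function supportsWebGPU(nav: { gpu?: unknown }): boolean {
  return nav != null && typeof nav === "object" && nav.gpu != null
}
```

In a component you could call `supportsWebGPU(navigator)` and render a fallback instead of `ShaderLabComposition` when it returns `false`. Note that `navigator.gpu` being present does not guarantee an adapter; a stricter check would also await `navigator.gpu.requestAdapter()`.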
Supported effect layers include ASCII, CRT, directional blur, dithering, halftone, ink, particle grid, pattern, pixelation, pixel sorting, posterize, slice, edge detect, displacement map, and chromatic aberration.
## API Overview

### High-level API

- `ShaderLabComposition`
- `useShaderLab`

### Advanced APIs

- `useShaderLabCanvasSource`
- `useShaderLabPostProcessingSource`
- `useShaderLabTextureSource`
- `ShaderLabCanvasSource`
- `ShaderLabPostProcessingSource`
- `ShaderLabTextureSource`

Use `useShaderLab` unless you specifically need manual timing or lifecycle control.
## 1. Render a Composition

```tsx
"use client"

import {
  ShaderLabComposition,
  type ShaderLabConfig,
} from "@basementstudio/shader-lab"

const config: ShaderLabConfig = {
  composition: {
    width: 1512,
    height: 909,
  },
  layers: [],
  timeline: {
    duration: 6,
    loop: true,
    tracks: [],
  },
}

export default function Example() {
  return (
    <div style={{ width: "100%", maxWidth: 1200 }}>
      <ShaderLabComposition config={config} />
    </div>
  )
}
```

Handle runtime errors:

```tsx
<ShaderLabComposition
  config={config}
  onRuntimeError={(message) => {
    console.error(message)
  }}
/>
```

## 2. Use a Composition as a Texture
`useShaderLab` can manage the runtime and return a ready-to-use `THREE.CanvasTexture`.
```tsx
"use client"

import {
  useShaderLab,
  type ShaderLabConfig,
} from "@basementstudio/shader-lab"
import { OrbitControls, PerspectiveCamera } from "@react-three/drei"
import { Canvas, useFrame } from "@react-three/fiber"
import { useMemo, useRef } from "react"
import * as THREE from "three"
import { WebGPURenderer } from "three/webgpu"

function Scene({ config }: { config: ShaderLabConfig }) {
  const { texture } = useShaderLab(config, {
    width: 1024,
    height: 1024,
  })

  const material = useMemo(
    () => new THREE.MeshBasicMaterial({ color: 0xffffff }),
    []
  )
  const meshRef = useRef<THREE.Mesh | null>(null)

  useFrame((_, delta) => {
    if (meshRef.current) {
      meshRef.current.rotation.y += delta * 0.3
    }
    if (texture) {
      texture.needsUpdate = true
      material.map = texture
      material.needsUpdate = true
    }
  })

  return (
    <>
      <color attach="background" args={["#0a0a0a"]} />
      <PerspectiveCamera makeDefault fov={50} position={[0, 0, 4]} />
      <mesh ref={meshRef} material={material}>
        <sphereGeometry args={[1, 64, 64]} />
      </mesh>
      <OrbitControls enableDamping />
    </>
  )
}

export function TexturedExample({ config }: { config: ShaderLabConfig }) {
  return (
    <Canvas
      gl={async (props) => {
        const renderer = new WebGPURenderer(
          props as ConstructorParameters<typeof WebGPURenderer>[0]
        )
        await renderer.init()
        return renderer
      }}
    >
      <Scene config={config} />
    </Canvas>
  )
}
```

### Texture Notes

- `texture` is backed by the package's internal output canvas
- `canvas` is also available from `useShaderLab` if you want to integrate at the raw canvas level
- The runtime updates the composition for you internally
- This texture path is portable across WebGL and WebGPU host scenes because the package output is canvas-backed
## 3. Use Shader Lab as Postprocessing
For postprocessing, `useShaderLab` exposes a postprocessing handle. You render your scene into a texture, then hand that texture to Shader Lab.
```tsx
"use client"

import {
  useShaderLab,
  type ShaderLabConfig,
} from "@basementstudio/shader-lab"
import { PerspectiveCamera } from "@react-three/drei"
import { Canvas, createPortal, useFrame, useThree } from "@react-three/fiber"
import { useEffect, useMemo, useRef } from "react"
import * as THREE from "three"
import { float, texture as tslTexture, uv, vec2 } from "three/tsl"
import { MeshBasicNodeMaterial, WebGPURenderer } from "three/webgpu"

function PostProcessedScene({ config }: { config: ShaderLabConfig }) {
  const { gl, scene, size } = useThree()
  const renderer = gl as unknown as WebGPURenderer
  const cameraRef = useRef<THREE.PerspectiveCamera | null>(null)
  const sceneTargetRef = useRef<THREE.WebGLRenderTarget | null>(null)

  const { postprocessing } = useShaderLab(config, {
    renderer,
    width: size.width,
    height: size.height,
  })

  const presentScene = useMemo(() => new THREE.Scene(), [])
  const presentCamera = useMemo(
    () => new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1),
    []
  )

  const presentTextureNode = useMemo(() => {
    const sampleUv = vec2(uv().x, float(1).sub(uv().y))
    return tslTexture(new THREE.Texture(), sampleUv)
  }, [])

  const presentMaterial = useMemo(() => {
    const material = new MeshBasicNodeMaterial()
    material.colorNode = presentTextureNode.rgb
    return material
  }, [presentTextureNode])

  useEffect(() => {
    const target = new THREE.WebGLRenderTarget(size.width, size.height)
    sceneTargetRef.current = target
    return () => {
      sceneTargetRef.current = null
      target.dispose()
    }
  }, [size.height, size.width])

  useEffect(() => {
    sceneTargetRef.current?.setSize(size.width, size.height)
    postprocessing.resize(size.width, size.height)
  }, [postprocessing, size.height, size.width])

  useFrame((state, delta) => {
    const target = sceneTargetRef.current
    const camera = cameraRef.current
    if (!(target && camera && postprocessing.ready)) return

    renderer.setRenderTarget(target)
    renderer.render(scene, camera)

    const output = postprocessing.render(
      target.texture,
      state.clock.elapsedTime,
      delta
    )
    if (output) {
      presentTextureNode.value = output
    }

    renderer.setRenderTarget(null)
    renderer.render(presentScene, presentCamera)
  }, 1)

  return (
    <>
      <PerspectiveCamera ref={cameraRef} makeDefault position={[0, 2, 5]} />
      <mesh position={[-1.2, 0, 0]}>
        <boxGeometry args={[1.4, 1.4, 1.4]} />
        <meshStandardMaterial color="#e74c3c" />
      </mesh>
      <mesh position={[1.2, 0, 0]}>
        <sphereGeometry args={[0.8, 32, 32]} />
        <meshStandardMaterial color="#3498db" />
      </mesh>
      {createPortal(
        <mesh frustumCulled={false}>
          <planeGeometry args={[2, 2]} />
          <primitive attach="material" object={presentMaterial} />
        </mesh>,
        presentScene
      )}
    </>
  )
}

export function PostProcessingExample({
  config,
}: {
  config: ShaderLabConfig
}) {
  return (
    <Canvas
      gl={async (props) => {
        const renderer = new WebGPURenderer(
          props as ConstructorParameters<typeof WebGPURenderer>[0]
        )
        await renderer.init()
        return renderer
      }}
    >
      <PostProcessedScene config={config} />
    </Canvas>
  )
}
```

### Postprocessing Notes
- Postprocessing is same-renderer only
- Pass your scene texture from the same `WebGPURenderer` you gave to `useShaderLab`
- WebGL host renderers are not supported for postprocessing
- Effect-only Shader Lab configs are the most natural fit here
- `postprocessing.texture` always points to the latest output texture
## useShaderLab

```tsx
const { canvas, ready, texture, postprocessing } = useShaderLab(config, options)
```

### Options
| Option | Description |
| --- | --- |
| `width` | Output width for the internal runtime |
| `height` | Output height for the internal runtime |
| `pixelRatio` | Optional pixel ratio override |
| `canvas` | Optional existing canvas to render the composition into |
| `renderer` | Optional shared `WebGPURenderer` used for postprocessing |
### Return Value

| Field | Description |
| --- | --- |
| `canvas` | Internal output canvas |
| `ready` | `true` when the managed texture path is initialized |
| `texture` | Managed `THREE.CanvasTexture` backed by the runtime canvas |
| `postprocessing` | Handle for same-renderer postprocessing |
### postprocessing

| Field | Description |
| --- | --- |
| `ready` | `true` when the postprocessing source is initialized |
| `texture` | Latest processed output texture |
| `resize(width, height, pixelRatio?)` | Resize the runtime targets |
| `render(inputTexture, time, delta)` | Render Shader Lab effects into an output texture |
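Taken together, the tables above amount to roughly the shape below. This is our own type sketch for orientation, not a type exported by the package; the canvas and texture fields are typed loosely because their concrete types come from the DOM and three.js:

```typescript
// Informal summary of the useShaderLab return value (our own sketch,
// not an exported type of @basementstudio/shader-lab).
interface PostProcessingHandle {
  ready: boolean
  // Latest processed output texture (a three.js texture at runtime).
  texture: unknown
  resize(width: number, height: number, pixelRatio?: number): void
  render(inputTexture: unknown, time: number, delta: number): unknown
}

interface UseShaderLabResult {
  // Internal output canvas (an HTMLCanvasElement in the browser).
  canvas: unknown
  ready: boolean
  // Managed THREE.CanvasTexture backed by the runtime canvas.
  texture: unknown
  postprocessing: PostProcessingHandle
}

// Do-nothing stub demonstrating that the shape above is internally consistent.
const stub: UseShaderLabResult = {
  canvas: null,
  ready: false,
  texture: null,
  postprocessing: {
    ready: false,
    texture: null,
    resize: () => {},
    render: () => null,
  },
}
```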
## Advanced APIs
These APIs are available when you want more direct control over timing, resize behavior, or object lifecycle.
Use them when:

- You already have your own render loop and want to drive updates manually
- You want the raw output canvas instead of a managed `CanvasTexture`
- You want to own the full postprocessing pipeline yourself
- You need to use the runtime outside React (but still in a browser/client runtime)
### Hooks

- `useShaderLabCanvasSource`
- `useShaderLabPostProcessingSource`
- `useShaderLabTextureSource`
- `useShaderLabTexture`
### useShaderLabCanvasSource

Use this when you want the raw canvas and manual timing.

```tsx
const { canvas, ready, resize, update } = useShaderLabCanvasSource(config, {
  width: 1024,
  height: 1024,
})

useFrame((state, delta) => {
  if (!ready) return
  update(state.clock.elapsedTime, delta)
})
```

This is useful when:

- You want to create your own `THREE.CanvasTexture`
- You want to feed the canvas into some other pipeline
- You want to decide exactly when the composition updates
### useShaderLabPostProcessingSource

Use this when you want to control the full scene-to-texture-to-screen flow yourself.

```tsx
const { ready, error, resize, texture, update } =
  useShaderLabPostProcessingSource(effectConfig, {
    renderer,
    width: size.width,
    height: size.height,
  })

useFrame((state, delta) => {
  if (!(ready && sceneTarget)) return
  renderer.setRenderTarget(sceneTarget)
  renderer.render(scene, camera)
  const output = update(sceneTarget.texture, state.clock.elapsedTime, delta)
  renderer.setRenderTarget(null)
  // present(output)
})
```

This is useful when:

- You already have your own render targets
- You want to decide how the final output is presented
- You need lower-level access than `useShaderLab().postprocessing`
### useShaderLabTextureSource

This exposes the lower-level internal texture source.

Use it only if you specifically want that internal texture path and understand the renderer constraints. For most cases, `useShaderLab` or `useShaderLabCanvasSource` is the better choice.
### Classes

- `ShaderLabCanvasSource`
- `ShaderLabPostProcessingSource`
- `ShaderLabTextureSource`

The class APIs are the non-React equivalents of the hooks.

Typical flow:

```tsx
const source = new ShaderLabCanvasSource(config, {
  width: 1024,
  height: 1024,
})

await source.initialize()
source.update(time, delta)
source.resize(width, height)
source.dispose()
```

Use the classes when:
- You are outside React
- You want full manual lifecycle control
- You are integrating Shader Lab into a custom runtime or framework
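The lifecycle above can be wrapped in a small driver. This is a sketch, not package code: `SourceLike` is our own interface mirroring the documented class surface, and in a real app the frame list would be replaced by a `requestAnimationFrame` loop:

```typescript
// Our own interface, mirroring the class API shown above.
interface SourceLike {
  initialize(): Promise<void>
  update(time: number, delta: number): void
  resize(width: number, height: number): void
  dispose(): void
}

// Hypothetical driver: initialize, size, run a fixed set of frames,
// and always dispose so GPU resources are released.
export async function runSource(
  source: SourceLike,
  frames: Array<{ time: number; delta: number }>,
  width: number,
  height: number
): Promise<void> {
  await source.initialize()
  source.resize(width, height)
  try {
    for (const { time, delta } of frames) {
      source.update(time, delta)
    }
  } finally {
    source.dispose()
  }
}
```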
## Included Runtime Support
- Gradient
- Text
- Custom shader
- Image and video sources
- Live camera input
- ASCII
- Pattern
- Ink
- Halftone
- Dithering
- CRT
- Particle grid
- Pixel sorting
## Utility Exports

- `createRuntimeClock`
- `advanceRuntimeClock`
- `buildRuntimeFrame`
- `evaluateTimelineForLayers`
- `resolveEvaluatedLayers`
- Runtime config and timeline types
## Notes

- `ShaderLabComposition` preserves the exported aspect ratio and fills its container width
- `useShaderLab` is the recommended entry point for most app integrations
- If you only need direct composition rendering in the DOM, use `ShaderLabComposition`
- If you need full manual control, use the lower-level source APIs
- Texture output is portable because it is backed by a managed canvas
- Postprocessing currently requires a shared `THREE.WebGPURenderer`
- If you use React Three Fiber with `[email protected]`, you may see a `THREE.Clock` deprecation warning from `@react-three/fiber`; that warning is upstream and not emitted by this package
## Links
- Website: basement.studio
- npm: @basementstudio/shader-lab
