# mtr-facemesh

v6.4.1
Face Mesh for MTR projects.
## Description
This repository contains the mtr-facemesh project. The module projects 3D masks (in this case, makeup) onto a face detected by an AI model in a video element.
## Usage
To use the module, add a `div` with `id="facemesh-holder"` and set the width and height of the canvas that will be shown.
```html
<div id="facemesh-holder" style="width: 282px; height: 500px;"></div>
```

After that, you can create the `FaceMesh` object with the following.
```js
import {FaceMesh, THREE} from './lib/mtr_facemesh.js';
...
const config = {
  divName: "facemesh-holder",
  modelConfigs: undefined,
  meshConfigs: [
    {
      alphaMapDir: './assets/imgs/face/mask_lips.jpg',
      ambientOcclusionMapDir: './assets/imgs/face/ambient_occlusion.png',
      normalMapDir: './assets/imgs/face/normal_texture_lips_3.png',
      color: 0xff0000,
      roughness: 0.25,
      metalness: 0.1,
      opacity: 0.3,
      clearcoat: 0.5,
      clearcoatRoughness: 1.0,
    },
  ],
  binaryDir: './lib',
  refineLandmarks: true,
  backend: 'webgl',
  flipCamera: true,
  verbose: true,
  showFPSPanel: true,
  showScreenLogs: true,
  loadedCallback,
  errorCallback,
  cameraStatusCallback,
  oneEuroFilterConfig: {
    mincutoff: 0.005,
    beta: 0.12,
    dcutoff: 1,
  },
  faceRetouchConfig: {
    retouchAlphaMask: './assets/imgs/face/facemesh_alpha_base_v4.jpg',
    opacity: 1.0,
    blurRadius: 10,
  },
};

faceMesh = new FaceMesh(config);
```

Here `config` is an object with the following properties:
- `divName`: HTML div ID of the AR holder. Default is `facemesh-holder`.
- `imageID`: HTML image element ID that will replace the camera.
- `modelConfig`: object of configuration for the 3D model:
  - `dir`: 3D model file path. File must be `.glb` or `.gltf`. No default.
  - `animationSpeed`: 3D model animation speed. Use a value in (0, 1) for slower animations and `animationSpeed > 1` for faster animations. Defaults to 1.
  - `scale`: sets how big the 3D model will be. Defaults to 1.
  - `position`: position of the model, considering the camera is at {0, 0, 0}. Defaults to global {0, 0, -distance}.
  - `rotation`: rotation of the model. Defaults to {0, 0, 0}.
  - `loop`: whether the model animation should loop. Default is true.
  - `showMannequin`: show the head occluder to better position the object. Default is false.
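The `modelConfig` options above can be combined like this. This is a minimal sketch: the `glasses.glb` path is a hypothetical asset, and the `{x, y, z}` object shape for `position` and `rotation` is an assumption based on the `{0, 0, -distance}` convention in the list.

```js
// Hypothetical modelConfig sketch -- the file path and values below are
// placeholders, not assets shipped with the library.
const modelConfig = {
  dir: './assets/models/glasses.glb', // hypothetical .glb path
  animationSpeed: 0.5,                // in (0, 1): half-speed animation
  scale: 1.2,                         // slightly larger than the default 1
  position: {x: 0, y: 0, z: -1},      // camera is assumed at {0, 0, 0}
  rotation: {x: 0, y: 0, z: 0},
  loop: true,
  showMannequin: false,               // enable while tuning the position
};
```

Enabling `showMannequin` during development makes it easier to line the object up with the head occluder before shipping.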
- `meshConfigs`: array of configuration for each makeup mesh:
  - `colorMapDir`: texture file path. File must be an image. No default.
  - `alphaMapDir`: alpha mask file path. File must be an image. No default.
  - `ambientOcclusionMapDir`: illumination mask file path. File must be an image. No default.
  - `normalMapDir`: normal map file path. File must be an image. No default.
  - `color`: makeup color in hexadecimal. Default is 0xffffff.
  - `metalness`: how much the material is like a metal. Wood or stone use 0.0, metals use 1.0. Default is 0.0.
  - `roughness`: how rough the material appears, from 0.0 to 1.0. Default is 0.0.
  - `opacity`: float in the range 0.0 - 1.0 indicating how transparent the material is. Default is 1.0.
  - `clearcoat`: intensity of the clear coat layer, from 0.0 to 1.0. Clear coat related properties enable multilayer materials that have a thin translucent layer over the base layer. Default is 0.0.
  - `clearcoatRoughness`: roughness of the clear coat layer, from 0.0 to 1.0. Default is 0.0.
- `binaryDir`: path to the face mesh binary files. No default.
- `refineLandmarks`: refine landmarks option. Default is true.
- `backend`: TensorFlow backend: `webgl`, `wasm` or `cpu`. Default is `webgl`.
- `flipCamera`: flips the feedback camera. Default is false.
- `verbose`: if set to true, shows logs on the console. Default is false.
- `loadedCallback`: function to call after the AI model is loaded. No default.
- `cameraStatusCallback`: function to call after the video permission (no default):
  - `'requesting'`: when the user is asked for the camera;
  - `'hasStream'`: when the user has granted access to the camera;
  - `'failed'`: when something went wrong with the camera.
- `errorCallback`: function to call after any error. No default.
- `showFPSPanel`: if set to true, shows the FPS on the main loop. Default is false.
- `showScreenLogs`: if set to true, shows the FPS for every step on the main loop. Default is false.
- `oneEuroFilterConfig`: One Euro Filter parameters:
  - `mincutoff`: minimum cutoff frequency. Decreasing the minimum cutoff frequency decreases slow-speed jitter. Default is 0.001.
  - `beta`: speed coefficient. Increasing the speed coefficient decreases speed lag. Default is 0.007.
  - `dcutoff`: cutoff frequency. Default is 1.
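The three One Euro Filter parameters interact as follows: `mincutoff` controls smoothing at low speeds, `beta` raises the cutoff as the signal moves faster (trading jitter for responsiveness), and `dcutoff` smooths the speed estimate itself. The sketch below is a standalone illustration of that mechanism, not the library's internal implementation.

```js
// Minimal One Euro Filter sketch (illustration only, not the library's code).
// mincutoff: lower -> less jitter at slow speeds.
// beta:      higher -> less lag at fast speeds.
// dcutoff:   cutoff for the internal derivative (speed) estimate.
class OneEuroFilter {
  constructor({mincutoff = 0.001, beta = 0.007, dcutoff = 1} = {}) {
    this.mincutoff = mincutoff;
    this.beta = beta;
    this.dcutoff = dcutoff;
    this.xPrev = null; // previous filtered value
    this.dxPrev = 0;   // previous filtered derivative
    this.tPrev = null; // previous timestamp (seconds)
  }

  // Exponential smoothing factor for a given cutoff frequency and time step.
  alpha(cutoff, dt) {
    const tau = 1 / (2 * Math.PI * cutoff);
    return 1 / (1 + tau / dt);
  }

  filter(x, t) {
    if (this.tPrev === null) {
      // First sample passes through unfiltered.
      this.tPrev = t;
      this.xPrev = x;
      return x;
    }
    const dt = t - this.tPrev;
    // Estimate and smooth the derivative with dcutoff.
    const dx = (x - this.xPrev) / dt;
    const aD = this.alpha(this.dcutoff, dt);
    const dxHat = aD * dx + (1 - aD) * this.dxPrev;
    // The cutoff grows with speed: fast motion -> higher cutoff -> less lag.
    const cutoff = this.mincutoff + this.beta * Math.abs(dxHat);
    const a = this.alpha(cutoff, dt);
    const xHat = a * x + (1 - a) * this.xPrev;
    this.tPrev = t;
    this.xPrev = xHat;
    this.dxPrev = dxHat;
    return xHat;
  }
}

// Smooth a noisy landmark coordinate sampled at ~30 fps, using the same
// parameter values as the config example above.
const f = new OneEuroFilter({mincutoff: 0.005, beta: 0.12, dcutoff: 1});
const smoothed = [0, 1, 0, 1, 0].map((x, i) => f.filter(x, i / 30));
```

With these settings the raw 0/1 jitter is damped to much smaller oscillations, which is exactly the behavior the landmark positions need.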
- `faceRetouchConfig`: face retouch parameters (one object):
  - `retouchAlphaMask`: path to the alpha mask file. The retouch is applied only on the mask. No default.
  - `blurRadius`: face retouch intensity. Default is 10.
  - `color`: foundation color. Default is 0xffffff (white).
  - `opacity`: foundation opacity. Default is 0.0 (transparent).
- `faceRetouchConfigs`: face retouch parameters (array of multiple face retouch objects). Each object of the array must contain:
  - `retouchAlphaMask`: path to the alpha mask file. The retouch is applied only on the mask. No default.
  - `blurRadius`: face retouch intensity. Default is 10.
  - `color`: foundation color. Default is 0xffffff (white).
  - `opacity`: foundation opacity. Default is 0.0 (transparent).
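When several retouch regions are needed, `faceRetouchConfigs` takes one object per mask, and each entry can later be targeted by index through `setFoundation`. A sketch, assuming a hypothetical second mask file (`cheeks_mask.jpg` is not an asset shipped with the library):

```js
// Sketch of a multi-mask retouch setup. The first entry reuses the mask
// from the usage example; the second path is hypothetical.
const faceRetouchConfigs = [
  {
    retouchAlphaMask: './assets/imgs/face/facemesh_alpha_base_v4.jpg',
    blurRadius: 10,  // default intensity
    color: 0xffffff, // white foundation
    opacity: 0.0,    // start fully transparent
  },
  {
    retouchAlphaMask: './assets/imgs/face/cheeks_mask.jpg', // hypothetical
    blurRadius: 6,
    color: 0xffc0cb, // pink blush tone
    opacity: 0.2,
  },
];
```

The indexes of this array are the ones accepted as the third parameter of `setFoundation` (see Foundation Color below in this README's Features section).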
- `foundationMatchConfig`: if set, executes the foundation match algorithm after the `togglePhoto` method:
  - `option`: tone algorithm option.
  - `colors`: array of foundation colors in RGB and their names.
The method `requestPermissions` must be called after the model is loaded, using the `loadedCallback` config property, since the face detector only starts predicting after the video is set. The method receives the camera configuration as a parameter, in the same format as `getUserMedia`. Inside `loadedCallback` you must also add lights to the scene, since the textures are very sensitive to lighting. The following code is an example of a `loadedCallback`.
```js
const loadedCallback = function () {
  scene = faceMesh.getScene();
  const hemiLight = new THREE.HemisphereLight(0xffffff, 0x080820, 0.5);
  scene.add(hemiLight);
  const ambientLight = new THREE.AmbientLight(0x404040, 0.1);
  scene.add(ambientLight);
  faceMesh.requestPermissions({
    facingMode: {ideal: 'user'},
    width: window.innerHeight,
    height: window.innerWidth,
  });
};
```

With these lines of code it is already possible to run the face mesh. If you want to customize the three.js scene, you can do so through the getter methods `faceMesh.getRenderer()`, `faceMesh.getScene()`, `faceMesh.getOrthographicCamera()`, `faceMesh.getPerspectiveCamera()`, `faceMesh.getGroup()` and `faceMesh.getMeshes()`. For example, you can change material parameters with `faceMesh.getMeshes()[0].material.color = new THREE.Color(0xff0000);`.
## Features

The AR object also includes other functionalities.
### Partial Render
The partial render works like a curtain for the three.js renderer and only renders the region from 0 to `windowSlider.value * canvas.width`. For this, you need to create a checkbox and a slider ranging from 0 to 1, and then use them as follows.
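The checkbox and slider referenced by the handlers (`window-checkbox`, `window-slider`) can be declared as, for example (the `step` value is just a suggestion; any granularity between 0 and 1 works):

```html
<input type="checkbox" id="window-checkbox" />
<input type="range" id="window-slider" min="0" max="1" step="0.01" style="display: none;" />
```

The slider starts hidden here because the handler below only shows it once the checkbox is enabled.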
```js
const windowCheckbox = document.getElementById('window-checkbox');
const windowSlider = document.getElementById('window-slider');

windowCheckbox.oninput = function () {
  faceMesh.setWindow(windowCheckbox.checked);
  windowSlider.style.display = windowCheckbox.checked ? 'block' : 'none';
};

windowSlider.oninput = function () {
  faceMesh.setWindowFactor(windowSlider.value);
};
```

### Photograph
The photograph functionality is straightforward. When the `togglePhoto` method is called, the animation loop freezes or unfreezes depending on the current state. You only need to create a button in the HTML and use the following code. The method returns a promise that resolves to a canvas containing the photograph.
```js
const screenshotButton = document.getElementById('photo-button');
screenshotButton.onclick = async function () {
  const frozenVideoCanvas = await faceMesh.togglePhoto();
};
```

### Share
The next feature complements the photograph feature. When the `share` method is called, the object takes a screenshot and then shares it through the selected social media. You only need to create a button to trigger the method.
```js
const shareButton = document.getElementById('share-button');
shareButton.onclick = function () {
  faceMesh.share('screenshot');
};
```

### Stop and Continue
The application can be stopped and resumed by calling the `.stop()` and `.continue()` methods. They are useful when you want to stop the application with a close button, or to start it instantly after the user clicks a start button. The demo shows an example of usage.
```js
const closeButton = document.getElementById('close-button');
let toggleCloseContinue = true;

closeButton.onclick = function () {
  if (toggleCloseContinue) {
    faceMesh.stop();
    closeButton.textContent = 'Continue';
  } else {
    faceMesh.continue();
    closeButton.textContent = 'Close';
  }
  toggleCloseContinue = !toggleCloseContinue;
};
```

### Foundation Color
If `faceRetouchConfig` and/or `faceRetouchConfigs` are set, it is possible to apply a color to the user's face, as if a foundation had been applied, using the `.setFoundation` method. The function receives two parameters, `color` and `opacity`. The color is a number array with 3 positions in RGB space, and the opacity is a floating point number between 0.0 and 1.0 giving the strength of the color. If there are multiple face retouch objects, you can select which one to change with the third parameter, `index`. The following example makes up the face with a red foundation.
```js
faceMesh.setFoundation([255, 0, 0], 0.5, 0);
```

### Light Exposure
Light exposure is a feature related to face color match, since it detects the illumination on the user's face. If the user's face is under- or overexposed, `faceMesh.exposure()` will return -2 or 2, respectively.
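In practice you usually want to turn that number into a user-facing hint before running the color match. The helper below is hypothetical (not part of the library) and only relies on the documented extremes: -2 for underexposed and 2 for overexposed faces.

```js
// Hypothetical helper (not part of mtr-facemesh) that turns the value
// returned by faceMesh.exposure() into a user-facing hint. Values between
// the extremes are treated as acceptable lighting.
function exposureHint(exposure) {
  if (exposure <= -2) return 'Too dark - find more light';
  if (exposure >= 2) return 'Too bright - avoid direct light';
  return 'Lighting OK';
}
```

This pairs naturally with the polling example below, replacing the raw number in the button with a readable message.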
```html
<button id="exposure-log" style="position: absolute; left: 5%; top: 15%;">Loading</button>
```

```js
function anim() {
  document.getElementById('exposure-log').textContent = `${faceMesh.exposure()}`;
}
setInterval(anim, 500);
```

### Face Color Match
This feature matches the user's face color to the closest color in `foundationMatchConfig.colors`. To use it, call the `.getFaceClosestColor()` method, which returns an object with the attributes `foundationID`, `skinColor` and `distances`.
- `foundationID`: the index of the best foundation in the `foundationMatchConfig.colors` array;
- `skinColor`: the extracted color of the user's skin in RGB space;
- `distances`: an array of objects with attributes `distance` and `index`. The `index` is an index into the `foundationMatchConfig.colors` array, and the `distance` is the Euclidean distance from `foundationMatchConfig.colors[foundationID]` to `foundationMatchConfig.colors[distances[..].index]`. This array is sorted in ascending order by the `distance` attribute.
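The shape of the `distances` attribute can be reproduced with plain Euclidean distance in RGB space. The sketch below is an illustration only, not the library's internal algorithm; it reuses the three colors from the `foundationMatchConfig` example and assumes a hypothetical extracted skin color.

```js
// Illustration of the ranking described above: Euclidean distance in RGB
// space, sorted ascending, mirroring the {distance, index} object shape.
function rankColors(skinColor, colors) {
  return colors
    .map((c, index) => ({
      index,
      distance: Math.hypot(
        skinColor[0] - c.color[0],
        skinColor[1] - c.color[1],
        skinColor[2] - c.color[2],
      ),
    }))
    .sort((a, b) => a.distance - b.distance);
}

const colors = [
  {color: [203, 176, 145], name: '00'},
  {color: [169, 129, 83], name: '50'},
  {color: [77, 52, 48], name: '100'},
];
// A hypothetical extracted skin color in RGB space.
const distances = rankColors([180, 140, 100], colors);
const foundationID = distances[0].index; // best match: the '50' tone
```

The closest entry plays the role of `foundationID`, and the sorted array matches the documented ascending order of `distances`.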
```js
FaceMesh(
  ...,
  foundationMatchConfig: {
    option: 1,
    colors: [
      {color: [203, 176, 145], name: '00'},
      {color: [169, 129, 83], name: '50'},
      {color: [77, 52, 48], name: '100'},
    ]
  }
)
```

```js
window.afterScreenshot = function afterScreenshot() {
  // matches the face color with one of the 3 colors set in foundationMatchConfig
  const distances = faceMesh.getFaceClosestColor().distances;
  // second best color distance, and index
  console.log(distances[1].distance, distances[1].index);
  const foundationId = faceMesh.getFaceClosestColor().foundationID;
  console.log(window.foundations[foundationId], foundationId);
  // uses the setFoundation feature to set the matched color on faceRetouch as a foundation
  faceMesh.setFoundation(window.foundations[foundationId].color, 0.3, 0);
};
```

### Using an image instead of camera
If `faceMesh` is created with `imageID` set, the video camera will be replaced with the image loaded in the HTML with `id=<imageID>`, and calling the `.requestPermissions()` method is not necessary. For example, an image tag is loaded with id `photo-id`, and `faceMesh` is created with it replacing the camera.
```html
<img id="photo-id" src="./any-image" />
```

```js
faceMesh = new FaceMesh({
  imageID: "photo-id",
  ...
});
```

## Demo
The demos folder has demos with all the features. The photo demo shows an example of using an image loaded in the HTML, and the webcam demo uses the user's camera. The application loads a canvas and shows blush, lipstick and eye shadow makeup on the face with the face retouch. It is also possible to render partially with the `setWindow` method, freeze the video like a photograph, and share the photo.
## WASM

1. Activate the EMSDK:

   ```sh
   source /path/to/emsdk/emsdk_env.sh
   ```

2. Generate the Makefile with the cmake command:

   ```sh
   cd ./wasm/build/
   emcmake cmake .
   ```

3. Compile:

   ```sh
   emmake make
   ```

The wasm and js files will be inside the output directory.
