Overview
This capability processes the OpenGL textures of the camera and outputs blendshape data that conforms to the Apple ARKit standard. For details, see ARFaceAnchor. You can pass the data to Unity to drive your model or use it to implement other features.
Integration (Android)
API calls
1. Enable the feature.
//XmagicApi.java
//featureName = XmagicConstant.FeatureName.ANIMOJI_52_EXPRESSION
public void setFeatureEnableDisable(String featureName, boolean enable);
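For example, assuming you already hold an initialized XmagicApi instance (referred to here as xmagicApi), enabling the capability could look like the following sketch.
// Sketch: enable the 52-blendshape expression output on an initialized XmagicApi instance.
xmagicApi.setFeatureEnableDisable(XmagicConstant.FeatureName.ANIMOJI_52_EXPRESSION, true);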
2. Configure the callback of facial keypoints.
Versions 2.6.0 and earlier use the following method:
void setYTDataListener(XmagicApi.XmagicYTDataListener ytDataListener)

public interface XmagicYTDataListener {
    void onYTDataUpdate(String data);
}
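A minimal registration sketch for v2.6.0 and earlier, again assuming an initialized XmagicApi instance named xmagicApi:
// Sketch: receive the JSON string described below on each processed frame.
xmagicApi.setYTDataListener(new XmagicApi.XmagicYTDataListener() {
    @Override
    public void onYTDataUpdate(String data) {
        // "data" is the JSON string described below; parse it to read expression_weights.
    }
});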
Version 3.0.0 uses the following method.
void setAIDataListener(XmagicApi.OnAIDataListener aiDataListener)

public interface OnAIDataListener {
    void onFaceDataUpdated(List<TEFaceData> faceDataList);
    void onHandDataUpdated(List<TEHandData> handDataList);
    void onBodyDataUpdated(List<TEBodyData> bodyDataList);
    void onAIDataUpdated(String data);
}
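A corresponding sketch for v3.0.0, assuming the same initialized xmagicApi instance; the empty callbacks are placeholders for data types this capability does not use:
// Sketch: register the v3.0.0 listener and read the JSON string from onAIDataUpdated.
xmagicApi.setAIDataListener(new XmagicApi.OnAIDataListener() {
    @Override
    public void onFaceDataUpdated(List<TEFaceData> faceDataList) { /* facial keypoint objects */ }

    @Override
    public void onHandDataUpdated(List<TEHandData> handDataList) { /* hand data, unused here */ }

    @Override
    public void onBodyDataUpdated(List<TEBodyData> bodyDataList) { /* body data, unused here */ }

    @Override
    public void onAIDataUpdated(String data) {
        // "data" is the JSON string described below, containing expression_weights.
    }
});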
onYTDataUpdate and onAIDataUpdated return a JSON string that contains the information of up to five faces:
{
    "face_info":[{
        "trace_id":5,
        "face_256_point":[
            180.0,
            112.2,
            ...
        ],
        "face_256_visible":[
            0.85,
            ...
        ],
        "out_of_screen":true,
        "left_eye_high_vis_ratio":1.0,
        "right_eye_high_vis_ratio":1.0,
        "left_eyebrow_high_vis_ratio":1.0,
        "right_eyebrow_high_vis_ratio":1.0,
        "mouth_high_vis_ratio":1.0,
        "expression_weights":[
            0.12,
            -0.32,
            ...
        ]
    },
    ...
    ]
}
Field descriptions
trace_id: The face ID. Faces that carry the same face ID across frames of a continuous video stream belong to the same person.
expression_weights: The real-time blendshape data. The array contains 52 elements, each with a value in the range 0 to 1.0, corresponding in order to the following blendshapes: {"eyeBlinkLeft","eyeLookDownLeft","eyeLookInLeft","eyeLookOutLeft","eyeLookUpLeft","eyeSquintLeft","eyeWideLeft","eyeBlinkRight","eyeLookDownRight","eyeLookInRight","eyeLookOutRight","eyeLookUpRight","eyeSquintRight","eyeWideRight","jawForward","jawLeft","jawRight","jawOpen","mouthClose","mouthFunnel","mouthPucker","mouthRight","mouthLeft","mouthSmileLeft","mouthSmileRight","mouthFrownRight","mouthFrownLeft","mouthDimpleLeft","mouthDimpleRight","mouthStretchLeft","mouthStretchRight","mouthRollLower","mouthRollUpper","mouthShrugLower","mouthShrugUpper","mouthPressLeft","mouthPressRight","mouthLowerDownLeft","mouthLowerDownRight","mouthUpperUpLeft","mouthUpperUpRight","browDownLeft","browDownRight","browInnerUp","browOuterUpLeft","browOuterUpRight","cheekPuff","cheekSquintLeft","cheekSquintRight","noseSneerLeft","noseSneerRight","tongueOut"}
The other fields are facial keypoint information. Whether they are returned depends on the type of license you use. If you only need facial expression data, you can ignore those fields.
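If you only need the blendshape weights, a small parser over the structure above is enough. The following is a minimal sketch that uses Android's built-in org.json package; the class and method names are illustrative, and error handling is reduced to returning null.
// Sketch: extract the 52 expression_weights of the first face from the callback JSON.
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public final class ExpressionWeightsParser {

    // Returns the blendshape weights of the first detected face, or null if no face is present.
    public static float[] parseFirstFaceWeights(String data) {
        try {
            JSONObject root = new JSONObject(data);
            JSONArray faces = root.optJSONArray("face_info");
            if (faces == null || faces.length() == 0) {
                return null;
            }
            JSONArray weights = faces.getJSONObject(0).getJSONArray("expression_weights");
            float[] result = new float[weights.length()];
            for (int i = 0; i < weights.length(); i++) {
                result[i] = (float) weights.getDouble(i);
            }
            return result;
        } catch (JSONException e) {
            return null;
        }
    }
}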