1. To use the audio-to-expression capability by itself, import the dynamic framework `Audio2Exp.framework` in the SDK into your project. Select your target, and under the **General** tab, find **Frameworks, Libraries, and Embedded Content**, then set `Audio2Exp.framework` to **Embed & Sign** (`Audio2Exp.framework` is about 7 MB).
2. To use it together with the Tencent Effect SDK, import the two dynamic frameworks `Audio2Exp.framework` and `YTCommonXMagic.framework` into your project. Select your target, and under the **General** tab, find **Frameworks, Libraries, and Embedded Content**, then set both `Audio2Exp.framework` and `YTCommonXMagic.framework` to **Embed & Sign**.
3. Copy `audio2exp.bundle` into your project directory. When calling `initWithModelPath:` of `Audio2ExpApi`, pass in the path of the model file.

The APIs are as follows:

| API | Description |
|---|---|
| `+ (int)initWithModelPath:(NSString*)modelPath;` | Initializes the SDK. You need to pass in the path of the model file. A return value of `0` indicates that initialization succeeded. |
| `+ (NSArray *)parseAudio:(NSArray *)inputData;` | Parses audio. The input must be one-channel audio with a sample rate of 16 kHz, passed as an array of length 267 (267 sampling points). The output is a float array with 52 elements, which correspond to 52 blendshapes. The value range of each element is 0–1, and their sequence is specified by Apple: `{"eyeBlinkLeft","eyeLookDownLeft","eyeLookInLeft","eyeLookOutLeft","eyeLookUpLeft","eyeSquintLeft","eyeWideLeft","eyeBlinkRight","eyeLookDownRight","eyeLookInRight","eyeLookOutRight","eyeLookUpRight","eyeSquintRight","eyeWideRight","jawForward","jawLeft","jawRight","jawOpen","mouthClose","mouthFunnel","mouthPucker","mouthRight","mouthLeft","mouthSmileLeft","mouthSmileRight","mouthFrownRight","mouthFrownLeft","mouthDimpleLeft","mouthDimpleRight","mouthStretchLeft","mouthStretchRight","mouthRollLower","mouthRollUpper","mouthShrugLower","mouthShrugUpper","mouthPressLeft","mouthPressRight","mouthLowerDownLeft","mouthLowerDownRight","mouthUpperUpLeft","mouthUpperUpRight","browDownLeft","browDownRight","browInnerUp","browOuterUpLeft","browOuterUpRight","cheekPuff","cheekSquintLeft","cheekSquintRight","noseSneerLeft","noseSneerRight","tongueOut"}` |
| `+ (int)releaseSdk;` | Releases resources. Call this API when you no longer need the capability. |
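Because `parseAudio:` consumes fixed-size windows, a longer mono 16 kHz buffer is typically split into consecutive 267-sample frames, each of which is passed to the API in turn. A minimal framing sketch in C (the function name `frame_audio` and the zero-padding of the trailing partial frame are illustrative assumptions, not part of the SDK):

```c
#include <stddef.h>
#include <string.h>

#define FRAME_LEN 267  /* samples per parseAudio: call (16 kHz mono) */

/* Split `total` samples into FRAME_LEN-sized frames, zero-padding the
 * last frame if it is only partially filled. `frames` must hold at least
 * ((total + FRAME_LEN - 1) / FRAME_LEN) * FRAME_LEN floats.
 * Returns the number of frames produced. */
size_t frame_audio(const float *samples, size_t total, float *frames) {
    size_t nframes = (total + FRAME_LEN - 1) / FRAME_LEN;
    for (size_t i = 0; i < nframes; i++) {
        size_t start = i * FRAME_LEN;
        size_t n = (total - start < FRAME_LEN) ? (total - start) : FRAME_LEN;
        memcpy(frames + start, samples + start, n * sizeof(float));
        if (n < FRAME_LEN)  /* zero-pad the trailing partial frame */
            memset(frames + start + n, 0, (FRAME_LEN - n) * sizeof(float));
    }
    return nframes;
}
```

Each 267-float frame can then be wrapped in an `NSArray` of `NSNumber`s and handed to `parseAudio:`.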
Sample code:

```objectivec
// Initialize the Audio-to-Expression SDK
NSString *path = [[NSBundle mainBundle] pathForResource:@"audio2exp" ofType:@"bundle"];
int ret = [Audio2ExpApi initWithModelPath:path];
// Convert audio to blendshape data
NSArray *emotionArray = [Audio2ExpApi parseAudio:floatArr];
// Release the SDK
[Audio2ExpApi releaseSdk];

// Use with the Tencent Effect SDK
// Initialize the SDK
self.beautyKit = [[XMagic alloc] initWithRenderSize:previewSize assetsDict:assetsDict];
// Load the avatar materials
[self.beautyKit loadAvatar:bundlePath exportedAvatar:nil completion:nil];
// Pass the blendshape data to the SDK, and the effects will be applied
[self.beautyKit updateAvatarByExpression:emotionArray];
```
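If your recorder delivers signed 16-bit PCM (a common capture format), the samples must be converted to floats before they are packed into the `parseAudio:` input array. A hedged sketch in C (the helper name `pcm16_to_float` and the 1/32768 scaling are assumptions; match the actual output format of your recorder):

```c
#include <stddef.h>
#include <stdint.h>

/* Convert signed 16-bit mono PCM samples to normalized floats in
 * [-1, 1), the usual form for a float audio input array. */
void pcm16_to_float(const int16_t *pcm, size_t n, float *out) {
    for (size_t i = 0; i < n; i++)
        out[i] = (float)pcm[i] / 32768.0f;
}
```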
For the audio recording implementation, see `TXCAudioRecorder`; for the overall calling flow, see `VoiceViewController` and the related classes.