Last updated: 2024-07-18 14:26:14

    Business Process

    This section summarizes some common business processes in online karaoke, helping you better understand the implementation process of the entire scenario.
    Song request process
    Solo singing process
    Lead singer process
    Chorus process
    Audience process
    The following figure shows the process of requesting songs from a music repository on the business side and playing them using the TRTC SDK.
    
    
    
    The following figure shows the process of solo singing turn-taking, that is, the performer enters the room to perform, stops performing, and exits the room.
    
    
    
    The following figure shows the real-time chorus process from the lead singer's perspective, that is, the lead singer initiates the chorus, stops the chorus, and exits the room.
    
    The following figure shows the real-time chorus process from the chorus member's perspective, that is, a chorus member joins the chorus, stops the chorus, and exits the room.
    
    
    
    The following figure shows the audience-side process in an online karaoke scenario, that is, the audience enters the room to listen to songs with synchronized lyrics.
    
    
    

    Integration Preparation

    Step 1. Activating the service.

    Online karaoke scenarios are usually built with two paid Tencent Cloud PaaS services: Tencent Real-Time Communication (TRTC) and the Intelligent Music Solution. TRTC provides real-time audio and video interaction capabilities, while the Intelligent Music Solution provides lyric recognition, smart composition, music recognition, and music scoring capabilities.
    Activate TRTC service.
    Activate the Intelligent Music service.
    1. First, log in to the Tencent Real-Time Communication (TRTC) console to create an application. You can upgrade the TRTC application version as needed; for example, the professional edition unlocks more value-added features.
    
    
    
    Note:
    It is recommended to create two applications for testing and production environments, respectively. Each Tencent Cloud account (UIN) is given 10,000 minutes of free duration every month for one year.
    TRTC offers monthly subscription plans including the experience edition (default), basic edition, and professional edition. Different value-added feature services can be unlocked. For details, see Version Features and Monthly Subscription Plan Instructions.
    2. After an application is created, you can see its basic information in the Application Management - Application Overview section. Keep the SDKAppID and SDKSecretKey safe for later use, and guard against key leakage, which could lead to traffic theft.
    
    
    

    Preparation

    1. Go to the Purchase Page to activate the music service, and select the features you need, such as music scoring.
    2. Create an AK/SK Key Pair in CAM (namely, a programmable access user that does not require log-in or any user permissions).
    3. Create a COS Bucket, and in the COS Bucket Management interface, authorize the read and write permissions of the COS Bucket to the created programmable access user.
    4. Prepare the parameters.
    operateUin: Tencent Cloud sub-user's account ID.
    cosConfig: COS related parameters.
    secretId: Bucket's secretId.
    secretKey: Bucket's secretKey.
    bucket: Bucket's name.
    region: Bucket's region, for example, ap-guangzhou.

    Activation and registration.

    After the preparation is completed, initiate a request to complete registration and activation. The estimated wait time is about 2 minutes.
    Initiate request.
    Request result:
    curl -X POST \
    http://service-mqk0mc83-1257411467.bj.apigw.tencentcs.com/release/register \
    -H 'Content-Type: application/json' \
    -H 'Cache-control: no-cache' \
    -d '{
        "requestId": "test-register-service",
        "action": "Register",
        "registerRequest": {
            "operateUin": <operateUin>,
            "userName": <customedName>,
            "cosConfig": {
                "secretId": <CosConfig.secretId>,
                "secretKey": <CosConfig.secretKey>,
                "bucket": <CosConfig.bucket>,
                "region": <CosConfig.region>
            }
        }
    }'
    {
        "requestId": "test-register-service",
        "registerInfo": {
            "tmpContentId": <tmpContentId>,
            "tmpSecretId": <tmpSecretId>,
            "tmpSecretKey": <tmpSecretKey>,
            "apiGateSecretId": <apiGateSecretId>,
            "apiGateSecretKey": <apiGateSecretKey>,
            "demoCosPath": "UIN_demo/run_musicBeat.py",
            "usageDescription": "Download the python version demo file [UIN_demo/run_musicBeat.py] from the COS bucket [CosConfig.bucket], replace the input file in the demo, and then execute python run_musicBeat.py",
            "message": "Registration successful, and thank you for registering.",
            "createdAt": <createdAt>,
            "updatedAt": <updatedAt>
        }
    }

    Run verification.

    After the above activation and registration are completed, an executable Python demo based on the music beat recognition capability will be generated in the demoCosPath directory. Run the command python run_musicBeat.py in a networked environment to verify.
    Note:
    For more detailed intelligent music solution integration instructions, see Integration Guide.

    Step 2: Importing SDK.

    The TRTC SDK is now available on CocoaPods. We recommend integrating the SDK via CocoaPods.
    1. Install CocoaPods.
    Enter the following command in a terminal window (you need to install Ruby on your Mac first):
    sudo gem install cocoapods
    2. Create a Podfile.
    Go to the project directory, and enter the following command. A Podfile file will then be created in the project directory.
    pod init
    3. Edit the Podfile.
    Choose the appropriate version for your project and edit the Podfile.
    platform :ios, '8.0'
    target 'App' do
    # TRTC Lite Edition
    # The installation package has the smallest incremental size, but it supports only two features: Real-Time Communication (TRTC) and live playback via TXLivePlayer.
    pod 'TXLiteAVSDK_TRTC', :podspec => 'https://liteav.sdk.qcloud.com/pod/liteavsdkspec/TXLiteAVSDK_TRTC.podspec'
    # Pro Edition
    # Includes a wide range of features such as Real-Time Communication (TRTC), TXLivePlayer for live streaming playback, TXLivePusher for RTMP push streams, TXVodPlayer for on-demand playback, and UGSV for short video recording and editing.
    # pod 'TXLiteAVSDK_Professional', :podspec => 'https://liteav.sdk.qcloud.com/pod/liteavsdkspec/TXLiteAVSDK_Professional.podspec'
    
    end
    4. Update and install the SDK.
    Enter the following command in a terminal window to update the local repository files and install the SDK.
    pod install
    Or use the following command to update the local repository.
    pod update
    Upon the completion of pod command execution, an .xcworkspace project file integrated with the SDK will be generated. Double-click to open it.
    Note:
    If the pod search fails, it is recommended to try updating the local repo cache of pod. The update command is as follows.
    pod setup
    pod repo update
    rm ~/Library/Caches/CocoaPods/search_index.json
    Besides CocoaPods integration, you can also choose to download the SDK and manually import it. For details, see Manually Integrating the TRTC SDK.

    Step 3: Project configuration.

    1. In karaoke scenarios, the TRTC SDK needs to be granted microphone permission. Add the following entry to your app's Info.plist; it corresponds to the system prompt shown when microphone permission is requested.
    Privacy - Microphone Usage Description, with a prompt string explaining the purpose of microphone use.
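For reference, the corresponding raw Info.plist entry looks like the following (the prompt string here is an illustrative placeholder; use wording that matches your app):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is needed to capture your voice while you sing.</string>
```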
    
    
    
    2. If you need your app to continue running certain features in the background, open Xcode, select the current project, switch Background Modes ON under Capabilities, and check Audio, AirPlay, and Picture in Picture, as shown below:
    
    
    

    Step 4: Authentication and authorization.

    UserSig is a security signature designed by Tencent Cloud to prevent malicious attackers from misappropriating your cloud service usage rights. TRTC validates this credential when a user enters a room.
    Debugging stage: UserSig can be generated through two methods, for debugging and testing purposes only: client sample code and console access.
    Formal operation stage: it is recommended to generate UserSig on your server for a higher security level, which prevents key leakage through client reverse engineering.
    The specific implementation process is as follows:
    1. Before calling the SDK's initialization function, your app must first request UserSig from your server.
    2. Your server computes the UserSig based on the SDKAppID and UserID.
    3. The server returns the computed UserSig to your app.
    4. Your app passes the obtained UserSig into the SDK through a specific API.
    5. The SDK submits the SDKAppID + UserID + UserSig to Tencent Cloud CVM for verification.
    6. Tencent Cloud verifies the UserSig and confirms its validity.
    7. After the verification is passed, real-time audio and video services will be provided to the TRTC SDK.
    
    
    
    Note:
    The local computation method of UserSig during the debugging stage is not recommended for application in an online environment. It is prone to reverse engineering, leading to key leakage.
    We provide server computation source code for UserSig in multiple programming languages (Java/GO/PHP/Nodejs/Python/C#/C++). For details, see Server Computation of UserSig.
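As a rough illustration of what the server-side computation involves, the following Python sketch follows the public TLS-Sig-API-v2 scheme (an HMAC-SHA256 signature over the identity fields, then zlib compression and a URL-safe Base64 variant). Treat this as a simplified sketch only; in production, use the official server libraries mentioned above.

```python
import base64
import hashlib
import hmac
import json
import time
import zlib

def gen_user_sig(sdk_app_id, secret_key, user_id, expire=86400 * 180):
    """Sketch of server-side UserSig generation (TLS-Sig-API-v2 style)."""
    now = int(time.time())
    # Sign the identity fields with the application's SDKSecretKey.
    raw = ("TLS.identifier:" + user_id + "\n"
           + "TLS.sdkappid:" + str(sdk_app_id) + "\n"
           + "TLS.time:" + str(now) + "\n"
           + "TLS.expire:" + str(expire) + "\n")
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), raw.encode(), hashlib.sha256).digest()
    ).decode()
    doc = {
        "TLS.ver": "2.0",
        "TLS.identifier": user_id,
        "TLS.sdkappid": sdk_app_id,
        "TLS.expire": expire,
        "TLS.time": now,
        "TLS.sig": sig,
    }
    # Compress the JSON document, then apply the scheme's URL-safe Base64 variant.
    compressed = zlib.compress(json.dumps(doc).encode())
    return base64.b64encode(compressed).decode().translate(str.maketrans("+/=", "*-_"))
```

Your app would then fetch this value from your server over an authenticated channel and pass it into TRTCParams.userSig.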

    Step 5: Initializing the SDK.

    // Create TRTC SDK instance (Single Instance Pattern).
    self.trtcCloud = [TRTCCloud sharedInstance];
    // Set event listeners.
    self.trtcCloud.delegate = self;
    
    // Notifications from various SDK events (e.g., error codes, warning codes, audio and video status parameters, etc.).
    - (void)onError:(TXLiteAVError)errCode errMsg:(nullable NSString *)errMsg extInfo:(nullable NSDictionary *)extInfo {
    NSLog(@"%d: %@", errCode, errMsg);
    }
    
    - (void)onWarning:(TXLiteAVWarning)warningCode warningMsg:(nullable NSString *)warningMsg extInfo:(nullable NSDictionary *)extInfo {
    NSLog(@"%d: %@", warningCode, warningMsg);
    }
    
    // Remove event listener.
    self.trtcCloud.delegate = nil;
    // Terminate TRTC SDK instance (Singleton Pattern).
    [TRTCCloud destroySharedIntance];
    Note:
    It is recommended to listen to SDK event notifications. Perform log printing and handling for some common errors. For details, see Error Code Table.

    Scenario 1: Solo singing turn-taking

    Perspective 1: Performer actions

    Sequence diagram

    
    
    
    1. Enter the room.
    - (void)enterRoomWithRoomId:(NSString *)roomId userID:(NSString *)userId {
    TRTCParams *params = [[TRTCParams alloc] init];
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = [self generateUserSig:userId];
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // It is recommended to enter the room as an audience role.
    params.role = TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    Note:
    To better transmit SEI messages for lyric synchronization, it is recommended to choose TRTCAppSceneLIVE for room entry scenarios.
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    NSLog(@"Enter room succeed!");
    // Enable the experimental API for black frame insertion.
    [self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableBlackStream\",\"params\":{\"enable\":true}}"];
    } else {
    // result indicates the error code when you fail to enter the room.
    NSLog(@"Enter room failed!");
    }
    }
    Note:
    Under the pure audio mode, the performer needs to enable the insertion of black frames to carry SEI messages. This API should be called after successfully entering the room.
    2. Go live on streams.
    // Switch to the anchor role.
    [self.trtcCloud switchRole:TRTCRoleAnchor];
    
    // Event callback for switching the role.
    - (void)onSwitchRole:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
    if (errCode == ERR_NULL) {
    // Set media volume type.
    [self.trtcCloud setSystemVolumeType:TRTCSystemVolumeTypeMedia];
    // Upstream local audio streams and set audio quality.
    [self.trtcCloud startLocalAudio:TRTCAudioQualityMusic];
    }
    }
    Note:
    In karaoke scenarios, it is recommended to set the full-range media volume and music quality to achieve a high-fidelity listening experience.
    3. Song selection and performance.
    Search for songs, and obtain music resources.
    Search for songs and acquire music resources through the business backend. Obtain identifiers such as the MusicId, the song's URL (MusicUrl), and the lyrics URL (LyricsUrl).
    It is recommended that the business side select an appropriate music content provider to supply licensed music resources.
    Play accompaniment and start singing.
    // Obtain audio effects management.
    self.audioEffectManager = [self.trtcCloud getAudioEffectManager];
    
    // originMusicId: Custom identifier for the original vocal music. originMusicUrl: URL of the original vocal music resource.
    TXAudioMusicParam *originMusicParam = [[TXAudioMusicParam alloc] init];
    originMusicParam.ID = originMusicId;
    originMusicParam.path = originMusicUrl;
    // Whether to publish the original vocal music to remote (otherwise play locally only).
    originMusicParam.publish = YES;
    
    // accompMusicId: Custom identifier for the accompaniment music. accompMusicUrl: URL of the accompaniment music resource.
    TXAudioMusicParam *accompMusicParam = [[TXAudioMusicParam alloc] init];
    accompMusicParam.ID = accompMusicId;
    accompMusicParam.path = accompMusicUrl;
    // Whether to publish the accompaniment to remote (otherwise play locally only).
    accompMusicParam.publish = YES;
    
    // Start playing the original vocal music.
    [self.audioEffectManager startPlayMusic:originMusicParam onStart:^(NSInteger errCode) {
    // onStart
    } onProgress:^(NSInteger progressMs, NSInteger durationMs) {
    // onProgress
    } onComplete:^(NSInteger errCode) {
    // onComplete
    }];
    
    // Start playing the accompaniment music.
    [self.audioEffectManager startPlayMusic:accompMusicParam onStart:^(NSInteger errCode) {
    // onStart
    } onProgress:^(NSInteger progressMs, NSInteger durationMs) {
    // onProgress
    } onComplete:^(NSInteger errCode) {
    // onComplete
    }];
    
    // Switch to the original vocal music.
    [self.audioEffectManager setMusicPlayoutVolume:originMusicId volume:100];
    [self.audioEffectManager setMusicPublishVolume:originMusicId volume:100];
    [self.audioEffectManager setMusicPlayoutVolume:accompMusicId volume:0];
    [self.audioEffectManager setMusicPublishVolume:accompMusicId volume:0];
    
    // Switch to the accompaniment music.
    [self.audioEffectManager setMusicPlayoutVolume:originMusicId volume:0];
    [self.audioEffectManager setMusicPublishVolume:originMusicId volume:0];
    [self.audioEffectManager setMusicPlayoutVolume:accompMusicId volume:100];
    [self.audioEffectManager setMusicPublishVolume:accompMusicId volume:100];
    Note:
    In karaoke scenarios, both the original vocal and accompaniment need to be played simultaneously (distinguished by MusicID). The switch between the original vocal and accompaniment is achieved by adjusting the local and remote playback volumes.
    If the music being played has dual audio tracks (including both the original vocal and accompaniment), switching between them can be achieved by specifying the music's playback track using setMusicTrack.
    4. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Synchronize local lyrics, and transmit song progress via SEI.
    [self.audioEffectManager startPlayMusic:musicParam onStart:^(NSInteger errCode) {
    // Start playing music.
    } onProgress:^(NSInteger progressMs, NSInteger durationMs) {
    // Determine whether seek is needed based on the latest progress and the local lyrics progress deviation.
    // Song progress is transmitted by sending an SEI message.
    NSDictionary *dic = @{
    @"musicId": @(self.musicId),
    @"progress": @(progressMs),
    @"duration": @(durationMs),
    };
    JSONModel *json = [[JSONModel alloc] initWithDictionary:dic error:nil];
    [self.trtcCloud sendSEIMsg:json.toJSONData repeatCount:1];
    } onComplete:^(NSInteger errCode) {
    // Music playback completed.
    }];
    Note:
    The frequency of the SEI messages sent by the performer is determined by the event callback frequency. Also, the playback progress can be actively synchronized on a schedule through getMusicCurrentPosInMS.
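The deviation check behind the seek decision is plain arithmetic; the sketch below shows it in Python with the same JSON payload shape as above (the 500 ms threshold is an assumed tolerance to tune against your lyric control):

```python
import json

SEEK_THRESHOLD_MS = 500  # assumed tolerance before forcing a lyric seek

def build_sei_payload(music_id, progress_ms, duration_ms):
    # Same JSON shape as the payload sent via sendSEIMsg above.
    return json.dumps({"musicId": music_id, "progress": progress_ms,
                       "duration": duration_ms}).encode()

def handle_sei_payload(payload, local_progress_ms, threshold_ms=SEEK_THRESHOLD_MS):
    """Return (musicId, progress) to seek to, or None if within tolerance."""
    msg = json.loads(payload)
    deviation = msg["progress"] - local_progress_ms
    if abs(deviation) > threshold_ms:
        return (msg["musicId"], msg["progress"])
    return None
```

Keeping a tolerance band avoids jittery seeks when the SEI progress and the local lyric position differ by only a few frames.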
    5. Become a listener and exit the room.
    // Switch to the audience role.
    [self.trtcCloud switchRole:TRTCRoleAudience];
    
    // Event callback for switching the role.
    - (void)onSwitchRole:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
    if (errCode == ERR_NULL) {
    // Stop playing accompaniment music.
    [[self.trtcCloud getAudioEffectManager] stopPlayMusic:self.musicId];
    // Stop local audio capture and publishing.
    [self.trtcCloud stopLocalAudio];
    }
    }
    
    // Exit the room.
    [self.trtcCloud exitRoom];
    
    // Exit room event callback.
    - (void)onExitRoom:(NSInteger)reason {
    if (reason == 0) {
    NSLog(@"Proactively call exitRoom to exit the room.");
    } else if (reason == 1) {
    NSLog(@"Removed from the current room by the server.");
    } else if (reason == 2) {
    NSLog(@"The current room is dissolved.");
    }
    }
    Note:
    After all resources occupied by the SDK are released, the SDK fires the onExitRoom callback to notify you.
    If you want to call enterRoom again or switch to another audio and video SDK, wait for the onExitRoom callback before proceeding. Otherwise, you may encounter exceptional issues such as the camera or microphone being forcibly occupied.

    Perspective 2: Listener actions

    Sequence diagram

    
    
    
    1. Enter the room.
    // Enter the room.
    - (void)enterRoomWithRoomId:(NSString *)roomId userID:(NSString *)userId {
    TRTCParams *params = [[TRTCParams alloc] init];
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = [self generateUserSig:userId];
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // It is recommended to enter the room as an audience role.
    params.role = TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    NSLog(@"Enter room succeed!");
    } else {
    // result indicates the error code when you fail to enter the room.
    NSLog(@"Enter room failed!");
    }
    }
    Note:
    To better transmit SEI messages for lyric synchronization, it is recommended to choose TRTCAppSceneLIVE for room entry scenarios.
    Under the automatic subscription mode (default), audiences automatically subscribe to and play the on-mic anchor's audio and video streams upon entering the room.
    2. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Listener end lyric synchronization
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
    if (available) {
    [self.trtcCloud startRemoteView:userId view:nil];
    } else {
    [self.trtcCloud stopRemoteView:userId];
    }
    }
    
    - (void)onRecvSEIMsg:(NSString *)userId message:(NSData *)message {
    JSONModel *json = [[JSONModel alloc] initWithData:message error:nil];
    NSDictionary *dic = json.toDictionary;
    int32_t musicId = [dic[@"musicId"] intValue];
    NSInteger progress = [dic[@"progress"] integerValue];
    NSInteger duration = [dic[@"duration"] integerValue];
    // ......
    // TODO: The logic of updating the lyric control.
    // Based on the received latest progress and the local lyrics progress deviation, determine whether a lyric control seek is necessary.
    // ......
    }
    Note:
    Listeners need to actively subscribe to the performer's video streams in order to receive the SEI messages carried by black frames.
    3. Exit the room.
    // Exit the room.
    [self.trtcCloud exitRoom];
    
    // Exit room event callback.
    - (void)onExitRoom:(NSInteger)reason {
    if (reason == 0) {
    NSLog(@"Proactively call exitRoom to exit the room.");
    } else if (reason == 1) {
    NSLog(@"Removed from the current room by the server.");
    } else if (reason == 2) {
    NSLog(@"The current room is dissolved.");
    }
    }

    Scenario 2: Real-time chorus

    Perspective 1: Lead singer actions

    Sequence diagram

    
    
    
    1. Dual instances enter the room.
    - (void)enterRoomWithRoomId:(NSString *)roomId userID:(NSString *)userId {
    // Create a TRTCCloud primary instance (vocal instance).
    TRTCCloud *mainCloud = [TRTCCloud sharedInstance];
    // Create a TRTCCloud sub-instance (music instance).
    TRTCCloud *subCloud = [mainCloud createSubCloud];
    // The primary instance (vocal instance) enters the room.
    TRTCParams *params = [[TRTCParams alloc] init];
    params.strRoomId = roomId;
    params.userId = userId;
    params.userSig = userSig;
    params.sdkAppId = SDKAppID;
    params.role = TRTCRoleAnchor;
    [mainCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    // The sub-instance enables manual subscription mode. By default it does not subscribe to remote streams.
    [subCloud setDefaultStreamRecvMode:NO video:NO];
    // The sub-instance (music instance) enters the room.
    TRTCParams *bgmParams = [[TRTCParams alloc] init];
    bgmParams.strRoomId = roomId;
    // The sub-instance user ID must not duplicate that of any other user in the room.
    bgmParams.userId = [userId stringByAppendingString:@"_bgm"];
    bgmParams.userSig = userSig;
    bgmParams.sdkAppId = SDKAppID;
    bgmParams.role = TRTCRoleAnchor;
    [subCloud enterRoom:bgmParams appScene:TRTCAppSceneLIVE];
    }
    Note:
    In a real-time chorus solution, the lead singer end must create a primary instance and a sub-instance to publish the voice and the accompaniment music, respectively.
    The sub-instance does not need to subscribe to other users' audio streams in the room, so it is recommended to enable manual subscription mode, which must be set before entering the room.
    2. Configure settings after entering the room.
    // Event callback for the result of primary instance entering the room.
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // The primary instance unsubscribes from the music stream published by the sub-instance.
    [self.trtcCloud muteRemoteAudio:[self.userId stringByAppendingString:@"_bgm"] mute:YES];
    // The primary instance uses the experimental API to enable black frame insertion.
    [self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableBlackStream\",\"params\":{\"enable\":true}}"];
    // The primary instance uses the experimental API to enable chorus mode.
    [self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableChorus\",\"params\":{\"enable\":true,\"audioSource\":0}}"];
    // The primary instance uses the experimental API to enable low-latency mode.
    [self.trtcCloud callExperimentalAPI:@"{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":true}}"];
    // The primary instance enables volume level callback.
    TRTCAudioVolumeEvaluateParams *aveParams = [[TRTCAudioVolumeEvaluateParams alloc] init];
    aveParams.interval = 300;
    [self.trtcCloud enableAudioVolumeEvaluation:YES withParams:aveParams];
    // The primary instance sets the global media volume type.
    [self.trtcCloud setSystemVolumeType:TRTCSystemVolumeTypeMedia];
    // The primary instance captures and publishes local audio, and sets audio quality.
    [self.trtcCloud startLocalAudio:TRTCAudioQualityMusic];
    } else {
    // result indicates the error code when you fail to enter the room.
    NSLog(@"Enter room failed");
    }
    }
    
    // Event callback for the result of sub-instance entering the room.
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // The sub-instance uses the experimental API to enable chorus mode.
    [self.subCloud callExperimentalAPI:@"{\"api\":\"enableChorus\",\"params\":{\"enable\":true,\"audioSource\":1}}"];
    // The sub-instance uses the experimental API to enable low-latency mode.
    [self.subCloud callExperimentalAPI:@"{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":true}}"];
    // The sub-instance sets global media volume type.
    [self.subCloud setSystemVolumeType:TRTCSystemVolumeTypeMedia];
    // The sub-instance sets audio quality.
    [self.subCloud setAudioQuality:TRTCAudioQualityMusic];
    } else {
    // result indicates the error code when you fail to enter the room.
    NSLog(@"Enter room failed");
    }
    }
    Note:
    Both the primary instance and sub-instance must use the experimental APIs to enable chorus mode and low-latency mode to optimize the chorus experience. Note the difference in the audioSource parameter.
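Since callExperimentalAPI takes a JSON string, hand-escaping is error-prone; a small helper (sketched in Python here, though the same idea applies in Objective-C with NSJSONSerialization) can serialize the argument instead:

```python
import json

def experimental_api_call(api, params):
    """Serialize a callExperimentalAPI argument instead of hand-escaping JSON."""
    return json.dumps({"api": api, "params": params}, separators=(",", ":"))

# Chorus mode: audioSource 0 for the vocal (primary) instance,
# audioSource 1 for the music (sub) instance, as in the snippets above.
chorus_main = experimental_api_call("enableChorus", {"enable": True, "audioSource": 0})
low_latency = experimental_api_call("setLowLatencyModeEnabled", {"enable": True})
```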
    3. Push the mixed stream back to the room.
    - (void)startPublishMediaToRoomWithRoomId:(NSString *)roomId userId:(NSString *)userId {
    // Create TRTCPublishTarget object.
    TRTCPublishTarget *target = [[TRTCPublishTarget alloc] init];
    // After mixing, the stream is relayed back to the room.
    target.mode = TRTCPublishMixStreamToRoom;
    TRTCUser *mixStreamIdentity = [[TRTCUser alloc] init];
    mixStreamIdentity.strRoomId = roomId;
    // The mixing robot's user ID must not duplicate that of any other user in the room.
    mixStreamIdentity.userId = [userId stringByAppendingString:@"_robot"];
    target.mixStreamIdentity = mixStreamIdentity;
    // Set the encoding parameters of the transcoded audio stream (can be customized).
    TRTCStreamEncoderParam *encoderParam = [[TRTCStreamEncoderParam alloc] init];
    encoderParam.audioEncodedChannelNum = 2;
    encoderParam.audioEncodedKbps = 64;
    encoderParam.audioEncodedCodecType = 2;
    encoderParam.audioEncodedSampleRate = 48000;
    // Set the encoding parameters of the transcoded video stream (black frame mixing required).
    encoderParam.videoEncodedFPS = 15;
    encoderParam.videoEncodedGOP = 3;
    encoderParam.videoEncodedKbps = 30;
    encoderParam.videoEncodedWidth = 64;
    encoderParam.videoEncodedHeight = 64;
    // Set audio mixing parameters.
    TRTCStreamMixingConfig *mixingConfig = [[TRTCStreamMixingConfig alloc] init];
    // By default, leave this field empty. It indicates that all audio in the room will be mixed.
    mixingConfig.audioMixUserList = nil;
    // Configure video mixed-stream template (black frame mixing required).
    TRTCVideoLayout *layout = [[TRTCVideoLayout alloc] init];
    mixingConfig.videoLayoutList = @[layout];
    // Start mixing and pushing back.
    [self.trtcCloud startPublishMediaStream:target encoderParam:encoderParam mixingConfig:mixingConfig];
    }
    Note:
    To maintain alignment between chorus vocals and accompaniment music, it is recommended to enable pushing the mixed stream back to the room. The on-mic chorus members mutually subscribe to single streams, and off-mic audiences by default only subscribe to mixed streams.
    The mixing robot, acting as an independent user, enters the room to pull, mix, and push streams. Its user ID must not duplicate that of any other user in the room; otherwise, the two users may kick each other out of the room.
    4. Search for and request songs.
    Search for songs and acquire music resources through the business backend. Obtain identifiers such as the MusicId, the song's URL (MusicUrl), and the lyrics URL (LyricsUrl).
    It is recommended that the business side select an appropriate music content provider to supply licensed music resources.
    5. NTP synchronization.
    - (void)updateNetworkTimeExample {
    [TXLiveBase sharedInstance].delegate = self;
    [TXLiveBase updateNetworkTime];
    }
    
    - (void)onUpdateNetworkTime:(int)errCode message:(NSString *)errMsg {
    // errCode 0: Time synchronization successful and deviation within 30 ms. 1: Time synchronization successful but deviation possibly above 30 ms. -1: Time synchronization failed.
    if (errCode == 0) {
    // Time synchronization successful and NTP timestamp obtained.
    NSInteger ntpTime = [TXLiveBase getNetworkTimestamp];
    } else {
    NSLog(@"Time synchronization failed, and you can try re-synchronization.");
    }
    }
    Note:
    NTP time synchronization results can reflect the current network quality of the application user. To ensure a good chorus experience, it is recommended not to allow users to initiate chorus if time synchronization fails.
    6. Send chorus signaling.
    - (void)sendChorusSignalExample {
    __weak typeof(self) weakSelf = self;
    NSTimer *timer = [NSTimer timerWithTimeInterval:1.0 repeats:YES block:^(NSTimer * _Nonnull timer) {
    __strong typeof(weakSelf) strongSelf = weakSelf;
    NSDictionary *dic = @{
    @"cmd": @"startChorus",
    // Agreed chorus start time: Current NTP time + delayed playback time (for example, 3 seconds).
    @"startPlayMusicTS": @([TXLiveBase getNetworkTimestamp] + 3000),
    @"musicId": @(self.musicId),
    @"musicDuration": @([[strongSelf.subCloud getAudioEffectManager] getMusicDurationInMS:strongSelf.originMusicUri]),
    };
    JSONModel *json = [[JSONModel alloc] initWithDictionary:dic error:nil];
    [strongSelf.trtcCloud sendCustomCmdMsg:1 data:json.toJSONData reliable:NO ordered:NO];
    }];
    [[NSRunLoop currentRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
    }
    Note:
    The lead singer needs to cyclically broadcast chorus signaling to the room at a fixed time interval (e.g., every 1 second), so that new users who join mid-session can also participate in the chorus.
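On the receiving end, the agreed startPlayMusicTS from the signaling is compared with the local NTP time to decide how to start playback, including late joiners seeking into the song. A sketch of that decision logic (the function name and return shape are illustrative, not part of the SDK):

```python
def chorus_start_action(start_play_ts_ms, music_duration_ms, now_ntp_ms):
    """Decide how to join the chorus given the agreed NTP start time.

    Returns (action, delay_ms, seek_ms).
    """
    offset = now_ntp_ms - start_play_ts_ms
    if offset < 0:
        # Start time not reached yet: schedule playback after -offset ms, from the top.
        return ("schedule", -offset, 0)
    if offset < music_duration_ms:
        # Joined mid-song: start immediately, seeking to the elapsed position.
        return ("start", 0, offset)
    # The song has already finished; wait for the next round of signaling.
    return ("finished", 0, 0)
```

This is why the signaling carries both startPlayMusicTS and musicDuration: a user who joins mid-session can compute the elapsed position locally.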
    7. Load and play accompaniment.
    // Obtain audio effects management.
    TXAudioEffectManager *audioEffectManager = [self.subCloud getAudioEffectManager];
    
    // originMusicId: Custom identifier for the original vocal music. originMusicUrl: URL of the original vocal music resource.
    TXAudioMusicParam *originMusicParam = [[TXAudioMusicParam alloc] init];
    originMusicParam.ID = originMusicId;
    originMusicParam.path = originMusicUrl;
    // Whether to publish the original vocal music to remote (otherwise play locally only).
    originMusicParam.publish = YES;
    // Music start playing time point (in milliseconds).
    originMusicParam.startTimeMS = 0;
    
    // accompMusicId: Custom identifier for the accompaniment music. accompMusicUrl: URL of the accompaniment music resource.
    TXAudioMusicParam *accompMusicParam = [[TXAudioMusicParam alloc] init];
    accompMusicParam.ID = accompMusicId;
    accompMusicParam.path = accompMusicUrl;
    // Whether to publish the accompaniment to remote (otherwise play locally only).
    accompMusicParam.publish = YES;
    // Music start playing time point (in milliseconds).
    accompMusicParam.startTimeMS = 0;
    
    // Preload the original vocal music.
    [audioEffectManager preloadMusic:originMusicParam onProgress:nil onError:nil];
    // Preload the accompaniment music.
    [audioEffectManager preloadMusic:accompMusicParam onProgress:nil onError:nil];
    
// Start playing the original vocal music after a delayed playback time (for example, 3 seconds).
[audioEffectManager startPlayMusic:originMusicParam onStart:^(NSInteger errCode) {
// onStart
} onProgress:^(NSInteger progressMs, NSInteger durationMs) {
// onProgress
} onComplete:^(NSInteger errCode) {
// onComplete
}];

// Start playing the accompaniment music after a delayed playback time (for example, 3 seconds).
[audioEffectManager startPlayMusic:accompMusicParam onStart:^(NSInteger errCode) {
// onStart
} onProgress:^(NSInteger progressMs, NSInteger durationMs) {
// onProgress
} onComplete:^(NSInteger errCode) {
// onComplete
}];

// Switch to the original vocal: original at full volume, accompaniment muted.
[audioEffectManager setMusicPlayoutVolume:originMusicId volume:100];
[audioEffectManager setMusicPublishVolume:originMusicId volume:100];
[audioEffectManager setMusicPlayoutVolume:accompMusicId volume:0];
[audioEffectManager setMusicPublishVolume:accompMusicId volume:0];

// Switch to the accompaniment: accompaniment at full volume, original muted.
[audioEffectManager setMusicPlayoutVolume:originMusicId volume:0];
[audioEffectManager setMusicPublishVolume:originMusicId volume:0];
[audioEffectManager setMusicPlayoutVolume:accompMusicId volume:100];
[audioEffectManager setMusicPublishVolume:accompMusicId volume:100];
    Note:
    It is recommended to preload music before starting playback. By loading music resources into memory in advance, you can effectively reduce the load delay of music playback.
    In karaoke scenarios, both the original vocal and accompaniment need to be played simultaneously (distinguished by MusicID). The switch between the original vocal and accompaniment is achieved by adjusting the local and remote playback volumes.
    If the music being played has dual audio tracks (including both the original vocal and accompaniment), switching between them can be achieved by specifying the music's playback track using setMusicTrack.
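For dual-track files, the volume-based switch above collapses into a single track selection call. A minimal sketch, assuming a `setMusicTrack`-style API on `TXAudioEffectManager` that takes the music ID and a track index (the index-to-track mapping depends on the file, and the exact method signature may differ; check your SDK version's header):

```objective-c
// Hypothetical sketch: switch playback tracks of a dual-track music file.
TXAudioEffectManager *audioEffectManager = [self.subCloud getAudioEffectManager];
// Select the original vocal track (track 0 in this example's layout).
[audioEffectManager setMusicTrack:musicId index:0];
// Select the accompaniment track (track 1 in this example's layout).
[audioEffectManager setMusicTrack:musicId index:1];
```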
8. Accompaniment synchronization
    // Agreed chorus start time.
    @property (nonatomic, assign) NSInteger startPlayMusicTS;
    
    - (void)syncBgmExample {
    // Actual playback progress of the current accompaniment music.
    NSInteger currentProgress = [[self.subCloud getAudioEffectManager] getMusicCurrentPosInMS:self.musicId];
    // Ideal playback progress of the current accompaniment music.
    NSInteger estimatedProgress = [TXLiveBase getNetworkTimestamp] - self.startPlayMusicTS;
    // When the progress difference exceeds 50 ms, corrections are made.
    if (estimatedProgress >= 0 && labs(currentProgress - estimatedProgress) > 50) {
    [[self.subCloud getAudioEffectManager] seekMusicToPosInMS:self.musicId pts:estimatedProgress];
    }
    }
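The correction above only works if it runs repeatedly during playback. One way to drive it, sketched with a repeating NSTimer (the 1-second interval and the `syncTimer` property are assumptions, not SDK requirements):

```objective-c
// Periodically re-check and correct the accompaniment progress.
self.syncTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                  target:self
                                                selector:@selector(syncBgmExample)
                                                userInfo:nil
                                                 repeats:YES];
// Remember to invalidate the timer when playback stops:
// [self.syncTimer invalidate];
```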
    9. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Synchronize local lyrics, and transmit song progress via SEI.
    [[self.subCloud getAudioEffectManager] startPlayMusic:musicParam onStart:^(NSInteger errCode) {
    // Start playing music.
    } onProgress:^(NSInteger progressMs, NSInteger durationMs) {
    // Determine whether seek is needed based on the latest progress and the local lyrics progress deviation.
    // Song progress is transmitted by sending an SEI message.
    NSDictionary *dic = @{
    @"musicId": @(self.musicId),
    @"progress": @(progressMs),
    @"duration": @(durationMs),
    };
    JSONModel *json = [[JSONModel alloc] initWithDictionary:dic error:nil];
    [self.trtcCloud sendSEIMsg:json.toJSONData repeatCount:1];
    } onComplete:^(NSInteger errCode) {
    // Music playback completed.
    }];
    Note:
    The frequency of the SEI messages sent by the performer is determined by the event callback frequency. Also, the playback progress can be actively synchronized on a schedule through getMusicCurrentPosInMS.
    10. Become a listener and exit the room.
    - (void)exitRoomExample {
// The sub-instance uses the experimental API to disable chorus mode.
[self.subCloud callExperimentalAPI:@"{\"api\":\"enableChorus\",\"params\":{\"enable\":false,\"audioSource\":1}}"];
// The sub-instance uses the experimental API to disable low-latency mode.
[self.subCloud callExperimentalAPI:@"{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":false}}"];
// The sub-instance switches to the audience role.
[self.subCloud switchRole:TRTCRoleAudience];
// The sub-instance stops playing accompaniment music.
[[self.subCloud getAudioEffectManager] stopPlayMusic:self.musicId];
// The sub-instance exits the room.
[self.subCloud exitRoom];
// The primary instance uses the experimental API to disable black frame insertion.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableBlackStream\",\"params\":{\"enable\":false}}"];
// The primary instance uses the experimental API to disable chorus mode.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableChorus\",\"params\":{\"enable\":false,\"audioSource\":0}}"];
// The primary instance uses the experimental API to disable low-latency mode.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":false}}"];
    // The primary instance switches to the audience role.
    [self.trtcCloud switchRole:TRTCRoleAudience];
    // The primary instance stops local audio capture and publishing.
    [self.trtcCloud stopLocalAudio];
    // The primary instance exits the room.
    [self.trtcCloud exitRoom];
    }

    Perspective 2: Chorus actions

    Sequence diagram

    
    
    
    1. Enter the room.
    - (void)enterRoomWithRoomId:(NSString *)roomId userID:(NSString *)userId {
    TRTCParams *params = [[TRTCParams alloc] init];
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = [self generateUserSig:userId];
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // Example of entering the room as an audience role.
    params.role = TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    NSLog(@"Enter room succeed!");
    } else {
    // result indicates the error code when you fail to enter the room.
    NSLog(@"Enter room failed!");
    }
    }
    2. Go live on streams.
// Switch to the anchor role.
    [self.trtcCloud switchRole:TRTCRoleAnchor];
    
    // Event callback for switching the role.
- (void)onSwitchRole:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
if (errCode == ERR_NULL) {
// Cancel the subscription to music streams published by the lead singer's sub-instance.
[self.trtcCloud muteRemoteAudio:self.bgmUserId mute:YES];
// Use the experimental API to enable chorus mode.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableChorus\",\"params\":{\"enable\":true,\"audioSource\":0}}"];
// Use the experimental API to enable low-latency mode.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":true}}"];
// Set the media volume type.
[self.trtcCloud setSystemVolumeType:TRTCSystemVolumeTypeMedia];
// Publish local audio streams and set the audio quality.
[self.trtcCloud startLocalAudio:TRTCAudioQualityMusic];
}
}
    Note:
    To minimize delay, all chorus members play the accompaniment music locally. Therefore, it is necessary to cancel subscriptions to music streams published by the lead singer.
    Chorus members also need to use the experimental API to enable chorus mode and low-latency mode to optimize the chorus experience.
    In karaoke scenarios, it is recommended to set the full-range media volume and music quality to achieve a high-fidelity listening experience.
    3. NTP synchronization.
    - (void)updateNetworkTimeExample {
    [TXLiveBase sharedInstance].delegate = self;
    [TXLiveBase updateNetworkTime];
    }
    
    - (void)onUpdateNetworkTime:(int)errCode message:(NSString *)errMsg {
    // errCode 0: Time synchronization successful and deviation within 30 ms. 1: Time synchronization successful but deviation possibly above 30 ms. -1: Time synchronization failed.
    if (errCode == 0) {
    // Time synchronization successful and NTP timestamp obtained.
    NSInteger ntpTime = [TXLiveBase getNetworkTimestamp];
    } else {
    NSLog(@"Time synchronization failed, and you can try re-synchronization.");
    }
    }
    Note:
    NTP time synchronization results can reflect the current network quality of the application user. To ensure a good chorus experience, it is recommended not to allow users to participate in the chorus if time synchronization fails.
    4. Receive chorus signaling.
    - (void)onRecvCustomCmdMsgUserId:(NSString *)userId cmdID:(NSInteger)cmdID seq:(UInt32)seq message:(NSData *)message {
    JSONModel *json = [[JSONModel alloc] initWithData:message error:nil];
    NSDictionary *dic = json.toDictionary;
    // Match the chorus signaling.
    if ([dic[@"cmd"] isEqualToString:@"startChorus"]) {
    self.startPlayMusicTS = [dic[@"startPlayMusicTS"] integerValue];
    self.musicId = [dic[@"musicId"] intValue];
    self.musicDuration = [dic[@"musicDuration"] intValue];
    // Agree on the time difference between chorus time and current time.
    self.delayMs = self.startPlayMusicTS - [TXLiveBase getNetworkTimestamp];
    }
    }
    Note:
Once a chorus member receives the chorus signaling and joins in, the status should be changed to Chorus In Progress, and further chorus signaling should be ignored until the current round ends.
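The de-duplication described in the note can be a simple flag guard at the top of the signaling handler shown above. A sketch, assuming a hypothetical `isChorusInProgress` property that your business logic resets when the round ends:

```objective-c
// Ignore repeated chorus signaling while a round is already in progress.
// isChorusInProgress is a hypothetical property managed by your business logic.
if ([dic[@"cmd"] isEqualToString:@"startChorus"]) {
    if (self.isChorusInProgress) {
        return; // Already joined this round; do not respond again.
    }
    self.isChorusInProgress = YES;
    // ... parse startPlayMusicTS / musicId / musicDuration as shown above ...
}
```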
    5. Play accompaniment, and start chorus.
    - (void)playBmgExample {
    // Chorus has not started.
    if (self.delayMs > 0) {
    // Begin to preload music.
    [self preloadMusicWithStartTimeMS:0];
    // Play music after a delay of delayMs.
    [self startPlayMusicWithStartTimeMS:0];
    } else if (labs(self.delayMs) < self.musicDuration) {
    // Chorus is in progress.
    // Play start time: Absolute value of the time difference + preload delay (e.g., 400 ms).
    NSInteger startTimeMS = labs(self.delayMs) + 400;
    // Begin to preload music.
    [self preloadMusicWithStartTimeMS:startTimeMS];
    // Start playing music after a preload delay (e.g., 400 ms).
    [self startPlayMusicWithStartTimeMS:startTimeMS];
    } else {
    // Chorus has ended.
    // Joining the chorus is not allowed.
    }
    }
    
    // Preload music.
    - (void)preloadMusicWithStartTimeMS:(NSInteger)startTimeMS {
    // musicId: Obtained from chorus signaling. musicUrl: Corresponding music resource URL.
    TXAudioMusicParam *musicParam = [[TXAudioMusicParam alloc] init];
    musicParam.ID = self.musicId;
    musicParam.path = self.musicUrl;
    // Only local music playback.
    musicParam.publish = NO;
    musicParam.startTimeMS = startTimeMS;
    [self.audioEffectManager preloadMusic:musicParam onProgress:nil onError:nil];
    }
    
    // Begin to play music.
    - (void)startPlayMusicWithStartTimeMS:(NSInteger)startTimeMS {
    // musicId: Obtained from chorus signaling. musicUrl: Corresponding music resource URL.
    TXAudioMusicParam *musicParam = [[TXAudioMusicParam alloc] init];
    musicParam.ID = self.musicId;
    musicParam.path = self.musicUrl;
    // Only local music playback.
    musicParam.publish = NO;
    musicParam.startTimeMS = startTimeMS;
    [self.audioEffectManager startPlayMusic:musicParam onStart:nil onProgress:nil onComplete:nil];
    }
    Note:
    To minimize transmission delay as much as possible, chorus members perform along with the local playback of accompaniment music, and they do not need to publish or receive remote music.
Based on delayMs, the current chorus status can be determined. Developers must implement the delayed startPlayMusic call for each status themselves.
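For the "chorus has not started" branch, the delayed call mentioned in the note can be sketched with `dispatch_after` (the scheduling approach is an assumption for illustration; a hand-rolled timer works equally well):

```objective-c
// Sketch: preload immediately, then start playback once the agreed
// chorus start time arrives (delayMs > 0 means the start is in the future).
[self preloadMusicWithStartTimeMS:0];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                             (int64_t)(self.delayMs * NSEC_PER_MSEC)),
               dispatch_get_main_queue(), ^{
    [self startPlayMusicWithStartTimeMS:0];
});
```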
6. Accompaniment synchronization
    // Agreed chorus start time.
    @property (nonatomic, assign) NSInteger startPlayMusicTS;
    
    - (void)syncBgmExample {
    // Actual playback progress of the current accompaniment music.
    NSInteger currentProgress = [[self.trtcCloud getAudioEffectManager] getMusicCurrentPosInMS:self.musicId];
    // Ideal playback progress of the current accompaniment music.
    NSInteger estimatedProgress = [TXLiveBase getNetworkTimestamp] - self.startPlayMusicTS;
    // When the progress difference exceeds 50 ms, corrections are made.
    if (estimatedProgress >= 0 && labs(currentProgress - estimatedProgress) > 50) {
    [[self.trtcCloud getAudioEffectManager] seekMusicToPosInMS:self.musicId pts:estimatedProgress];
    }
    }
    7. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Local lyric synchronization.
    [self.audioEffectManager startPlayMusic:musicParam onStart:^(NSInteger errCode) {
    // Start playing music.
    } onProgress:^(NSInteger progressMs, NSInteger durationMs) {
    // TODO: The logic of updating the lyric control.
    // Determine whether seek in the lyrics control is needed based on the latest progress and the local lyrics progress deviation.
    } onComplete:^(NSInteger errCode) {
    // Music playback completed.
    }];
    8. Become a listener and exit the room.
- (void)exitRoomExample {
// Use the experimental API to disable chorus mode.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"enableChorus\",\"params\":{\"enable\":false,\"audioSource\":0}}"];
// Use the experimental API to disable low-latency mode.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":false}}"];
// Switch to the audience role.
[self.trtcCloud switchRole:TRTCRoleAudience];
// Stop playing accompaniment music.
[[self.trtcCloud getAudioEffectManager] stopPlayMusic:self.musicId];
// Stop local audio capture and publishing.
[self.trtcCloud stopLocalAudio];
// Exit the room.
[self.trtcCloud exitRoom];
}

    Perspective 3: Listener actions

    Sequence diagram

    
    
    
    1. Enter the room.
    - (void)enterRoomWithRoomId:(NSString *)roomId userID:(NSString *)userId {
    TRTCParams *params = [[TRTCParams alloc] init];
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = [self generateUserSig:userId];
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // It is recommended to enter the room as an audience role.
    params.role = TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    NSLog(@"Enter room succeed!");
    } else {
    // result indicates the error code when you fail to enter the room.
    NSLog(@"Enter room failed!");
    }
    }
    Note:
    To better transmit SEI messages for lyric synchronization, it is recommended to choose TRTCAppSceneLIVE for room entry scenarios.
Under the automatic subscription mode (default), audiences automatically subscribe to and play the on-mic anchors' audio and video streams upon entering the room.
    2. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Listener end lyric synchronization
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
    if (available) {
    [self.trtcCloud startRemoteView:userId view:nil];
    } else {
    [self.trtcCloud stopRemoteView:userId];
    }
    }
    
    - (void)onRecvSEIMsg:(NSString *)userId message:(NSData *)message {
    JSONModel *json = [[JSONModel alloc] initWithData:message error:nil];
    NSDictionary *dic = json.toDictionary;
    int32_t musicId = [dic[@"musicId"] intValue];
    NSInteger progress = [dic[@"progress"] integerValue];
    NSInteger duration = [dic[@"duration"] integerValue];
    // ......
    // TODO: The logic of updating the lyric control.
    // Based on the received latest progress and the local lyrics progress deviation, determine whether a lyric control seek is necessary.
    // ......
    }
    Note:
    Listeners need to actively subscribe to the lead singer's video streams in order to receive the SEI messages carried by black frames.
    If the lead singer's mixed stream also mixes in black frames, then only subscribing to the mixing stream robot's video stream is required.
    3. Exit the room.
    // Exit the room.
    [self.trtcCloud exitRoom];
    
    // Exit room event callback.
    - (void)onExitRoom:(NSInteger)reason {
    if (reason == 0) {
    NSLog(@"Proactively call exitRoom to exit the room.");
    } else if (reason == 1) {
    NSLog(@"Removed from the current room by the server.");
    } else if (reason == 2) {
    NSLog(@"The current room is dissolved.");
    }
    }

    Advanced Features

    Music scoring module integration

    Music scoring provides users with multi-dimensional singing scoring capabilities. Currently, supported scoring dimensions include intonation and rhythm.
    1. Prepare scoring-related files.
    Prepare in advance the performance recording files to be scored, original music standard files, MIDI pitch files, and upload them to COS storage.
    2. Create a music scoring task.
Request Method: POST (HTTP).
Request Header: Content-Type: application/json.
Request sample:
    {
    "action": "CreateJob",
    "secretId": "{secretId}",
    "secretKey": "{secretKey}",
    "createJobRequest": {
    "customId": "{customId}",
    "callback": "{callback}",
    "inputs": [{ "url": "{url}" }],
    "outputs": [
    {
    "contentId": "{contentId}",
    "destination": "{destination}",
    "inputSelectors": [0],
    "smartContentDescriptor": {
    "outputPrefix": "{outputPrefix}",
    "vocalScore": {
    "standardAudio": {
    "midi": {"url":"{url}"},
    "standardWav": {"url":"{url}"},
    "alignWav": {"url":"{url}"}
    }
    }
    }
    }
    ]
    }
    }
Response sample:
{
    "requestId": "ac004192-110b-46e3-ade8-4e449df84d60",
    "createJobResponse": {
    "job": {
    "id": "13f342e4-6866-450e-b44e-3151431c578b",
    "state": 1,
    "customId": "{customId}",
    "callback": "{callback}",
    "inputs": [{ "url": "{url}" }],
    "outputs": [
    {
    "contentId": "{contentId}",
    "destination": "{destination}",
    "inputSelectors": [0],
    "smartContentDescriptor": {
    "outputPrefix": "{outputPrefix}",
    "vocalScore": {
    "standardAudio": {
    "midi": {"url":"{url}"},
    "standardWav": {"url":"{url}"},
    "alignWav": {"url":"{url}"}
    }
    }
    }
    }
    ],
    "timing": {
    "createdAt": "1603432763000",
    "startedAt": "0",
    "completedAt": "0"
    }
    }
    }
    }
    3. Obtain music scoring results.
Obtaining results: results can be fetched by active query or delivered by passive callback.
Active query: query with the job ID returned in the response when the task was created. If the queried task has succeeded (state = 3), the task's Output carries a smartContentResult structure whose vocalScore field stores the result JSON file name. You can construct the output file's COS path from the COS and destination information in Output.
Request sample:
    {
    "action": "GetJob",
    "secretId": "{secretId}",
    "secretKey": "{secretKey}",
    "getJobRequest": {
    "id": "{id}"
    }
    }
Response sample:
{
    "requestId": "c9845a99-34e3-4b0f-80f5-f0a2a0ee8896",
    "getJobResponse": {
    "job": {
    "id": "a95e9d74-6602-4405-a3fc-6408a76bcc98",
    "state": 3,
    "customId": "{customId}",
    "callback": "{callback}",
    "timing": {
    "createdAt": "1610513575000",
    "startedAt": "1610513575000",
    "completedAt": "1610513618000"
    },
    "inputs": [{ "url": "{url}" }],
    "outputs": [
    {
    "contentId": "{contentId}",
    "destination": "{destination}",
    "inputSelectors": [0],
    "smartContentDescriptor": {
    "outputPrefix": "{outputPrefix}",
    "vocalScore": {
    "standardAudio": {
    "midi": {"url":"{url}"},
    "standardWav": {"url":"{url}"},
    "alignWav": {"url":"{url}"}
    }
    }
    },
    "smartContentResult": {
    "vocalScore": "out.json"
    }
    }
    ]
    }
    }
    }
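As described above, the output file's COS path is assembled from fields in the Job output. A minimal sketch of that assembly (the parsed `job` dictionary and the exact concatenation rule are assumptions for illustration; verify against your bucket layout):

```objective-c
// Hypothetical sketch: assemble the scoring result's COS path from a
// GetJob response already parsed into Foundation objects.
// `output` is assumed to be one entry of getJobResponse.job.outputs.
NSDictionary *output = job[@"outputs"][0];
NSString *destination = output[@"destination"];
NSString *outputPrefix = output[@"smartContentDescriptor"][@"outputPrefix"];
NSString *resultFile = output[@"smartContentResult"][@"vocalScore"]; // e.g. "out.json"
// Assumed rule: destination + outputPrefix + result file name.
NSString *resultPath = [NSString stringWithFormat:@"%@%@%@",
                        destination, outputPrefix, resultFile];
```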
To use passive callbacks, fill in the callback field when creating the task. Once a task reaches a terminal state (COMPLETED/ERROR), the platform sends the entire Job structure to the address specified by callback; see the active query sample above for the Job structure (under getJobResponse). Passive callbacks are the recommended way to obtain task results.
    Note:
    For more detailed intelligent music solution integration instructions for the music scoring module, see Music Scoring Integration.

Transparent transmission of single stream volume in mixed streams

After mixed streaming is enabled, the audience can no longer obtain each on-mic anchor's individual stream volume directly. To pass the single-stream volumes through, the room owner can use SEI messages to transmit the volume values called back for all on-mic anchors.
    - (void)onUserVoiceVolume:(NSArray<TRTCVolumeInfo *> *)userVolumes totalVolume:(NSInteger)totalVolume {
    if (userVolumes.count) {
    // For storing volume values corresponding to on-mic users.
    NSMutableDictionary *volumesMap = [NSMutableDictionary dictionary];
    for (TRTCVolumeInfo *user in userVolumes) {
    // Can set an appropriate volume threshold.
    if (user.volume > 10) {
    volumesMap[user.userId] = @(user.volume);
    }
    }
    JSONModel *json = [[JSONModel alloc] initWithDictionary:volumesMap error:nil];
    // Transmit a collection of on-mic users' volume via SEI messages.
    [self.trtcCloud sendSEIMsg:json.toJSONData repeatCount:1];
    }
    }
    
    - (void)onRecvSEIMsg:(NSString *)userId message:(NSData *)message {
    JSONModel *json = [[JSONModel alloc] initWithData:message error:nil];
    NSDictionary *dic = json.toDictionary;
    for (NSString *userId in dic.allKeys) {
    // Print the volume levels of single streams of all on-mic users.
    NSLog(@"%@: %@", userId, dic[userId]);
    }
    }
    Note:
    The prerequisite for using SEI messages to transparently transmit single stream volume through a mixed stream is that the room owner must either be video streaming or have black frame insertion enabled and furthermore, the audiences must actively subscribe to the room owner's video stream.

    Real-time network quality callback

You can listen for onNetworkQuality to monitor the network quality of both local and remote users in real time. This callback is fired every 2 seconds.
    #pragma mark - TRTCCloudDelegate
    
    - (void)onNetworkQuality:(TRTCQualityInfo *)localQuality remoteQuality:(NSArray<TRTCQualityInfo *> *)remoteQuality {
    // localQuality represents the local user's network quality evaluation result.
    // remoteQuality represents the remote user's network quality evaluation result. The result is affected by both remote and local factors.
    switch(localQuality.quality) {
    case TRTCQuality_Unknown:
    NSLog(@"Undefined.");
    break;
    case TRTCQuality_Excellent:
    NSLog(@"The current network is excellent.");
    break;
    case TRTCQuality_Good:
    NSLog(@"The current network is good.");
    break;
    case TRTCQuality_Poor:
    NSLog(@"The current network is moderate.");
    break;
    case TRTCQuality_Bad:
    NSLog(@"The current network is poor.");
    break;
    case TRTCQuality_Vbad:
    NSLog(@"The current network is very poor.");
    break;
    case TRTCQuality_Down:
    NSLog(@"The current network does not meet the minimum requirements of TRTC.");
    break;
    default:
    break;
    }
    }

    Advanced permission control

    TRTC advanced permission control can be used to set different entry permissions for different rooms, such as advanced VIP rooms. It can also be used to control the permission for the audience to speak, such as handling ghost microphones.
    Step 1: Enable the Advanced Permission Control Switch in the TRTC console application's advanced features page.
    
    
    
    Note:
    Once advanced permission control is enabled for a certain SDKAppID, all users using that SDKAppID need to pass in the privateMapKey parameter in TRTCParams to successfully enter the room. Therefore, if you have users online using this SDKAppID, do not enable this feature.
    Step 2: Generate privateMapKey on the backend. For sample code, see privateMapKey computation source code.
    Step 3: Room entry verification & speaking permission verification with PrivateMapKey.
    Room entry verification
    TRTCParams *params = [[TRTCParams alloc] init];
    params.sdkAppId = SDKAppID;
    params.roomId = self.roomId;
    params.userId = self.userId;
    // UserSig obtained from the business backend.
    params.userSig = [self getUserSig];
    // PrivateMapKey obtained from the backend.
    params.privateMapKey = [self getPrivateMapKey];
    params.role = TRTCRoleAudience;
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    Speaking permission verification
    // Pass in the latest PrivateMapKey obtained from the backend into the role switching API.
    [self.trtcCloud switchRole:TRTCRoleAnchor privateMapKey:[self getPrivateMapKey]];

    Exception Handling

    Exception error handling

    When the TRTC SDK encounters an unrecoverable error, the error will be thrown in the onError callback. For details, see Error Code Table.
    1. UserSig related
    UserSig verification failure will lead to room-entering failure. You can use the UserSig tool for verification.
ERR_TRTC_INVALID_USER_SIG (-3320): Room entry parameter userSig is incorrect. Check whether TRTCParams.userSig is empty.
ERR_TRTC_USER_SIG_CHECK_FAILED (-100018): UserSig verification failed. Check whether TRTCParams.userSig is filled in correctly or has expired.
    2. Room entry and exit related
If room entry fails, first verify that the room entry parameters are correct. The room entry and exit APIs must be called in pairs: even if room entry fails, the room exit API must still be called.
ERR_TRTC_CONNECT_SERVER_TIMEOUT (-3308): Room entry request timed out. Check whether your internet connection is lost or a VPN is enabled. You may also switch to 4G for testing.
ERR_TRTC_INVALID_SDK_APPID (-3317): Room entry parameter sdkAppId is incorrect. Check whether TRTCParams.sdkAppId is empty.
ERR_TRTC_INVALID_ROOM_ID (-3318): Room entry parameter roomId is incorrect. Check whether TRTCParams.roomId or TRTCParams.strRoomId is empty. Note that roomId and strRoomId cannot be used interchangeably.
ERR_TRTC_INVALID_USER_ID (-3319): Room entry parameter userId is incorrect. Check whether TRTCParams.userId is empty.
ERR_TRTC_ENTER_ROOM_REFUSED (-3340): Room entry request is denied. Check whether enterRoom is called consecutively to enter rooms with the same ID.
    3. Device related
Monitor device-related errors and prompt the user via the UI when they occur.
ERR_MIC_START_FAIL (-1302): Failed to open the mic. For example, if the mic's configuration program (driver) on a Windows or macOS device has an exception, try disabling and re-enabling the device, restarting the machine, or updating the driver.
ERR_SPEAKER_START_FAIL (-1321): Failed to open the speaker. For example, if the speaker's configuration program (driver) on a Windows or macOS device has an exception, try disabling and re-enabling the device, restarting the machine, or updating the driver.
ERR_MIC_OCCUPY (-1319): The mic is occupied, for example, when the user is currently in a call on the mobile device.

    Issues with IEMs

1. How do I enable the IEM feature and set its volume?
    // Enable IEMs.
    [[self.trtcCloud getAudioEffectManager] enableVoiceEarMonitor:YES];
    // Set the volume of IEMs.
    [[self.trtcCloud getAudioEffectManager] setVoiceEarMonitorVolume:volume];
    Note:
IEM can be set up in advance without monitoring audio route changes. Once headphones are connected, the IEM feature automatically takes effect.
2. The IEM feature does not take effect after being enabled.
Due to the high hardware delay of Bluetooth headphones, it is recommended to prompt the anchor, via the UI, to wear wired headphones. Also note that not all smartphones deliver a good IEM experience with this feature enabled; the TRTC SDK has already disabled it on some smartphones where the effect is poor.
    3. High IEM delay
Check whether Bluetooth headphones are in use; due to their high hardware delay, wired headphones are recommended. You can also try reducing IEM delay by enabling hardware IEM through the experimental API setSystemAudioKitEnabled. Hardware IEM performs better and has lower delay; software IEM has higher delay but better compatibility. Currently, the SDK defaults to hardware IEM on Huawei and vivo devices and to software IEM on other devices. If hardware IEM has compatibility issues, contact us to force the use of software IEM.
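The hardware IEM mentioned above is toggled through the same experimental API channel used elsewhere in this document. A sketch, assuming the API takes no additional parameters (the exact JSON payload is an assumption; verify it against your SDK version's release notes):

```objective-c
// Attempt to enable hardware in-ear monitoring via the experimental API.
// The JSON payload shape is an assumption; check your SDK version's notes.
[self.trtcCloud callExperimentalAPI:@"{\"api\":\"setSystemAudioKitEnabled\"}"];
```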

    Issues with NTP sync

1. NTP time sync finished, but the result may be inaccurate.
NTP sync succeeded, but the deviation may still exceed 30 milliseconds. This indicates a poor client network environment with persistent RTT jitter.
    2. Error in AddressResolver: No address associated with hostname
NTP sync failed, possibly due to a temporary exception in the local ISP's DNS resolution under the current network environment. Try again later.
    3. NTP service retry processing logic.
    
    
    

    Issues with real-time chorus usage

    1. Why does the lead singer in real-time chorus scenarios need to use dual-instance streaming?
In real-time chorus scenarios, to minimize end-to-end delay and keep vocals and accompaniment in sync, a common approach is for the lead singer to use dual instances to upload the vocal stream and the accompaniment stream separately, while other chorus participants upload only their vocal streams and play the accompaniment locally. Each chorus participant then needs to subscribe to the lead singer's vocal stream while not subscribing to the lead singer's music stream, which is only possible with dual-instance separate streaming.
    2. Why is it recommended to enable mixing pushback in real-time chorus scenarios?
    If the audience pulls multiple single streams at the same time, the vocal streams and the accompaniment stream can easily drift out of alignment. Pulling a single mixed stream guarantees that all streams stay aligned and also reduces downstream bandwidth.
    3. What are the uses of SEI in real-time chorus scenarios?
    Transmitting the accompaniment playback progress, so the audience's end can synchronize lyrics.
    Transparently passing each single stream's volume through the mixed stream, so the listener's end can render sound waves.
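    For example, the lead singer can periodically pack the accompaniment progress into an SEI message. The JSON payload below is an illustrative assumption; any compact encoding works.
    // Read the current accompaniment position and broadcast it via SEI.
    NSInteger posMs = [[self.trtcCloud getAudioEffectManager] getMusicCurrentPosInMS:self.musicId];
    NSDictionary *payload = @{@"musicId" : @(self.musicId), @"progress" : @(posMs)};
    NSData *sei = [NSJSONSerialization dataWithJSONObject:payload options:0 error:nil];
    [self.trtcCloud sendSEIMsg:sei repeatCount:1];
    // Audience side: parse the SEI payload in onRecvSEIMsg and seek the lyric view to the received progress.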
    4. Loading accompaniment music takes a long time, causing noticeable playback delay?
    Loading network music resources via the SDK incurs a certain delay. It is recommended to initiate music pre-loading before starting playback.
    [[self.trtcCloud getAudioEffectManager] preloadMusic:musicParam onProgress:nil onError:nil];
    5. When singing along with accompaniment, the vocals are barely audible. Is the music overwhelming the vocals?
    If the default volume settings result in the accompaniment overwhelming the vocals, it is recommended to adjust the volume balance between the music and vocals accordingly.
    // Set the local playback volume of a piece of background music.
    [[self.trtcCloud getAudioEffectManager] setMusicPlayoutVolume:self.musicId volume:volume];
    // Set the remote playback volume of a specific background music.
    [[self.trtcCloud getAudioEffectManager] setMusicPublishVolume:self.musicId volume:volume];
    // Set the local and remote volume of all background music.
    [[self.trtcCloud getAudioEffectManager] setAllMusicVolume:volume];
    // Set the volume of voice capture.
    [[self.trtcCloud getAudioEffectManager] setVoiceVolume:volume];
    
    Contact Us

    Contact our sales team or business advisors for help with your business needs.

    Technical Support

    Open a ticket if you're looking for further assistance. Our ticket service is available 7x24.

    7x24 Phone Support