Last updated: 2024-07-18 14:26:14

    Business Process

    This section summarizes some common business processes in online karaoke, helping you better understand the implementation process of the entire scenario.
    Song request process
    Solo singing process
    Lead singer process
    Chorus process
    Audience process
    The following figure shows the process of requesting songs from a music repository on the business side and playing them using the TRTC SDK.
    
    
    
    The following figure shows the flow of the solo singing turn-taking scenario, that is, the performer enters a room to perform, stops performing, and exits the room.
    
    
    
    The following figure shows the flow of the real-time chorus scenario from the lead singer's side, that is, the lead singer initiates a chorus, stops the chorus, and exits the room.
    
    The following figure shows the flow of the real-time chorus scenario from the chorus members' side, that is, the chorus members join the chorus, stop the chorus, and exit the room.
    
    
    
    The following figure shows the flow of the online karaoke scenario from the audience's side, that is, the audience enters the room to listen to songs and synchronize lyrics.
    
    
    

    Integration Preparation

    Step 1: Activating the service.

    Online karaoke scenarios are usually built on two paid PaaS services from Tencent Cloud: Tencent Real-Time Communication (TRTC) and the Intelligent Music Solution. TRTC provides real-time audio and video interaction capabilities, while the Intelligent Music Solution provides lyric recognition, smart composition, music recognition, and music scoring capabilities.
    Activate TRTC service.
    Activate the Intelligent Music service.
    1. First, you need to log in to the Tencent Real-Time Communication (TRTC) console to create an application. You can choose to upgrade the TRTC application version according to your needs. For example, the professional edition unlocks more value-added feature services.
    
    
    
    Note:
    It is recommended to create two applications for testing and production environments, respectively. Each Tencent Cloud account (UIN) is given 10,000 minutes of free duration every month for one year.
    TRTC offers monthly subscription plans including the experience edition (default), basic edition, and professional edition. Different value-added feature services can be unlocked. For details, see Version Features and Monthly Subscription Plan Instructions.
    2. After an application is created, you can see its basic information in the Application Management - Application Overview section. Keep the SDKAppID and SDKSecretKey for later use, and store the key securely to prevent leakage that could lead to traffic theft.
    
    
    

    Preparation

    1. Go to the Purchase Page to activate the music service, and choose the appropriate features to enable, such as music scoring.
    2. Create an AK/SK key pair in CAM (that is, a programmatic access user that does not require console login or any user permissions).
    3. Create a COS Bucket, and in the COS Bucket Management interface, authorize the read and write permissions of the COS Bucket to the created programmable access user.
    4. Prepare the parameters.
    operateUin: Tencent Cloud sub-user's account ID.
    cosConfig: COS related parameters.
    secretId: Bucket's secretId.
    secretKey: Bucket's secretKey.
    bucket: Bucket's name.
    region: Bucket's region, for example, ap-guangzhou.

    Activation and registration.

    After the preparation is completed, initiate a request to register and activate the service. Activation takes about 2 minutes.
    Initiate request.
    Request result:
    curl -X POST \
    http://service-mqk0mc83-1257411467.bj.apigw.tencentcs.com/release/register \
    -H 'Content-Type: application/json' \
    -H 'Cache-control: no-cache' \
    -d '{
    "requestId": "test-regisiter-service",
    "action": "Register",
    "registerRequest": {
    "operateUin": <operateUin>,
    "userName": <customedName>,
    "cosConfig": {
    "secretId": <CosConfig.secretId>,
    "secretKey": <CosConfig.secretKey>,
    "bucket": <CosConfig.bucket>,
    "region": <CosConfig.region>
    }
    }
    }'
    {
    "requestId": "test-regisiter-service",
    "registerInfo": {
    "tmpContentId": <tmpContentId>,
    "tmpSecretId": <tmpSecretId>,
    "tmpSecretKey": <tmpSecretKey>,
    "apiGateSecretId": <apiGateSecretId>,
    "apiGateSecretKey": <apiGateSecretKey>,
    "demoCosPath": "UIN_demo/run_musicBeat.py",
    "usageDescription": "Download the python version demo file [UIN_demo/run_musicBeat.py] from the COS bucket [CosConfig.bucket], replace the input file in the demo, and then execute python run_musicBeat.py",
    "message": "Registration successful, and thank you for registering.",
    "createdAt": <createdAt>,
    "updatedAt": <updatedAt>
    }
    }

    Run verification.

    After the activation and registration above are completed, an executable Python demo based on the music beat recognition capability is generated in the demoCosPath directory. Run the command python run_musicBeat.py in a networked environment to verify the setup.
    Note:
    For more detailed intelligent music solution integration instructions, see Integration Guide.

    Step 2: Importing SDK.

    The TRTC SDK has been released to the mavenCentral repository, and you can configure Gradle to download and update automatically.
    1. Add the dependency for the appropriate version of the SDK in dependencies.
    dependencies {
    // TRTC Lite SDK. It includes TRTC and live streaming playback features and is compact in size.
    implementation 'com.tencent.liteav:LiteAVSDK_TRTC:latest.release'
    // TRTC Professional SDK. It also includes live streaming, short video, video on demand, and other features, and is slightly larger in size.
    // implementation 'com.tencent.liteav:LiteAVSDK_Professional:latest.release'
    }
    Note:
    Besides the recommended automatic loading method, you can also choose to download the SDK and manually import it. For details, see Manually Integrating the TRTC SDK.
    2. Specify the CPU architecture used by the app in defaultConfig.
    defaultConfig {
    ndk {
    abiFilters "armeabi-v7a", "arm64-v8a"
    }
    }
    Note:
    The TRTC SDK supports architectures including armeabi, armeabi-v7a and arm64-v8a. Additionally, it supports architectures for simulators including x86 and x86_64.

    Step 3: Project configuration.

    1. Configure permissions.
    Configure app permissions in AndroidManifest.xml. For karaoke scenarios, the TRTC SDK requires the following permissions:
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.BLUETOOTH" />
    Note:
    The TRTC SDK does not have built-in permission request logic. You need to declare the corresponding permissions and features yourself. Some permissions (such as storage and recording) also require dynamic requests at runtime.
    If the Android project's targetSdkVersion is 31 or higher, or if the target device runs Android 12 or a newer version, the official requirement is to dynamically request android.permission.BLUETOOTH_CONNECT permission in the code to use the Bluetooth feature properly. For more information, see Bluetooth Permissions.
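    For reference, a minimal sketch (not part of the TRTC SDK) of requesting these permissions at runtime with the AndroidX ActivityCompat/ContextCompat APIs; the method name and request code below are illustrative:
    private static final int REQUEST_CODE_KTV_PERMISSIONS = 1001; // illustrative request code
    
    private void requestKtvPermissions(Activity activity) {
        ArrayList<String> needed = new ArrayList<>();
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            needed.add(Manifest.permission.RECORD_AUDIO);
        }
        // BLUETOOTH_CONNECT only exists and is required on Android 12 (API 31) and later.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S
                && ContextCompat.checkSelfPermission(activity, Manifest.permission.BLUETOOTH_CONNECT)
                != PackageManager.PERMISSION_GRANTED) {
            needed.add(Manifest.permission.BLUETOOTH_CONNECT);
        }
        if (!needed.isEmpty()) {
            ActivityCompat.requestPermissions(activity,
                    needed.toArray(new String[0]), REQUEST_CODE_KTV_PERMISSIONS);
        }
    }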
    2. Obfuscation configuration.
    Since we use Java's reflection features inside the SDK, you need to add relevant SDK classes to the non-obfuscation list in the proguard-rules.pro file:
    -keep class com.tencent.** { *; }

    Step 4: Authentication and authorization.

    UserSig is a security protection signature designed by Tencent Cloud to prevent malicious attackers from misappropriating your cloud service usage rights. TRTC validates this credential when a user enters a room.
    Debugging Stage: UserSig can be generated through two methods for debugging and testing purposes only: client sample code and console access.
    Formal Operation Stage: It is recommended to use a higher security level server computation for generating UserSig. This is to prevent key leakage due to client reverse engineering.
    The specific implementation process is as follows:
    1. Before calling the SDK's initialization function, your app must first request UserSig from your server.
    2. Your server computes the UserSig based on the SDKAppID and UserID.
    3. The server returns the computed UserSig to your app.
    4. Your app passes the obtained UserSig into the SDK through a specific API.
    5. The SDK submits the SDKAppID + UserID + UserSig to Tencent Cloud CVM for verification.
    6. Tencent Cloud verifies the UserSig and confirms its validity.
    7. After the verification is passed, real-time audio and video services will be provided to the TRTC SDK.
    
    
    
    Note:
    The local computation of UserSig used during the debugging stage is not recommended for the production environment, because the client is prone to reverse engineering, which can lead to key leakage.
    We provide server computation source code for UserSig in multiple programming languages (Java/GO/PHP/Nodejs/Python/C#/C++). For details, see Server Computation of UserSig.
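    For illustration only, a simplified sketch of steps 1-4 above. The endpoint URL and JSON field name are placeholders for your own business backend, which computes the UserSig with your SDKSecretKey (see Server Computation of UserSig):
    // Hypothetical helper: fetch UserSig from your own backend (call it off the main thread).
    private String requestUserSigFromServer(String userId) throws IOException, JSONException {
        // Placeholder endpoint; replace with your business backend's API.
        URL url = new URL("https://your.server.com/api/getUserSig?userId=" + userId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            String body = scanner.hasNext() ? scanner.next() : "";
            // Assumes the backend returns {"userSig":"..."}.
            return new JSONObject(body).getString("userSig");
        } finally {
            conn.disconnect();
        }
    }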

    Step 5: Initializing the SDK.

    // Create TRTC SDK instance (Single Instance Pattern).
    TRTCCloud mTRTCCloud = TRTCCloud.sharedInstance(context);
    // Set event listeners.
    mTRTCCloud.addListener(trtcSdkListener);
    
    // Notifications from various SDK events (e.g., error codes, warning codes, audio and video status parameters, etc.).
    private TRTCCloudListener trtcSdkListener = new TRTCCloudListener() {
    @Override
    public void onError(int errCode, String errMsg, Bundle extraInfo) {
    Log.d(TAG, errCode + errMsg);
    }
    @Override
    public void onWarning(int warningCode, String warningMsg, Bundle extraInfo) {
    Log.d(TAG, warningCode + warningMsg);
    }
    };
    
    // Remove event listener.
    mTRTCCloud.removeListener(trtcSdkListener);
    // Terminate TRTC SDK instance (Singleton Pattern).
    TRTCCloud.destroySharedInstance();
    Note:
    It is recommended to listen for SDK event notifications, and to log and handle common errors. For details, see Error Code Table.

    Scenario 1: Solo singing turn-taking

    Perspective 1: Performer actions

    Sequence diagram

    
    
    
    1. Enter the room.
    public void enterRoom(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // It is recommended to enter the room as an audience role.
    params.role = TRTCCloudDef.TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    Note:
    To better transmit SEI messages for lyrics synchronization, it is recommended to choose TRTC_APP_SCENE_LIVE for room-entry scenarios.
    // Event callback for the result of entering the room.
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    Log.d(TAG, "Enter room succeed");
    // Enable the experimental API for black frame insertion.
    mTRTCCloud.callExperimentalAPI("{\"api\":\"enableBlackStream\",\"params\":{\"enable\":true}}");
    } else {
    // result indicates the error code when you fail to enter the room.
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    In pure audio mode, the performer needs to enable black frame insertion to carry SEI messages. This API should be called after successfully entering the room.
    2. Go live on streams.
    // Switched to the anchor role.
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
    
    // Event callback for switching the role.
    @Override
    public void onSwitchRole(int errCode, String errMsg) {
    if (errCode == TXLiteAVCode.ERR_NULL) {
    // Set media volume type.
    mTRTCCloud.setSystemVolumeType(TRTCCloudDef.TRTCSystemVolumeTypeMedia);
    // Upstream local audio streams and set audio quality.
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_MUSIC);
    }
    }
    Note:
    In karaoke scenarios, it is recommended to set the full-range media volume and music quality to achieve a high-fidelity listening experience.
    3. Song selection and performance.
    Search for songs, and obtain music resources.
    Search for songs and acquire music resources through the business backend. Obtain identifiers such as the MusicId, the song's URL (MusicUrl), and the lyrics URL (LyricsUrl).
    It is recommended that the business side select an appropriate music repository product to provide licensed music resources.
    Play accompaniment and start singing.
    // Obtain audio effects management.
    TXAudioEffectManager mTXAudioEffectManager = mTRTCCloud.getAudioEffectManager();
    
    // originMusicId: Custom identifier for the original vocal music. originMusicUrl: URL of the original vocal music resource.
    TXAudioEffectManager.AudioMusicParam originMusicParam = new TXAudioEffectManager.AudioMusicParam(originMusicId, originMusicUrl);
    // Whether to publish the original vocal music to remote (otherwise play locally only).
    originMusicParam.publish = true;
    
    // accompMusicId: Custom identifier for the accompaniment music. accompMusicUrl: URL of the accompaniment music resource.
    TXAudioEffectManager.AudioMusicParam accompMusicParam = new TXAudioEffectManager.AudioMusicParam(accompMusicId, accompMusicUrl);
    // Whether to publish the accompaniment to remote (otherwise play locally only).
    accompMusicParam.publish = true;
    
    // Start playing the original vocal music.
    mTXAudioEffectManager.startPlayMusic(originMusicParam);
    // Start playing the accompaniment music.
    mTXAudioEffectManager.startPlayMusic(accompMusicParam);
    
    // Switch to the original vocal music.
    mTXAudioEffectManager.setMusicPlayoutVolume(originMusicId, 100);
    mTXAudioEffectManager.setMusicPlayoutVolume(accompMusicId, 0);
    mTXAudioEffectManager.setMusicPublishVolume(originMusicId, 100);
    mTXAudioEffectManager.setMusicPublishVolume(accompMusicId, 0);
    
    // Switch to the accompaniment music.
    mTXAudioEffectManager.setMusicPlayoutVolume(originMusicId, 0);
    mTXAudioEffectManager.setMusicPlayoutVolume(accompMusicId, 100);
    mTXAudioEffectManager.setMusicPublishVolume(originMusicId, 0);
    mTXAudioEffectManager.setMusicPublishVolume(accompMusicId, 100);
    Note:
    In karaoke scenarios, both the original vocal and accompaniment need to be played simultaneously (distinguished by MusicID). The switch between the original vocal and accompaniment is achieved by adjusting the local and remote playback volumes.
    If the music being played has dual audio tracks (including both the original vocal and accompaniment), switching between them can be achieved by specifying the music's playback track using setMusicTrack.
    4. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Synchronize local lyrics, and transmit song progress via SEI.
    mTXAudioEffectManager.setMusicObserver(musicId, new TXAudioEffectManager.TXMusicPlayObserver() {
    @Override
    public void onStart(int id, int errCode) {
    // Start playing music.
    }
    @Override
    public void onPlayProgress(int id, long curPtsMs, long durationMs) {
    // Determine whether seek is needed based on the latest progress and the local lyrics progress deviation.
    // Song progress is transmitted by sending an SEI message.
    try {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("musicId", id);
    jsonObject.put("progress", curPtsMs);
    jsonObject.put("duration", durationMs);
    mTRTCCloud.sendSEIMsg(jsonObject.toString().getBytes(), 1);
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    @Override
    public void onComplete(int id, int errCode) {
    // Music playback completed.
    }
    });
    Note:
    Be sure to set the playback event callback using this API before playing the background music, so that you are notified of the background music's playback progress.
    The frequency of the SEI messages sent by the performer is determined by the event callback frequency. Also, the playback progress can be actively synchronized on a schedule through getMusicCurrentPosInMS.
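    For example, a minimal sketch of the scheduled synchronization mentioned above: query the playback progress once per second and resend it via SEI. The one-second interval and JSON fields mirror the onPlayProgress example and are adjustable:
    Timer seiTimer = new Timer();
    seiTimer.schedule(new TimerTask() {
        @Override
        public void run() {
            // Query the current accompaniment progress and forward it over SEI.
            long progress = mTXAudioEffectManager.getMusicCurrentPosInMS(musicId);
            try {
                JSONObject jsonObject = new JSONObject();
                jsonObject.put("musicId", musicId);
                jsonObject.put("progress", progress);
                mTRTCCloud.sendSEIMsg(jsonObject.toString().getBytes(), 1);
            } catch (JSONException e) {
                e.printStackTrace();
            }
        }
    }, 0, 1000);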
    5. Become a listener and exit the room.
    // Switched to the audience role.
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    
    // Event callback for switching the role.
    @Override
    public void onSwitchRole(int errCode, String errMsg) {
    if (errCode == TXLiteAVCode.ERR_NULL) {
    // Stop playing accompaniment music.
    mTRTCCloud.getAudioEffectManager().stopPlayMusic(musicId);
    // Stop local audio capture and publishing.
    mTRTCCloud.stopLocalAudio();
    }
    }
    
    // Exit the room.
    mTRTCCloud.exitRoom();
    
    // Exit room event callback.
    @Override
    public void onExitRoom(int reason) {
    if (reason == 0) {
    Log.d(TAG, "Actively call exitRoom to exit the room.");
    } else if (reason == 1) {
    Log.d(TAG, "Removed from the current room by the server.");
    } else if (reason == 2) {
    Log.d(TAG, "The current room has been dissolved.");
    }
    }
    Note:
    After all resources occupied by the SDK are released, the SDK fires the onExitRoom callback to notify you.
    If you want to call enterRoom again or switch to another audio and video SDK, wait for the onExitRoom callback before proceeding. Otherwise, you may encounter exceptions such as the camera or microphone being forcibly occupied. A sketch of this pattern follows.
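    A minimal sketch of deferring the next action until onExitRoom arrives; the pendingAction field and leaveAndThen method are illustrative:
    private Runnable pendingAction;
    
    public void leaveAndThen(Runnable action) {
        pendingAction = action;   // e.g., () -> enterRoom(nextRoomId, userId)
        mTRTCCloud.exitRoom();
    }
    
    @Override
    public void onExitRoom(int reason) {
        if (pendingAction != null) {
            pendingAction.run();  // safe to re-enter a room or switch SDKs here
            pendingAction = null;
        }
    }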

    Perspective 2: Listener actions

    Sequence diagram

    
    
    
    1. Enter the room.
    public void enterRoom(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // It is recommended to enter the room as an audience role.
    params.role = TRTCCloudDef.TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Event callback for the result of entering the room.
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    Log.d(TAG, "Enter room succeed");
    } else {
    // result indicates the error code when you fail to enter the room.
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    To better transmit SEI messages for lyrics synchronization, it is recommended to choose TRTC_APP_SCENE_LIVE for room-entry scenarios.
    Under the automatic subscription mode (default), audiences automatically subscribe to and play the on-mic anchor's audio and video streams upon entering the room.
    2. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Listener end lyric synchronization
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
    if (available) {
    mTRTCCloud.startRemoteView(userId, null);
    } else {
    mTRTCCloud.stopRemoteView(userId);
    }
    }
    
    @Override
    public void onRecvSEIMsg(String userId, byte[] data) {
    String result = new String(data);
    try {
    JSONObject jsonObject = new JSONObject(result);
    int musicId = jsonObject.getInt("musicId");
    long progress = jsonObject.getLong("progress");
    long duration = jsonObject.getLong("duration");
    } catch (JSONException e) {
    e.printStackTrace();
    }
    ...
    // TODO: The logic of updating the lyric control.
    // Based on the received latest progress and the local lyrics progress deviation, determine whether a lyric control seek is necessary.
    ...
    }
    Note:
    Listeners need to actively subscribe to the performer's video streams in order to receive the SEI messages carried by black frames.
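    The seek decision left as a TODO above could look like the following sketch. LyricView and its getCurrentProgress()/seekTo() methods are placeholders for whatever lyric control you use:
    private static final long LYRIC_SEEK_THRESHOLD_MS = 300; // illustrative threshold
    
    private void updateLyricProgress(long remoteProgressMs) {
        long localProgressMs = mLyricView.getCurrentProgress(); // hypothetical getter on your lyric control
        if (Math.abs(remoteProgressMs - localProgressMs) > LYRIC_SEEK_THRESHOLD_MS) {
            mLyricView.seekTo(remoteProgressMs);                // hypothetical seek call on your lyric control
        }
    }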
    3. Exit the room.
    // Exit the room.
    mTRTCCloud.exitRoom();
    
    // Exit room event callback.
    @Override
    public void onExitRoom(int reason) {
    if (reason == 0) {
    Log.d(TAG, "Actively call exitRoom to exit the room.");
    } else if (reason == 1) {
    Log.d(TAG, "Removed from the current room by the server.");
    } else if (reason == 2) {
    Log.d(TAG, "The current room has been dissolved.");
    }
    }

    Scenario 2: Real-time chorus

    Perspective 1: Lead singer actions

    Sequence diagram

    
    
    
    1. Dual instances enter the room.
    // Create a TRTCCloud primary instance (vocal instance).
    TRTCCloud mTRTCCloud = TRTCCloud.sharedInstance(context);
    // Create a TRTCCloud sub-instance (music instance).
    TRTCCloud subCloud = mTRTCCloud.createSubCloud();
    
    // The primary instance (vocal instance) enters the room.
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    params.sdkAppId = SDKAppId;
    params.userId = UserId;
    params.userSig = UserSig;
    params.role = TRTCCloudDef.TRTCRoleAnchor;
    params.strRoomId = RoomId;
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    
    // The sub-instance enables manual subscription mode. By default it does not subscribe to remote streams.
    subCloud.setDefaultStreamRecvMode(false, false);
    
    // The sub-instance (music instance) enters the room.
    TRTCCloudDef.TRTCParams bgmParams = new TRTCCloudDef.TRTCParams();
    bgmParams.sdkAppId = SDKAppId;
    // The sub-instance username must not duplicate with other users in the room.
    bgmParams.userId = UserId + "_bgm";
    bgmParams.userSig = UserSig;
    bgmParams.role = TRTCCloudDef.TRTCRoleAnchor;
    bgmParams.strRoomId = RoomId;
    subCloud.enterRoom(bgmParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    Note:
    In a real-time chorus solution, the lead singer's end must create a primary instance and a sub-instance to publish the vocal and the accompaniment music, respectively.
    The sub-instance does not need to subscribe to other users' audio streams in the room, so it is recommended to enable manual subscription mode, which must be enabled before entering the room.
    2. Configure settings after entering the room.
    // Event callback for the result of primary instance entering the room.
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // The primary instance unsubscribes from the music stream published by the sub-instance.
    mTRTCCloud.muteRemoteAudio(UserId + "_bgm", true);
    // The primary instance uses the experimental API to enable black frame insertion.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"enableBlackStream\",\"params\":{\"enable\":true}}");
    // The primary instance uses the experimental API to enable chorus mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"enableChorus\",\"params\":{\"enable\":true,\"audioSource\":0}}");
    // The primary instance uses the experimental API to enable low-latency mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":true}}");
    // The primary instance enables volume level callback.
    mTRTCCloud.enableAudioVolumeEvaluation(300, false);
    // The primary instance sets the global media volume type.
    mTRTCCloud.setSystemVolumeType(TRTCCloudDef.TRTCSystemVolumeTypeMedia);
    // The primary instance captures and publishes local audio, and sets audio quality.
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_MUSIC);
    } else {
    // result indicates the error code when you fail to enter the room.
    Log.d(TAG, "Enter room failed");
    }
    }
    
    // Event callback for the result of sub-instance entering the room.
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // The sub-instance uses the experimental API to enable chorus mode.
    subCloud.callExperimentalAPI(
    "{\"api\":\"enableChorus\",\"params\":{\"enable\":true,\"audioSource\":1}}");
    // The sub-instance uses the experimental API to enable low-latency mode.
    subCloud.callExperimentalAPI(
    "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":true}}");
    // The sub-instance sets global media volume type.
    subCloud.setSystemVolumeType(TRTCCloudDef.TRTCSystemVolumeTypeMedia);
    // The sub-instance sets audio quality.
    subCloud.setAudioQuality(TRTCCloudDef.TRTC_AUDIO_QUALITY_MUSIC);
    } else {
    // result indicates the error code when you fail to enter the room.
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    Both the primary instance and sub-instance must use the experimental APIs to enable chorus mode and low-latency mode to optimize the chorus experience. Note the difference in the audioSource parameter.
    3. Push the mixed stream back to the room.
    private void startPublishMediaToRoom(String roomId, String userId) {
    // Create TRTCPublishTarget object.
    TRTCCloudDef.TRTCPublishTarget target = new TRTCCloudDef.TRTCPublishTarget();
    // After mixing, the stream is relayed back to the room.
    target.mode = TRTCCloudDef.TRTC_PublishMixStream_ToRoom;
    target.mixStreamIdentity.strRoomId = roomId;
    // The mixing stream robot's username must not duplicate with other users in the room.
    target.mixStreamIdentity.userId = userId + "_robot";
    
    // Set the encoding parameters of the transcoded audio stream (can be customized).
    TRTCCloudDef.TRTCStreamEncoderParam trtcStreamEncoderParam = new TRTCCloudDef.TRTCStreamEncoderParam();
    trtcStreamEncoderParam.audioEncodedChannelNum = 2;
    trtcStreamEncoderParam.audioEncodedKbps = 64;
    trtcStreamEncoderParam.audioEncodedCodecType = 2;
    trtcStreamEncoderParam.audioEncodedSampleRate = 48000;
    
    // Set the encoding parameters of the transcoded video stream (black frame mixing required).
    trtcStreamEncoderParam.videoEncodedFPS = 15;
    trtcStreamEncoderParam.videoEncodedGOP = 3;
    trtcStreamEncoderParam.videoEncodedKbps = 30;
    trtcStreamEncoderParam.videoEncodedWidth = 64;
    trtcStreamEncoderParam.videoEncodedHeight = 64;
    
    // Set audio mixing parameters.
    TRTCCloudDef.TRTCStreamMixingConfig trtcStreamMixingConfig = new TRTCCloudDef.TRTCStreamMixingConfig();
    // By default, leave this field empty. It indicates that all audio in the room will be mixed.
    trtcStreamMixingConfig.audioMixUserList = null;
    
    // Configure video mixed-stream template (black frame mixing required).
    TRTCCloudDef.TRTCVideoLayout videoLayout = new TRTCCloudDef.TRTCVideoLayout();
    trtcStreamMixingConfig.videoLayoutList.add(videoLayout);
    
    // Start mixing and pushing back.
    mTRTCCloud.startPublishMediaStream(target, trtcStreamEncoderParam, trtcStreamMixingConfig);
    }
    Note:
    To maintain alignment between chorus vocals and accompaniment music, it is recommended to enable pushing the mixed stream back to the room. The on-mic chorus members mutually subscribe to single streams, and off-mic audiences by default only subscribe to mixed streams.
    The mixing stream robot, acting as an independent user, enters the room to pull, mix, and push streams. Its username must not duplicate that of any other user in the room; otherwise, the two users may kick each other out of the room.
    4. Search for and request songs.
    Search for songs and acquire music resources through the business backend. Obtain identifiers such as the MusicId, the song's URL (MusicUrl), and the lyrics URL (LyricsUrl).
    It is recommended that the business side select an appropriate music repository product to provide licensed music resources.
    5. NTP synchronization.
    TXLiveBase.setListener(new TXLiveBaseListener() {
    @Override
    public void onUpdateNetworkTime(int errCode, String errMsg) {
    super.onUpdateNetworkTime(errCode, errMsg);
    // errCode 0: Time synchronization successful and deviation within 30 ms. 1: Time synchronization successful but deviation possibly above 30 ms. -1: Time synchronization failed.
    if (errCode == 0) {
    // Time synchronization successful and NTP timestamp obtained.
    long ntpTime = TXLiveBase.getNetworkTimestamp();
    } else {
    // If time synchronization fails, an attempt to resynchronize can be made.
    TXLiveBase.updateNetworkTime();
    }
    }
    });
    
    TXLiveBase.updateNetworkTime();
    Note:
    NTP time synchronization results can reflect the current network quality of the application user. To ensure a good chorus experience, it is recommended not to allow users to initiate chorus if time synchronization fails.
    6. Send chorus signaling.
    Timer mTimer = new Timer();
    mTimer.schedule(new TimerTask() {
    @Override
    public void run() {
    try {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("cmd", "startChorus");
    // Agreed chorus start time: Current NTP time + delayed playback time (for example, 3 seconds).
    jsonObject.put("startPlayMusicTS", TXLiveBase.getNetworkTimestamp() + 3000);
    jsonObject.put("musicId", musicId);
    jsonObject.put("musicDuration", subCloud.getAudioEffectManager().getMusicDurationInMS(originMusicUri));
    mTRTCCloud.sendCustomCmdMsg(1, jsonObject.toString().getBytes(), false, false);
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    }, 0, 1000);
    Note:
    The lead singer needs to cyclically broadcast chorus signaling to the room at a fixed time interval (e.g., every 1 second), so that new users who join mid-session can also participate in the chorus.
    7. Load and play accompaniment.
    // Obtain audio effects management.
    TXAudioEffectManager mTXAudioEffectManager = subCloud.getAudioEffectManager();
    
    // originMusicId: Custom identifier for the original vocal music. originMusicUrl: URL of the original vocal music resource.
    TXAudioEffectManager.AudioMusicParam originMusicParam = new TXAudioEffectManager.AudioMusicParam(originMusicId, originMusicUrl);
    // Publish original music to the remote.
    originMusicParam.publish = true;
    // Music start playing time point (in milliseconds).
    originMusicParam.startTimeMS = 0;
    
    // accompMusicId: Custom identifier for the accompaniment music. accompMusicUrl: URL of the accompaniment music resource.
    TXAudioEffectManager.AudioMusicParam accompMusicParam = new TXAudioEffectManager.AudioMusicParam(accompMusicId, accompMusicUrl);
    // Publish accompaniment music to the remote.
    accompMusicParam.publish = true;
    // Music start playing time point (in milliseconds).
    accompMusicParam.startTimeMS = 0;
    
    // Preload the original vocal music.
    mTXAudioEffectManager.preloadMusic(originMusicParam);
    // Preload the accompaniment music.
    mTXAudioEffectManager.preloadMusic(accompMusicParam);
    
    // Start playing the original vocal music after a delayed playback time (for example, 3 seconds).
    mTXAudioEffectManager.startPlayMusic(originMusicParam);
    // Start playing the accompaniment music after a delayed playback time (for example, 3 seconds).
    mTXAudioEffectManager.startPlayMusic(accompMusicParam);
    
    // Switch to the original vocal music.
    mTXAudioEffectManager.setMusicPlayoutVolume(originMusicId, 100);
    mTXAudioEffectManager.setMusicPlayoutVolume(accompMusicId, 0);
    mTXAudioEffectManager.setMusicPublishVolume(originMusicId, 100);
    mTXAudioEffectManager.setMusicPublishVolume(accompMusicId, 0);
    
    // Switch to the accompaniment music.
    mTXAudioEffectManager.setMusicPlayoutVolume(originMusicId, 0);
    mTXAudioEffectManager.setMusicPlayoutVolume(accompMusicId, 100);
    mTXAudioEffectManager.setMusicPublishVolume(originMusicId, 0);
    mTXAudioEffectManager.setMusicPublishVolume(accompMusicId, 100);
    Note:
    It is recommended to preload music before starting playback. By loading music resources into memory in advance, you can effectively reduce the load delay of music playback.
    In karaoke scenarios, both the original vocal and accompaniment need to be played simultaneously (distinguished by MusicID). The switch between the original vocal and accompaniment is achieved by adjusting the local and remote playback volumes.
    If the music being played has dual audio tracks (including both the original vocal and accompaniment), switching between them can be achieved by specifying the music's playback track using setMusicTrack.
    8. Accompaniment Synchronization
    // Agreed chorus start time.
    long mStartPlayMusicTs = jsonObject.getLong("startPlayMusicTS");
    // Actual playback progress of the current accompaniment music.
    long currentProgress = subCloud.getAudioEffectManager().getMusicCurrentPosInMS(musicId);
    // Ideal playback progress of the current accompaniment music.
    long estimatedProgress = TXLiveBase.getNetworkTimestamp() - mStartPlayMusicTs;
    // When the progress difference exceeds 50 ms, corrections are made.
    if (estimatedProgress >= 0 && Math.abs(currentProgress - estimatedProgress) > 50) {
    subCloud.getAudioEffectManager().seekMusicToPosInMS(musicId, (int) estimatedProgress);
    }
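    The correction above needs to run repeatedly while the chorus is in progress. A sketch, assuming a one-second interval (stop the timer when playback ends):
    Timer syncTimer = new Timer();
    syncTimer.schedule(new TimerTask() {
        @Override
        public void run() {
            // Compare actual and ideal progress and seek if the deviation exceeds 50 ms.
            long currentProgress = subCloud.getAudioEffectManager().getMusicCurrentPosInMS(musicId);
            long estimatedProgress = TXLiveBase.getNetworkTimestamp() - mStartPlayMusicTs;
            if (estimatedProgress >= 0 && Math.abs(currentProgress - estimatedProgress) > 50) {
                subCloud.getAudioEffectManager().seekMusicToPosInMS(musicId, (int) estimatedProgress);
            }
        }
    }, 0, 1000);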
    9. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Synchronize local lyrics, and transmit song progress via SEI.
    mTXAudioEffectManager.setMusicObserver(musicId, new TXAudioEffectManager.TXMusicPlayObserver() {
    @Override
    public void onStart(int id, int errCode) {
    // Start playing music.
    }
    @Override
    public void onPlayProgress(int id, long curPtsMs, long durationMs) {
    // Determine whether seek is needed based on the latest progress and the local lyrics progress deviation.
    // Song progress is transmitted by sending an SEI message.
    try {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("musicId", id);
    jsonObject.put("progress", curPtsMs);
    jsonObject.put("duration", durationMs);
    mTRTCCloud.sendSEIMsg(jsonObject.toString().getBytes(), 1);
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    @Override
    public void onComplete(int id, int errCode) {
    // Music playback completed.
    }
    });
    Note:
    Be sure to set the playback event callback using this API before playing the background music, so that you are notified of the background music's playback progress.
    The frequency of the SEI messages sent by the performer is determined by the event callback frequency. Also, the playback progress can be actively synchronized on a schedule through getMusicCurrentPosInMS.
    10. Become a listener and exit the room.
    // The sub-instance uses the experimental API to disable chorus mode.
    subCloud.callExperimentalAPI(
    "{\"api\":\"enableChorus\",\"params\":{\"enable\":false,\"audioSource\":1}}");
    // The sub-instance uses the experimental API to disable low-latency mode.
    subCloud.callExperimentalAPI(
    "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":false}}");
    // The sub-instance switches to the audience role.
    subCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    // The sub-instance stops playing accompaniment music.
    subCloud.getAudioEffectManager().stopPlayMusic(musicId);
    // The sub-instance exits the room.
    subCloud.exitRoom();
    
    // The primary instance uses the experimental API to disable black frame insertion.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"enableBlackStream\",\"params\":{\"enable\":false}}");
    // The primary instance uses the experimental API to disable chorus mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"enableChorus\",\"params\":{\"enable\":false,\"audioSource\":0}}");
    // The primary instance uses the experimental API to disable low-latency mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":false}}");
    // The primary instance switches to the audience role.
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    // The primary instance stops local audio capture and publishing.
    mTRTCCloud.stopLocalAudio();
    // The primary instance exits the room.
    mTRTCCloud.exitRoom();

    Perspective 2: Chorus member actions

    Sequence diagram

    
    
    
    1. Enter the room.
    public void enterRoom(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // Example of entering the room as an audience role.
    params.role = TRTCCloudDef.TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Event callback for the result of entering the room.
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    Log.d(TAG, "Enter room succeed");
    } else {
    // result indicates the error code when you fail to enter the room.
    Log.d(TAG, "Enter room failed");
    }
    }
    2. Go live on streams.
    // Switched to the anchor role.
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
    
    // Event callback for switching the role.
    @Override
    public void onSwitchRole(int errCode, String errMsg) {
    if (errCode == TXLiteAVCode.ERR_NULL) {
    // Cancel subscription to music streams published by the lead singer sub-instance.
    mTRTCCloud.muteRemoteAudio(mBgmUserId, true);
    // Use the experimental API to enable chorus mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"enableChorus\",\"params\":{\"enable\":true,\"audioSource\":0}}");
    // Use the experimental API to enable low-latency mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":true}}");
    // Set media volume type.
    mTRTCCloud.setSystemVolumeType(TRTCCloudDef.TRTCSystemVolumeTypeMedia);
    // Upstream local audio streams and set audio quality.
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_MUSIC);
    }
    }
    Note:
    To minimize delay, all chorus members play the accompaniment music locally. Therefore, it is necessary to cancel subscriptions to music streams published by the lead singer.
    Chorus members also need to use the experimental API to enable chorus mode and low-latency mode to optimize the chorus experience.
    In karaoke scenarios, it is recommended to set the full-range media volume and music quality to achieve a high-fidelity listening experience.
    3. NTP synchronization.
    TXLiveBase.setListener(new TXLiveBaseListener() {
    @Override
    public void onUpdateNetworkTime(int errCode, String errMsg) {
    super.onUpdateNetworkTime(errCode, errMsg);
    // errCode 0: Time synchronization successful and deviation within 30 ms. 1: Time synchronization successful but deviation possibly above 30 ms. -1: Time synchronization failed.
    if (errCode == 0) {
    // Time synchronization successful and NTP timestamp obtained.
    long ntpTime = TXLiveBase.getNetworkTimestamp();
    } else {
    // If time synchronization fails, an attempt to resynchronize can be made.
    TXLiveBase.updateNetworkTime();
    }
    }
    });
    
    TXLiveBase.updateNetworkTime();
    Note:
    NTP time synchronization results can reflect the current network quality of the application user. To ensure a good chorus experience, it is recommended not to allow users to participate in the chorus if time synchronization fails.
    4. Receive chorus signaling.
    @Override
    public void onRecvCustomCmdMsg(String userId, int cmdID, int seq, byte[] message) {
    try {
    JSONObject json = new JSONObject(new String(message, "UTF-8"));
    // Match the chorus signaling.
    if (json.getString("cmd").equals("startChorus")) {
    long startPlayMusicTs = json.getLong("startPlayMusicTS");
    int musicId = json.getInt("musicId");
    long musicDuration = json.getLong("musicDuration");
    // Time difference between the agreed chorus start time and the current time.
    long delayMs = startPlayMusicTs - TXLiveBase.getNetworkTimestamp();
    }
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    Note:
    Once a chorus member receives the chorus signaling and joins in, the status should be changed to Chorus In Progress, and chorus signaling should not be responded to again until this chorus round ends. A sketch of such a guard follows.
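    A minimal sketch of such a guard; the mChorusInProgress flag and handleStartChorus method are illustrative:
    private volatile boolean mChorusInProgress = false;
    
    private void handleStartChorus(long startPlayMusicTs, int musicId, long musicDuration) {
        if (mChorusInProgress) {
            return; // already joined this round; ignore repeated signaling
        }
        mChorusInProgress = true;
        // ... compute delayMs and start playback as shown in the next step ...
        // Reset mChorusInProgress to false when playback completes, e.g., in TXMusicPlayObserver.onComplete().
    }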
    5. Play accompaniment, and start the chorus.
    if (delayMs > 0) { // The chorus has not started.
    // Begin to preload music.
    preloadMusic(musicId, 0L);
    // Play music after a delay of delayMs.
    startPlayMusic(musicId, 0L);
    } else if (Math.abs(delayMs) < musicDuration) { // The chorus is in progress.
    // Play start time: Absolute value of the time difference + preload delay (e.g., 400 ms).
    long startTimeMS = Math.abs(delayMs) + 400;
    // Begin to preload music.
    preloadMusic(musicId, startTimeMS);
    // Start playing music after a preload delay (e.g., 400 ms).
    startPlayMusic(musicId, startTimeMS);
    } else { // The chorus has ended.
    // Joining the chorus is not allowed.
    }
    
    // Preload music.
    public void preloadMusic(int musicId, long startTimeMS) {
    // musicId: Obtained from chorus signaling. musicUrl: Corresponding music resource URL.
    TXAudioEffectManager.AudioMusicParam musicParam = new
    TXAudioEffectManager.AudioMusicParam(musicId, musicUrl);
    // Only local music playback.
    musicParam.publish = false;
    // Music start playing time point (in milliseconds).
    musicParam.startTimeMS = startTimeMS;
    mTRTCCloud.getAudioEffectManager().preloadMusic(musicParam);
    }
    
    // Begin to play music.
    public void startPlayMusic(int musicId, long startTimeMS) {
    // musicId: Obtained from chorus signaling. musicUrl: Corresponding music resource URL.
    TXAudioEffectManager.AudioMusicParam musicParam = new
    TXAudioEffectManager.AudioMusicParam(musicId, musicUrl);
    // Only local music playback.
    musicParam.publish = false;
    // Music start playing time point (in milliseconds).
    musicParam.startTimeMS = startTimeMS;
    mTRTCCloud.getAudioEffectManager().startPlayMusic(musicParam);
    }
    Note:
    To minimize transmission delay as much as possible, chorus members perform along with the local playback of accompaniment music, and they do not need to publish or receive remote music.
    Based on delayMs, the current chorus status can be determined. Developers must implement the delayed startPlayMusic call for each status themselves; a sketch follows.
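    As one possible approach, the delayed call can be scheduled with a Handler on the main thread. preloadMusic and startPlayMusic are the helper methods defined above, and the 400 ms preload delay mirrors the comments in the snippet:
    Handler handler = new Handler(Looper.getMainLooper());
    if (delayMs > 0) {
        // The chorus has not started: preload now and start at the agreed time.
        preloadMusic(musicId, 0L);
        handler.postDelayed(() -> startPlayMusic(musicId, 0L), delayMs);
    } else if (Math.abs(delayMs) < musicDuration) {
        // The chorus is in progress: start from the estimated position after the preload delay.
        long startTimeMS = Math.abs(delayMs) + 400;
        preloadMusic(musicId, startTimeMS);
        handler.postDelayed(() -> startPlayMusic(musicId, startTimeMS), 400);
    }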
    6. Accompaniment Synchronization
    // Agreed chorus start time.
    long mStartPlayMusicTs = jsonObject.getLong("startPlayMusicTS");
    // Actual playback progress of the current accompaniment music.
    long currentProgress = mTRTCCloud.getAudioEffectManager().getMusicCurrentPosInMS(musicId);
    // Ideal playback progress of the current accompaniment music.
    long estimatedProgress = TXLiveBase.getNetworkTimestamp() - mStartPlayMusicTs;
    // When the progress difference exceeds 50 ms, corrections are made.
    if (estimatedProgress >= 0 && Math.abs(currentProgress - estimatedProgress) > 50) {
    mTRTCCloud.getAudioEffectManager().seekMusicToPosInMS(musicId, (int) estimatedProgress);
    }
    7. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Local lyric synchronization.
    mTXAudioEffectManager.setMusicObserver(musicId, new TXAudioEffectManager.TXMusicPlayObserver() {
    @Override
    public void onStart(int id, int errCode) {
    // Start playing music.
    }
    @Override
    public void onPlayProgress(int id, long curPtsMs, long durationMs) {
    // TODO: The logic of updating the lyric control.
    // Determine whether seek in the lyrics control is needed based on the latest progress and the local lyrics progress deviation.
    
    }
    @Override
    public void onComplete(int id, int errCode) {
    // Music playback completed.
    }
    });
    Note:
    Be sure to set the playback event callback using this API before playing the background music, so that you are notified of the background music's playback progress.
    8. Become a listener and exit the room.
    // Use the experimental API to disable chorus mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"enableChorus\",\"params\":{\"enable\":false,\"audioSource\":0}}");
    // Use the experimental API to disable low-latency mode.
    mTRTCCloud.callExperimentalAPI(
    "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":false}}");
    // Switched to the audience role.
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    // Stop playing accompaniment music.
    mTRTCCloud.getAudioEffectManager().stopPlayMusic(musicId);
    // Stop local audio capture and publishing.
    mTRTCCloud.stopLocalAudio();
    // Exit the room.
    mTRTCCloud.exitRoom();

    Perspective 3: Listener actions

    Sequence diagram

    
    
    
    1. Enter the room.
    public void enterRoom(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example.
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend.
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID.
    params.sdkAppId = SDKAppID;
    // It is recommended to enter the room as an audience role.
    params.role = TRTCCloudDef.TRTCRoleAudience;
    // LIVE should be selected for the room entry scenario.
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Event callback for the result of entering the room.
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // result indicates the time taken (in milliseconds) to join the room.
    Log.d(TAG, "Enter room succeed");
    } else {
    // result indicates the error code when you fail to enter the room.
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    To better transmit SEI messages for lyrics synchronization, it is recommended to choose TRTC_APP_SCENE_LIVE for room-entry scenarios.
    Under the automatic subscription mode (default), audiences automatically subscribe to and play the on-mic anchor's audio and video streams upon entering the room.
    2. Lyric synchronization
    Download lyrics.
    Obtain the target lyrics download link, LyricsUrl, from the business backend, and cache the target lyrics locally.
    Listener end lyric synchronization
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
    if (available) {
    mTRTCCloud.startRemoteView(userId, null);
    } else {
    mTRTCCloud.stopRemoteView(userId);
    }
    }
    
    @Override
    public void onRecvSEIMsg(String userId, byte[] data) {
    String result = new String(data);
    try {
    JSONObject jsonObject = new JSONObject(result);
    int musicId = jsonObject.getInt("musicId");
    long progress = jsonObject.getLong("progress");
    long duration = jsonObject.getLong("duration");
    } catch (JSONException e) {
    e.printStackTrace();
    }
    ...
    // TODO: The logic of updating the lyric control.
    // Based on the received latest progress and the local lyrics progress deviation, determine whether a lyric control seek is necessary.
    ...
    }
    Note:
    Listeners need to actively subscribe to the lead singer's video streams in order to receive the SEI messages carried by black frames.
    If the lead singer's mixed stream also mixes in black frames, then only subscribing to the mixing stream robot's video stream is required.
    3. Exit the room.
    // Exit the room.
    mTRTCCloud.exitRoom();
    
    // Exit room event callback.
    @Override
    public void onExitRoom(int reason) {
    if (reason == 0) {
    Log.d(TAG, "Actively call exitRoom to exit the room.");
    } else if (reason == 1) {
    Log.d(TAG, "Removed from the current room by the server.");
    } else if (reason == 2) {
    Log.d(TAG, "The current room has been dissolved.");
    }
    }

    Advanced Features

    Music scoring module integration

    Music scoring provides users with multi-dimensional singing scoring capabilities. Currently, supported scoring dimensions include intonation and rhythm.
    1. Prepare scoring-related files.
    Prepare in advance the performance recording files to be scored, the original music standard files, and the MIDI pitch files, and upload them to COS storage.
    2. Create a music scoring task.
    Request method: POST (HTTP).
    Request header: Content-Type: application/json.
    A request sample is as follows:
    Request sample:
    Response sample:
    {
    "action": "CreateJob",
    "secretId": "{secretId}",
    "secretKey": "{secretKey}",
    "createJobRequest": {
    "customId": "{customId}",
    "callback": "{callback}",
    "inputs": [{ "url": "{url}" }],
    "outputs": [
    {
    "contentId": "{contentId}",
    "destination": "{destination}",
    "inputSelectors": [0],
    "smartContentDescriptor": {
    "outputPrefix": "{outputPrefix}",
    "vocalScore": {
    "standardAudio": {
    "midi": {"url":"{url}"},
    "standardWav": {"url":"{url}"},
    "alignWav": {"url":"{url}"}
    }
    }
    }
    }
    ]
    }
    }
    {
    "requestId": "ac004192-110b-46e3-ade8-4e449df84d60",
    "createJobResponse": {
    "job": {
    "id": "13f342e4-6866-450e-b44e-3151431c578b",
    "state": 1,
    "customId": "{customId}",
    "callback": "{callback}",
    "inputs": [{ "url": "{url}" }],
    "outputs": [
    {
    "contentId": "{contentId}",
    "destination": "{destination}",
    "inputSelectors": [0],
    "smartContentDescriptor": {
    "outputPrefix": "{outputPrefix}",
    "vocalScore": {
    "standardAudio": {
    "midi": {"url":"{url}"},
    "standardWav": {"url":"{url}"},
    "alignWav": {"url":"{url}"}
    }
    }
    }
    }
    ],
    "timing": {
    "createdAt": "1603432763000",
    "startedAt": "0",
    "completedAt": "0"
    }
    }
    }
    }
    3. Obtain music scoring results.
    Results can be obtained in two ways: active query and passive callback.
    Active query: Query with the job ID returned in the response packet when the task was created. If the queried task has succeeded (state=3), its Output carries the smartContentResult structure, whose vocalScore field stores the name of the result JSON file. You can construct the output file's COS path from the COS and destination information in Output.
    Request sample:
    Response sample:
    {
    "action": "GetJob",
    "secretId": "{secretId}",
    "secretKey": "{secretKey}",
    "getJobRequest": {
    "id": "{id}"
    }
    }
    {
    "requestId": "c9845a99-34e3-4b0f-80f5-f0a2a0ee8896",
    "getJobResponse": {
    "job": {
    "id": "a95e9d74-6602-4405-a3fc-6408a76bcc98",
    "state": 3,
    "customId": "{customId}",
    "callback": "{callback}",
    "timing": {
    "createdAt": "1610513575000",
    "startedAt": "1610513575000",
    "completedAt": "1610513618000"
    },
    "inputs": [{ "url": "{url}" }],
    "outputs": [
    {
    "contentId": "{contentId}",
    "destination": "{destination}",
    "inputSelectors": [0],
    "smartContentDescriptor": {
    "outputPrefix": "{outputPrefix}",
    "vocalScore": {
    "standardAudio": {
    "midi": {"url":"{url}"},
    "standardWav": {"url":"{url}"},
    "alignWav": {"url":"{url}"}
    }
    }
    },
    "smartContentResult": {
    "vocalScore": "out.json"
    }
    }
    ]
    }
    }
    }
    Passive callback: Fill in the callback field when creating a task. After the task reaches the completed state (COMPLETED/ERROR), the platform sends the entire Job structure to the address specified by callback. It is recommended to obtain task results through passive callbacks. See the active query sample (under getJobResponse) for the Job structure.
    Note:
    For more detailed intelligent music solution integration instructions for the music scoring module, see Music Scoring Integration.
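    For illustration, a minimal callback receiver sketch (not an official sample). It assumes the platform POSTs the Job JSON described above to your callback URL; the port and path are placeholders, and a production service would use your own web framework and verify the request:
    import com.sun.net.httpserver.HttpServer;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.util.Scanner;
    
    public class ScoreCallbackServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/score-callback", exchange -> {
                try (InputStream in = exchange.getRequestBody();
                     Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
                    String body = scanner.hasNext() ? scanner.next() : "";
                    // body carries the Job structure (see the getJobResponse sample above);
                    // parse it and read outputs[i].smartContentResult.vocalScore once the state indicates completion.
                    System.out.println(body);
                } finally {
                    exchange.sendResponseHeaders(200, -1);
                    exchange.close();
                }
            });
            server.start();
        }
    }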

    Transparent transmission of single stream volume in mixed streams.

    After mixed streaming is enabled, the audience cannot directly obtain an on-mic anchor's single stream volume. To transparently transmit the single stream volume, the room owner can use SEI messages to transmit the volume values of all on-mic anchors reported by the volume callback.
    @Override
    public void onUserVoiceVolume(ArrayList<TRTCCloudDef.TRTCVolumeInfo> userVolumes, int totalVolume) {
    super.onUserVoiceVolume(userVolumes, totalVolume);
    if (userVolumes != null && userVolumes.size() > 0) {
    // For storing volume values corresponding to on-mic users.
    HashMap<String, Integer> volumesMap = new HashMap<>();
    for (TRTCCloudDef.TRTCVolumeInfo user : userVolumes) {
    // Can set an appropriate volume threshold.
    if (user.volume > 10) {
    volumesMap.put(user.userId, user.volume);
    }
    }
    Gson gson = new Gson();
    String body = gson.toJson(volumesMap);
    // Transmit a collection of on-mic users' volume via SEI messages.
    mTRTCCloud.sendSEIMsg(body.getBytes(), 1);
    }
    }
    
    @Override
    public void onRecvSEIMsg(String userId, byte[] data) {
    Gson gson = new Gson();
    HashMap<String, Integer> volumesMap = new HashMap<>();
    try {
    String message = new String(data, "UTF-8");
    // Use a TypeToken so Gson restores the integer volume values with the correct types.
    volumesMap = gson.fromJson(message, new TypeToken<HashMap<String, Integer>>() {}.getType());
    for (String userId : volumesMap.keySet()) {
    // Print the volume levels of single streams of all on-mic users.
    Log.i(userId, String.valueOf(volumesMap.get(userId)));
    }
    } catch (UnsupportedEncodingException e) {
    e.printStackTrace();
    }
    }
    Note:
    To transparently transmit single stream volume through a mixed stream via SEI messages, the room owner must either be streaming video or have black frame insertion enabled, and the audience must actively subscribe to the room owner's video stream.

    Real-time network quality callback

    You can listen to onNetworkQuality to monitor the network quality of both local and remote users in real time. This callback fires every 2 seconds.
private class TRTCCloudImplListener extends TRTCCloudListener {
    @Override
    public void onNetworkQuality(TRTCCloudDef.TRTCQuality localQuality,
                                 ArrayList<TRTCCloudDef.TRTCQuality> remoteQuality) {
        // localQuality: its userId is empty; it represents the local user's network quality evaluation result.
        // remoteQuality: the remote users' network quality evaluation results, affected by both remote and local factors.
        switch (localQuality.quality) {
            case TRTCCloudDef.TRTC_QUALITY_Excellent:
                Log.i(TAG, "The current network is excellent.");
                break;
            case TRTCCloudDef.TRTC_QUALITY_Good:
                Log.i(TAG, "The current network is good.");
                break;
            case TRTCCloudDef.TRTC_QUALITY_Poor:
                Log.i(TAG, "The current network is moderate.");
                break;
            case TRTCCloudDef.TRTC_QUALITY_Bad:
                Log.i(TAG, "The current network is poor.");
                break;
            case TRTCCloudDef.TRTC_QUALITY_Vbad:
                Log.i(TAG, "The current network is very poor.");
                break;
            case TRTCCloudDef.TRTC_QUALITY_Down:
                Log.i(TAG, "The current network does not meet the minimum requirements of TRTC.");
                break;
            default:
                Log.i(TAG, "Undefined.");
                break;
        }
    }
}
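The callbacks above only fire after the listener is registered with the TRTCCloud instance, for example (assuming mTRTCCloud is obtained via TRTCCloud.sharedInstance):
// Register the listener so that onNetworkQuality and the other callbacks are delivered.
mTRTCCloud = TRTCCloud.sharedInstance(getApplicationContext());
mTRTCCloud.setListener(new TRTCCloudImplListener());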

    Advanced permission control

TRTC advanced permission control can be used to set different entry permissions for different rooms (such as advanced VIP rooms) and to control whether the audience is allowed to speak (for example, to handle ghost microphones).
    Step 1: Enable the Advanced Permission Control Switch in the TRTC console application's advanced features page.
    
    
    
    Note:
    Once advanced permission control is enabled for a certain SDKAppID, all users using that SDKAppID need to pass in the privateMapKey parameter in TRTCParams to successfully enter the room. Therefore, if you have users online using this SDKAppID, do not enable this feature.
    Step 2: Generate privateMapKey on the backend. For sample code, see privateMapKey computation source code.
    Step 3: Room entry verification & speaking permission verification with PrivateMapKey.
    Room entry verification
    TRTCCloudDef.TRTCParams mTRTCParams = new TRTCCloudDef.TRTCParams();
    mTRTCParams.sdkAppId = SDKAPPID;
    mTRTCParams.userId = mUserId;
    mTRTCParams.strRoomId = mRoomId;
    // UserSig obtained from the business backend.
    mTRTCParams.userSig = getUserSig();
    // PrivateMapKey obtained from the backend.
    mTRTCParams.privateMapKey = getPrivateMapKey();
    mTRTCParams.role = TRTCCloudDef.TRTCRoleAudience;
    mTRTCCloud.enterRoom(mTRTCParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    Speaking permission verification
    // Pass in the latest PrivateMapKey obtained from the backend into the role switching API.
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor, getPrivateMapKey());

    Exception Handling

    Exception error handling

    When the TRTC SDK encounters an unrecoverable error, the error will be thrown in the onError callback. For details, see Error Code Table.
    1. UserSig related
A UserSig verification failure will cause room entry to fail. You can use the UserSig tool to verify it.
Enumeration | Value | Description
ERR_TRTC_INVALID_USER_SIG | -3320 | Room entry parameter userSig is incorrect. Check whether TRTCParams.userSig is empty.
ERR_TRTC_USER_SIG_CHECK_FAILED | -100018 | UserSig verification failed. Check whether TRTCParams.userSig is filled in correctly or has expired.
    2. Room entry and exit related
If room entry fails, first verify that the room entry parameters are correct. The room entry and exit APIs must be called in pairs: even if room entry fails, the room exit API must still be called, as shown in the sketch after the table below.
Enumeration | Value | Description
ERR_TRTC_CONNECT_SERVER_TIMEOUT | -3308 | Room entry request timed out. Check whether your internet connection is lost or a VPN is enabled. You may also switch to 4G for testing.
ERR_TRTC_INVALID_SDK_APPID | -3317 | Room entry parameter sdkAppId is incorrect. Check whether TRTCParams.sdkAppId is empty.
ERR_TRTC_INVALID_ROOM_ID | -3318 | Room entry parameter roomId is incorrect. Check whether TRTCParams.roomId or TRTCParams.strRoomId is empty. Note that roomId and strRoomId cannot be used interchangeably.
ERR_TRTC_INVALID_USER_ID | -3319 | Room entry parameter userId is incorrect. Check whether TRTCParams.userId is empty.
ERR_TRTC_ENTER_ROOM_REFUSED | -3340 | Room entry request is denied. Check whether enterRoom is called consecutively to enter rooms with the same ID.
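To illustrate the pairing requirement mentioned above, a minimal sketch of handling the room entry result is shown below; the onEnterRoom callback follows the TRTC convention that a positive value is the time taken to enter the room in milliseconds and a negative value is an error code.
@Override
public void onEnterRoom(long result) {
    if (result > 0) {
        // A positive result is the time (in ms) taken to enter the room.
        Log.i(TAG, "Enter room succeeded, cost " + result + " ms");
    } else {
        // A negative result is an error code. Call exitRoom even on failure to keep the APIs paired.
        Log.w(TAG, "Enter room failed, error code: " + result);
        mTRTCCloud.exitRoom();
    }
}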
    3. Device related
Monitor device-related errors and prompt the user via the UI when they occur.
Enumeration | Value | Description
ERR_MIC_START_FAIL | -1302 | Failed to open the mic. For example, if there is an exception in the mic's configuration program (driver) on a Windows or macOS device, try disabling and re-enabling the device, restarting the machine, or updating the driver.
ERR_SPEAKER_START_FAIL | -1321 | Failed to open the speaker. For example, if there is an exception in the speaker's configuration program (driver) on a Windows or macOS device, try disabling and re-enabling the device, restarting the machine, or updating the driver.
ERR_MIC_OCCUPY | -1319 | The mic is occupied. This occurs when, for example, the user is in a call on the mobile device.
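As an illustration, a hedged onError handler that surfaces the device errors above through the UI might look like the following; the numeric codes come from the table, and showDeviceErrorDialog is a placeholder for your own prompt.
@Override
public void onError(int errCode, String errMsg, Bundle extraInfo) {
    switch (errCode) {
        case -1302: // ERR_MIC_START_FAIL
        case -1321: // ERR_SPEAKER_START_FAIL
        case -1319: // ERR_MIC_OCCUPY
            // showDeviceErrorDialog is a placeholder for your own UI prompt.
            showDeviceErrorDialog(errCode, errMsg);
            break;
        default:
            Log.e(TAG, "onError: " + errCode + " " + errMsg);
            break;
    }
}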

    Issues with IEMs

1. How to enable the IEM feature and set its volume?
// Enable in-ear monitoring (IEM).
mTRTCCloud.getAudioEffectManager().enableVoiceEarMonitor(true);
// Set the IEM volume; volume is an int value you define.
mTRTCCloud.getAudioEffectManager().setVoiceEarMonitorVolume(volume);
    Note:
IEM can be enabled in advance; there is no need to monitor audio route changes. Once headphones are connected, IEM automatically takes effect.
2. The IEM feature does not take effect after being enabled.
Due to the high hardware delay of Bluetooth headphones, it is recommended to prompt the anchor in the UI to wear wired headphones. Also note that not all smartphones deliver a good IEM experience; the TRTC SDK has already disabled this feature on some smartphones with poor performance.
3. High IEM delay
Check whether Bluetooth headphones are in use; because of their high hardware delay, wired headphones are recommended. You can also try reducing IEM delay by enabling hardware IEM through the experimental API setSystemAudioKitEnabled. Hardware IEM performs better and has lower delay, while software IEM has higher delay but better compatibility. Currently, the SDK defaults to hardware IEM on Huawei and vivo devices and to software IEM on other devices. If you encounter compatibility issues with hardware IEM, contact us to configure forced use of software IEM.

    Issues with NTP sync

1. NTP time sync finished, but the result may be inaccurate
NTP sync succeeded, but the deviation may still exceed 30 milliseconds. This indicates a poor client network environment with persistent RTT jitter.
2. Error in AddressResolver: No address associated with hostname
NTP sync failed, possibly because of a temporary exception in the local ISP's DNS resolution under the current network. Try again later.
    3. NTP service retry processing logic.
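As a reference only, a retry sketch along these lines might look like the code below. It assumes the TXLiveBase.updateNetworkTime() API and its onUpdateNetworkTime(errCode, errMsg) callback, and a simple bounded retry; check the exact names and error-code semantics against the SDK headers of the version you integrate.
// Assumption: TXLiveBase.updateNetworkTime() starts NTP sync and reports the result through
// TXLiveBaseListener.onUpdateNetworkTime(errCode, errMsg). Verify against your SDK version.
private int mNtpRetryCount = 0;

private void startNtpSync() {
    TXLiveBase.setListener(new TXLiveBaseListener() {
        @Override
        public void onUpdateNetworkTime(int errCode, String errMsg) {
            if (errCode == 0) {
                // Sync succeeded and the deviation is acceptable; the chorus can start.
            } else if (mNtpRetryCount < 3) {
                // The result may be inaccurate or the sync failed; retry a limited number of times.
                mNtpRetryCount++;
                TXLiveBase.updateNetworkTime();
            }
        }
    });
    TXLiveBase.updateNetworkTime();
}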
    
    
    

Issues with resource paths for playing music

In karaoke scenarios, when the TRTC SDK is used to play accompaniment music, you can play either local or online music resources. Currently, playback paths only support URLs of online resources, absolute paths of music files in the device's external storage, and the application's private directories. Paths under directories such as the Android assets directory are not supported.
    You can work around this issue by copying resource files from the assets directory to either the device external storage or the application private directory beforehand. Sample code is as follows:
public static void copyAssetsToFile(Context context, String name) {
    // The files directory under the application's external storage directory.
    String savePath = ContextCompat.getExternalFilesDirs(context, null)[0].getAbsolutePath();
    // The cache directory under the application's external storage directory.
    // String savePath = context.getExternalCacheDir().getAbsolutePath();
    // The files directory under the application's private storage directory.
    // String savePath = context.getFilesDir().getAbsolutePath();
    String filename = savePath + "/" + name;
    File dir = new File(savePath);
    // Create the directory if it does not exist.
    if (!dir.exists()) {
        dir.mkdirs();
    }
    try {
        if (!(new File(filename)).exists()) {
            InputStream is = context.getResources().getAssets().open(name);
            FileOutputStream fos = new FileOutputStream(filename);
            byte[] buffer = new byte[1024];
            int count = 0;
            while ((count = is.read(buffer)) > 0) {
                fos.write(buffer, 0, count);
            }
            fos.close();
            is.close();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
    Application External Storage Files Directory Path: /storage/emulated/0/Android/data/<package_name>/files/<file_name>.
Application External Storage Cache Directory Path: /storage/emulated/0/Android/data/<package_name>/cache/<file_name>.
    Application Private Storage Files Directory Path: /data/user/0/<package_name>/files/<file_name>.
    Note:
If the path you provide is an external storage path outside the application-specific directories, access may be denied on Android 10 and later devices because Google introduced Scoped Storage, a new storage management mechanism. You can temporarily bypass this by adding android:requestLegacyExternalStorage="true" inside the <application> tag of the AndroidManifest.xml file. This attribute only takes effect for applications with targetSdkVersion 29 (Android 10); applications targeting a higher SDK version are still recommended to use the application's private or external storage paths.
    For TRTC SDK v11.5 and later, playback of local music resources on Android devices via Content URI from Content Provider components is supported.
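For example, a minimal sketch of playing a track through a content:// URI (the URI below is a placeholder; in practice you would obtain it from the system media picker or a ContentProvider, and your SDK must be v11.5 or later):
// Placeholder content URI; obtain a real one from the system picker or a ContentProvider.
String contentUri = "content://media/external/audio/media/123";
TXAudioEffectManager.AudioMusicParam musicParam =
        new TXAudioEffectManager.AudioMusicParam(1001, contentUri);
musicParam.publish = true; // Also publish the music to remote users if needed.
mTRTCCloud.getAudioEffectManager().startPlayMusic(musicParam);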
    On Android 11 and HarmonyOS 3.0 or later, if you cannot access resource files in the external storage directory, you need to request the MANAGE_EXTERNAL_STORAGE permission:
    First, you need to add the following entry in your application's AndroidManifest file.
    <manifest ...>
    <!-- This is the permission itself -->
    <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" />
    
    <application ...>
    ...
    </application>
    </manifest>
    Then, guide users to manually grant this permission at the point in your application where it is needed.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
    if (!Environment.isExternalStorageManager()) {
        Intent intent = new Intent(Settings.ACTION_MANAGE_APP_ALL_FILES_ACCESS_PERMISSION);
        Uri uri = Uri.fromParts("package", getPackageName(), null);
        intent.setData(uri);
        startActivity(intent);
    }
} else {
    // On versions earlier than Android 11, the legacy runtime permission model can be used.
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_CODE);
}

    Issues with real-time chorus usage

    1. Why does the lead singer in real-time chorus scenarios need to use dual-instance streaming?
    In real-time chorus scenarios, to minimize end-to-end delay and achieve sync between vocals and accompaniment, a common approach is to use dual instances at the lead singer's end to separately upload vocal and accompaniment streams, while other chorus participants only upload their vocal streams and locally play the accompaniment. In this case, each chorus participant needs to subscribe to the lead singer's vocal stream, while refraining from subscribing to the lead singer's music stream. This setup can only be achieved by implementing dual-instance separate streaming.
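A minimal sketch of the dual-instance setup on the lead singer's end is shown below; the "_bgm" sub-instance userId, the room parameters, and the accompaniment path are illustrative only, and the sub-instance's userSig must also be generated on your backend.
// Main instance: captures and publishes the lead singer's vocals.
TRTCCloud mainCloud = TRTCCloud.sharedInstance(context);
mainCloud.enterRoom(vocalParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
mainCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_MUSIC);

// Sub-instance: enters the same room with a different userId (e.g. userId + "_bgm", illustrative)
// and publishes only the accompaniment without capturing the microphone.
// Depending on the SDK version, you may also need to enable audio publishing on the sub-instance.
TRTCCloud bgmCloud = mainCloud.createSubCloud();
bgmCloud.enterRoom(bgmParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
TXAudioEffectManager.AudioMusicParam musicParam =
        new TXAudioEffectManager.AudioMusicParam(1001, accompanimentPath);
musicParam.publish = true; // Publish the accompaniment as the sub-instance's audio stream.
bgmCloud.getAudioEffectManager().startPlayMusic(musicParam);

// Chorus members then mute the lead singer's accompaniment stream and play the accompaniment locally,
// e.g. mTRTCCloud.muteRemoteAudio(bgmUserId, true);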
2. Why is it recommended to enable mixed-stream pushback in real-time chorus scenarios?
    Having the audience pull multiple single streams at the same time is likely to result in misalignment between multiple vocal streams and accompaniment streams. Pulling a mixed stream can ensure absolute alignment of all streams and reduce downstream bandwidth.
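A hedged sketch of enabling pure-audio mixed-stream pushback via setMixTranscodingConfig is shown below; the pure-audio template mode mixes all on-mic audio automatically, and the audio parameters are illustrative values to adjust for your scenario.
// Pure-audio mixed stream: all on-mic anchors' audio is mixed in the cloud so the
// audience pulls a single, aligned stream with lower downstream bandwidth.
TRTCCloudDef.TRTCTranscodingConfig config = new TRTCCloudDef.TRTCTranscodingConfig();
config.mode = TRTCCloudDef.TRTC_TranscodingConfigMode_Template_PureAudio;
config.audioSampleRate = 48000; // Illustrative parameters; adjust as needed.
config.audioBitrate = 64;
config.audioChannels = 2;
mTRTCCloud.setMixTranscodingConfig(config);

// Pass null to stop mixing when the chorus ends.
// mTRTCCloud.setMixTranscodingConfig(null);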
    3. What are the uses of SEI in real-time chorus scenarios?
    Transmitting accompaniment music progress, for lyric sync on the audience's end.
    Transparently transmitting single stream volume through a mixed stream, for display as sound waves on the listener's end.
4. Loading accompaniment music takes a long time, causing significant playback delay.
Loading network music resources via the SDK incurs a certain delay, so it is recommended to preload the music before starting playback.
    mTRTCCloud.getAudioEffectManager().preloadMusic(musicParam);
    5. When singing along with accompaniment, the vocals are barely audible. Is the music overwhelming the vocals?
    If the default volume settings result in the accompaniment overwhelming the vocals, it is recommended to adjust the volume balance between the music and vocals accordingly.
    // Set the local playback volume of a piece of background music.
    mTRTCCloud.getAudioEffectManager().setMusicPlayoutVolume(musicID, volume);
    // Set the remote playback volume of a specific background music.
    mTRTCCloud.getAudioEffectManager().setMusicPublishVolume(musicID, volume);
    // Set the local and remote volume of all background music.
    mTRTCCloud.getAudioEffectManager().setAllMusicVolume(volume);
    // Set the volume of voice capture.
    mTRTCCloud.getAudioEffectManager().setVoiceCaptureVolume(volume);
    
    Contact Us

Contact our sales team or business advisors for help with your business.

    Technical Support

Open a ticket if you're looking for further assistance. Our ticket system is available 7x24.

    7x24 Phone Support