Last updated: 2024-07-18 14:26:14

    Business Process

    This section summarizes common business processes in the e-commerce live streaming scenario to help you better understand how the entire scenario is implemented.
    Anchor starts and ends live broadcast
    Anchor initiates the cross-room mic-connection PK
    The RTC audience enters the room for mic-connection
    The CDN audience enters the room for mic-connection
    Product Management for Merchandising
    The following diagram shows the process of an anchor (room owner) local preview, creating a room, entering a room to start live streaming, and leaving the room to end the live streaming.
    
    
    
    The following diagram shows the process of Anchor A inviting Anchor B for a cross-room PK. During the cross-room PK, the audiences in both rooms can see the PK mic-connection live streaming of the two room owners.
    
    
    
    The following diagram shows the process for RTC live interactive streaming audience to enter the room, apply for the mic-connection, end the mic-connection, and exit the room.
    
    
    
    The following diagram shows the process for RTC CDN live streaming audience to enter the room, apply for the mic-connection, end the mic-connection, and exit the room.
    
    
    
    The diagram below shows the process in live streaming merchandising scenarios, where the anchor edits and lists products and audiences browse and purchase products.
    
    
    

    Integration Preparations

    Step 1: activate the service

    E-commerce live streaming scenarios usually rely on paid PaaS services such as Real-Time Communication (TRTC), Beauty Special Effect, and Player SDK. Among them, TRTC provides real-time audio and video interactive capabilities, Special Effect provides beauty special effects, and the player is responsible for live and on-demand playback. You can choose which of the above services to activate according to your actual business needs.
    Activate TRTC Service
    Activate Special Effect Service
    Activate Player Service
    1. First, you need to log in to the TRTC Console to create an application. You can choose to upgrade the TRTC application version according to your needs. For example, the Professional edition unlocks more value-added features.
    
    
    
    Note:
    It is recommended to create two separate applications for testing and production environments. Each account (UIN) is provided with 10,000 minutes of free usage per month within one year.
    The TRTC monthly package is divided into Trial Version (by default), Basic Version, and Professional Version, which can unlock different value-added features and services. For details, see Version Features and Monthly Package Description.
    2. Once the application is created, you can find basic information about it under the Application Management - Application Overview section. Store the SDKAppID and SDKSecretKey for later use, and keep the key secure to prevent leakage and unauthorized traffic usage.
    
    
    
    1. Log in to Cloud Special Effect Console > Mobile License. Click Create Trial License (the Trial Version License is free for 14 days and can be extended once, for 28 days in total). Fill in the App Name, Package Name, and Bundle ID according to your actual needs. Select Special Effect and choose the capabilities to be tested: Advanced Package S1-07, Atomic Capability X1-01, Atomic Capability X1-02, and Atomic Capability X1-03. After checking them, accurately fill in the company name and industry type, upload the Company Service License, click OK to submit the review application, and wait for the manual review process.
    
    
    
    2. After the Trial License is successfully created, the page will display the generated License information. At this time, the License URL and License Key parameters are not yet effective and will only become active after the submission is approved. When configuring SDK initialization, you need to input both the License URL and License Key parameters. Keep the following information secure.
    
    
    
    1. Log in to VOD Console or CSS Console > License Management > Mobile License, and click Create Trial License.
    
    
    
    2. Enter the App Name, Package Name, and Bundle ID according to your actual needs, select Player Premium, and click OK.
    
    
    
    3. After the Trial License is successfully created, the page will display the generated License information. When initializing the SDK configuration, you need to enter two parameters: License Key and License URL, so save the following information carefully.
    
    
    
    Note:
    The License URL and Key for the same application are unique; after the Trial License is upgraded to the official version, the License URL and Key remain unchanged.

    Step 2: import SDK

    The TRTC SDK, Special Effect SDK, and Player SDK have all been released to the mavenCentral repository. You can configure Gradle to download and update them automatically.
    1. Add the dependency for the appropriate version of the SDK in dependencies.
    dependencies {
    // The full feature version of SDK, including TRTC, live streaming, short video, player, and other features
    implementation 'com.tencent.liteav:LiteAVSDK_Professional:latest.release'
    // Special Effect SDK example of S1-07 package is as follows
    implementation 'com.tencent.mediacloud:TencentEffect_S1-07:latest.release'
    }
    Note:
    Besides the recommended automatic loading method, you can also choose to download the SDK and manually import it. For details, see Manually Integrate the TRTC SDK and Manually Integrate Special Effect SDK.
    The implementation of the e-commerce live streaming scenario usually relies on a combination of multiple capabilities such as TRTC and the player. To avoid symbol conflicts that can arise from integrating several single-feature SDKs separately, it is recommended to integrate the full feature version of the SDK.
    2. Specify the CPU architecture used by the app in defaultConfig.
    defaultConfig {
    ndk {
    abiFilters "armeabi-v7a", "arm64-v8a"
    }
    }
    Note:
    The full feature version of LiteAVSDK supports armeabi/armeabi-v7a/arm64-v8a/x86/x86_64 architectures, while Special Effect SDK only supports armeabi-v7a/arm64-v8a architectures.
    3. Click Sync Now to automatically download the SDK and integrate it into your project. If your special effect package includes dynamic effect and filter features, then you need to download the corresponding package from the SDK Download Page, unzip the free filter materials (./assets/lut) and animated stickers (./MotionRes) from the package and place them in the following directories in your project:
    Dynamic Effect: ../assets/MotionRes
    Filter: ../assets/lut

    Step 3: project configuration

    1. Configure permissions
    Configure App permissions in AndroidManifest.xml. In an e-commerce live streaming scenario, both LiteAVSDK and the Special Effect SDK require the following permissions:
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature android:name="android.hardware.camera.autofocus" />
    Note:
    Do not set android:hardwareAccelerated="false". Disabling hardware acceleration will result in failure to render the other party's video stream.
    LiteAVSDK does not have built-in permission request logic, so you need to declare the corresponding permissions yourself. Some permissions (such as storage, recording and camera) also require runtime dynamic requests.
    If the Android project's targetSdkVersion is 31 or higher, or if the target device runs Android 12 or a newer version, Android officially requires dynamically requesting the android.permission.BLUETOOTH_CONNECT permission in code to use the Bluetooth feature properly. For more information, see Bluetooth Permissions.
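    For targetSdkVersion 31 and higher, the manifest declarations might look like the following sketch (the legacy BLUETOOTH permission is capped at API 30, and the new BLUETOOTH_CONNECT permission still has to be requested at runtime in code):

    ```xml
    <!-- Android 12 (API 31) and later: new Bluetooth permission, requested at runtime -->
    <uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
    <!-- Legacy Bluetooth permission, only needed on API 30 and below -->
    <uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" />
    ```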
    2. Obfuscation configuration
    Since we use Java's reflection features inside the SDK, you need to add relevant SDK classes to the non-obfuscation list in the proguard-rules.pro file:
    -keep class com.tencent.** { *; }
    -keep class org.light.** { *; }
    -keep class org.libpag.** { *; }
    -keep class org.extra.** { *; }
    -keep class com.gyailib.** { *; }
    -keep class androidx.exifinterface.** { *; }

    Step 4: authentication and authorization

    TRTC Authentication Credential
    Special Effect Authentication License
    Player Authentication License
    UserSig is a security protection signature designed by the cloud platform to prevent malicious attackers from misappropriating your cloud service usage rights. TRTC validates this authentication credential when entering a room.
    Debugging and testing stage: UserSig can be generated through Client Sample Code and Console Access, which are only used for debugging and testing.
    Production stage: It is recommended to use the server computing UserSig solution, which has a higher security level and helps prevent the client from being decompiled and reversed, to avoid the risk of key leakage.
    The specific implementation process is as follows:
    1. Before calling the initialization API of the SDK, your app must first request UserSig from your server.
    2. Your server generates the UserSig based on the SDKAppID and UserID.
    3. The server returns the generated UserSig to your app.
    4. Your app sends the obtained UserSig to the SDK through a specific API.
    5. The SDK submits the SDKAppID + UserID + UserSig to the cloud server for verification.
    6. The cloud platform verifies the validity of the UserSig.
    7. After the verification is passed, real-time audio and video services will be provided to the TRTC SDK.
    
    
    
    Note:
    The method of generating UserSig locally during the debugging and testing stage is not recommended for the online environment because it may be easily decompiled and reversed, causing key leakage.
    We provide server computation source code for UserSig in multiple programming languages (Java/GO/PHP/Nodejs/Python/C#/C++). For details, see Server Computation of UserSig.
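    For reference, the server computation follows the open-source TLS-Sig-API-v2 scheme: an HMAC-SHA256 signature over a fixed signing string, wrapped in a JSON document, zlib-compressed, and base64-encoded with URL-safe substitutions. Below is a hand-rolled Java sketch of that algorithm for illustration only; use the official server libraries in production, and note that the SDKAppID, user ID, and key values used with it are placeholders.

    ```java
    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.zip.Deflater;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class UserSigSketch {

        // Generates a UserSig in the style of the open-source TLS-Sig-API-v2:
        // 1) HMAC-SHA256 over a fixed signing string, 2) wrap the result in a JSON
        // document, 3) zlib-compress it, 4) base64-encode with URL-safe substitutions.
        public static String genUserSig(long sdkAppId, String userId, String secretKey,
                                        long currTime, long expireSeconds) {
            try {
                String contentToBeSigned = "TLS.identifier:" + userId + "\n"
                        + "TLS.sdkappid:" + sdkAppId + "\n"
                        + "TLS.time:" + currTime + "\n"
                        + "TLS.expire:" + expireSeconds + "\n";
                Mac hmac = Mac.getInstance("HmacSHA256");
                hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
                String sig = Base64.getEncoder()
                        .encodeToString(hmac.doFinal(contentToBeSigned.getBytes(StandardCharsets.UTF_8)));

                // JSON assembled by hand to keep the sketch dependency-free
                String json = "{\"TLS.ver\":\"2.0\",\"TLS.identifier\":\"" + userId + "\""
                        + ",\"TLS.sdkappid\":" + sdkAppId
                        + ",\"TLS.expire\":" + expireSeconds
                        + ",\"TLS.time\":" + currTime
                        + ",\"TLS.sig\":\"" + sig + "\"}";

                // zlib-compress the JSON document
                Deflater deflater = new Deflater();
                deflater.setInput(json.getBytes(StandardCharsets.UTF_8));
                deflater.finish();
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                byte[] buf = new byte[1024];
                while (!deflater.finished()) {
                    bos.write(buf, 0, deflater.deflate(buf));
                }
                deflater.end();

                // URL-safe base64 variant used by the official libraries: + -> *, / -> -, = -> _
                return Base64.getEncoder().encodeToString(bos.toByteArray())
                        .replace('+', '*').replace('/', '-').replace('=', '_');
            } catch (Exception e) {
                throw new RuntimeException("UserSig generation failed", e);
            }
        }
    }
    ```

    The resulting string is what the app passes as TRTCParams.userSig; in production, this computation must stay on your server so the SDKSecretKey never ships inside the client.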
    Before using Beauty Special Effect, you need to verify the License credential with the cloud platform. Configuring the License requires the License Key and License URL. Sample code is as follows.
    import com.tencent.xmagic.telicense.TELicenseCheck;
    
    // If the purpose is just to trigger the download or update of the License, and you do not care about the authentication result, pass null for the fourth parameter
    TELicenseCheck.getInstance().setTELicense(context, URL, KEY, new TELicenseCheck.TELicenseCheckListener() {
    @Override
    public void onLicenseCheckFinish(int errorCode, String msg) {
    // Note: This callback is not necessarily invoked on the calling thread
    if (errorCode == TELicenseCheck.ERROR_OK) {
    // Authentication successful
    } else {
    // Authentication failed
    }
    
    }
    });
    Note:
    It is recommended to trigger License authentication in the initialization code of related business modules, to avoid having to download the License just before use. Also, network access must be available during authentication.
    The actual application's Package Name must exactly match the Package Name associated with the License creation. Otherwise, it will lead to License verification failure. For details, see Authentication Error Codes.
    The live streaming and on-demand playback features require the License to be set before playback can succeed; otherwise, playback fails (black screen). The License only needs to be set globally once. If you have not obtained a License, you can apply for a free Trial Version License for normal playback; the Official Version License requires purchase. After successfully applying for a License, you will receive two strings: the License URL and License Key.
    Before your App calls the SDK-related features, you need to configure as follows (recommended to configure in the Application class):
    public class MApplication extends Application {
    @Override
    public void onCreate() {
    super.onCreate();
    String licenceURL = ""; // The obtained licence URL
    String licenceKey = ""; // The obtained licence key
    Context appContext = getApplicationContext();
    TXLiveBase.getInstance().setLicence(appContext, licenceURL, licenceKey);
    TXLiveBase.setListener(new TXLiveBaseListener() {
    @Override
    public void onLicenceLoaded(int result, String reason) {
    Log.i(TAG, "onLicenceLoaded: result:" + result + ", reason:" + reason);
    if (result != 0) {
    // If the result is not 0, it means the setting has failed, and you need to retry
    TXLiveBase.getInstance().setLicence(appContext, licenceURL, licenceKey);
    }
    }
    });
    }
    }
    After the License is successfully set (you need to wait for a while, the specific time depends on the network conditions), you can use the following method to view the License information:
    TXLiveBase.getInstance().getLicenceInfo();
    Note:
    The actual application's Package Name must exactly match the Package Name associated with the License creation. Otherwise, it will lead to License verification failure.
    License verification is performed strictly online. When TXLiveBase#setLicence is called for the first time after the application starts, the network must be available. At the first launch of the App, if network permission has not yet been granted, wait until it is granted and then call TXLiveBase#setLicence again.
    Listen for the loading result via the onLicenceLoaded callback of TXLiveBase#setLicence: if it fails, retry as appropriate for the actual situation. If it fails repeatedly, limit the retry frequency and add product pop-ups or other prompts guiding users to check their network conditions.
    TXLiveBase#setLicence can be called multiple times. It is recommended to call TXLiveBase#setLicence when entering the main interface of the App to ensure successful loading.
    For multi-process Apps, ensure that every process using the player calls TXLiveBase#setLicence when it starts. For example, for Apps on the Android side that use a separate process for video playback, when the process is killed and restarted by the system during background playback, TXLiveBase#setLicence should also be called.
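    The retry guidance above can be sketched as a small helper that caps the number of attempts and backs off between retries. This is a plain-Java illustration; the class name and thresholds are made up for this example, and the actual retry call would be TXLiveBase#setLicence.

    ```java
    public class LicenceRetryPolicy {
        private final int maxAttempts;
        private final long baseDelayMs;
        private int attempts = 0;

        public LicenceRetryPolicy(int maxAttempts, long baseDelayMs) {
            this.maxAttempts = maxAttempts;
            this.baseDelayMs = baseDelayMs;
        }

        // Called from onLicenceLoaded when result != 0. Returns the delay in ms
        // before the next setLicence retry, or -1 when retries are exhausted and
        // the app should fall back to a user-facing prompt (e.g., "check your network").
        public long nextRetryDelayMs() {
            if (attempts >= maxAttempts) {
                return -1;
            }
            long delay = baseDelayMs << attempts; // exponential backoff: base * 2^attempts
            attempts++;
            return delay;
        }

        // Call when onLicenceLoaded reports success (result == 0)
        public void reset() {
            attempts = 0;
        }
    }
    ```

    A handler or coroutine can then schedule the retry after the returned delay, and stop retrying (showing a prompt instead) once -1 is returned.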

    Step 5: initialize the SDK

    Initialize the TRTC SDK
    Initialize the Special Effect SDK
    Initialize Player SDK
    // Create TRTC SDK instance (single instance pattern)
    TRTCCloud mTRTCCloud = TRTCCloud.sharedInstance(context);
    // Set event listeners
    mTRTCCloud.addListener(trtcSdkListener);
    
    // Notifications from various SDK events (e.g., error codes, warning codes, audio and video status parameters, etc.)
    private TRTCCloudListener trtcSdkListener = new TRTCCloudListener() {
    @Override
    public void onError(int errCode, String errMsg, Bundle extraInfo) {
    Log.d(TAG, errCode + errMsg);
    }
    @Override
    public void onWarning(int warningCode, String warningMsg, Bundle extraInfo) {
    Log.d(TAG, warningCode + warningMsg);
    }
    };
    
    // Remove event listener
    mTRTCCloud.removeListener(trtcSdkListener);
    // Destroy TRTC SDK instance (single instance pattern)
    TRTCCloud.destroySharedInstance();
    Note:
    It is recommended to listen for SDK event notifications, and to log and handle common errors. For details, see Error Code Table.
    import com.tencent.xmagic.XmagicApi;
    
    // Initialize the beauty SDK (mErrorListener is your implementation of XmagicApi.OnXmagicPropertyErrorListener)
    XmagicApi mXmagicApi = new XmagicApi(context, XmagicResParser.getResPath(), mErrorListener);
    
    // During development and debugging, you can set the log level to DEBUG. For release packages, set it to WARN to avoid impacting performance
    mXmagicApi.setXmagicLogLevel(Log.WARN);
    
    // Release the beauty SDK. This method needs to be called in the GL thread
    mXmagicApi.onDestroy();
    Note:
    Before the Special Effect SDK is initialized, resource copying and other preparatory work are needed. For detailed steps, see Using the Special Effect SDK.
    On-demand Playback Scenario SDK Initialization.
    // Set the SDK connection environment (if you serve global users, configure the SDK connection environment for global connection)
    TXLiveBase.setGlobalEnv("GDPR");
    
    // Create a Player object
    TXVodPlayer mVodPlayer = new TXVodPlayer(mContext);
    
    // Add a View control for video rendering
    TXCloudVideoView mPlayerView = findViewById(R.id.video_view);
    // Associate the Player object with the View control
    mVodPlayer.setPlayerView(mPlayerView);
    
    // Player parameter configuration
    TXVodPlayConfig config = new TXVodPlayConfig();
    config.setEnableAccurateSeek(true); // Set whether to seek accurately. The default value is true
    config.setMaxCacheItems(5); // Set the number of cache files to 5
    config.setProgressInterval(200); // Set the interval for progress callbacks, in milliseconds
    config.setMaxBufferSize(50); // The maximum pre-load size, in MB
    mVodPlayer.setConfig(config); // Pass config to mVodPlayer
    
    // Player event listener
    mVodPlayer.setVodListener(new ITXVodPlayListener() {
    @Override
    public void onPlayEvent(TXVodPlayer player, int event, Bundle param) {
    // Event notification
    }
    
    @Override
    public void onNetStatus(TXVodPlayer player, Bundle bundle) {
    // Status feedback
    }
    });
    Live Streaming Scenarios SDK initialization.
    // The TXCloudVideoView for video rendering needs to be added in advance
    TXCloudVideoView mRenderView = findViewById(R.id.video_view);
    // Create a Player object
    V2TXLivePlayer mLivePlayer = new V2TXLivePlayerImpl(mContext);
    // Associate the Player object with the video rendering view
    mLivePlayer.setRenderView(mRenderView);
    
    // Player event listener
    mLivePlayer.setObserver(new V2TXLivePlayerObserver() {
    @Override
    public void onVideoLoading(V2TXLivePlayer player, Bundle extraInfo) {
    // Video loading event
    }
    @Override
    public void onVideoPlaying(V2TXLivePlayer player, boolean firstPlay, Bundle extraInfo) {
    // Video playback event
    }
    });

    Integration Process

    API Sequence Diagram

    
    
    

    Step 1: The anchor enters the room to push streams

    The control used by the TRTC SDK to display video streams only supports passing in a TXCloudVideoView type. Therefore, you need to first define the view rendering control in the layout file.
    <com.tencent.rtmp.ui.TXCloudVideoView
    android:id="@+id/live_cloud_view_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
    Note:
    If you need to specifically use TextureView or SurfaceView as the view rendering control, see Advanced Features - View Rendering Control.
    1. The anchor activates local video preview and audio capture before entering the room.
    // Obtain the video rendering control for displaying the anchor's local video preview
    TXCloudVideoView mTxcvvAnchorPreviewView = findViewById(R.id.live_cloud_view_main);
    
    // Set video encoding parameters to determine the picture quality seen by remote users
    TRTCCloudDef.TRTCVideoEncParam encParam = new TRTCCloudDef.TRTCVideoEncParam();
    encParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_960_540;
    encParam.videoFps = 15;
    encParam.videoBitrate = 1300;
    encParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
    mTRTCCloud.setVideoEncoderParam(encParam);
    
    // boolean mIsFrontCamera can specify using the front/rear camera for video capture
    mTRTCCloud.startLocalPreview(mIsFrontCamera, mTxcvvAnchorPreviewView);
    
    // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    If you call the above APIs before enterRoom, the SDK only starts the camera preview and audio capture and waits until you call enterRoom to start streaming.
    If you call the above APIs after enterRoom, the SDK starts the camera preview and audio capture and immediately starts streaming.
    2. The anchor sets rendering parameters for the local video, and the encoder output video mode (optional).
    TRTCCloudDef.TRTCRenderParams params = new TRTCCloudDef.TRTCRenderParams();
    params.mirrorType = TRTCCloudDef.TRTC_VIDEO_MIRROR_TYPE_AUTO; // Video mirror mode
    params.fillMode = TRTCCloudDef.TRTC_VIDEO_RENDER_MODE_FILL; // Video fill mode
    params.rotation = TRTCCloudDef.TRTC_VIDEO_ROTATION_0; // Video rotation angle
    // Set the rendering parameters for the local video
    mTRTCCloud.setLocalRenderParams(params);
    
    // Set the video mirror mode for the encoder output (boolean mirror)
    mTRTCCloud.setVideoEncoderMirror(mirror);
    // Set the rotation of the video encoder output (a TRTC_VIDEO_ROTATION_* constant)
    mTRTCCloud.setVideoEncoderRotation(rotation);
    Note:
    Setting local screen rendering parameters only affects the rendering effect of the local screen.
    Setting the encoder output mode affects the viewing effect for other users in the room (as well as the cloud recording files).
    3. The anchor starts the live streaming: enter the room and start streaming.
    public void enterRoomByAnchor(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID
    params.sdkAppId = SDKAppID;
    // Specify the anchor role
    params.role = TRTCCloudDef.TRTCRoleAnchor;
    // Enter the room in an interactive live streaming scenario
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Event callback for the result of entering the room
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // The result represents the time taken to join the room (in milliseconds)
    Log.d(TAG, "Enter room succeed");
    } else {
    // The result represents the error code when you fail to enter the room
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    TRTC room IDs are divided into digit type roomId and string type strRoomId. The rooms of these two types are not interconnected. It is recommended to unify the room ID type.
    TRTC user roles are divided into anchors and audiences. Only anchors have streaming permissions. It is necessary to specify the user role when entering the room. If not specified, the default will be the anchor role.
    In e-commerce live streaming scenarios, it is recommended to choose TRTC_APP_SCENE_LIVE as the room entry mode.

    Step 2: The audience enters the room to pull streams

    1. Audience enters the TRTC room.
    public void enterRoomByAudience(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID
    params.sdkAppId = SDKAppID;
    // Specify the audience role
    params.role = TRTCCloudDef.TRTCRoleAudience;
    // Enter the room in an interactive live streaming scenario
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Event callback for the result of entering the room
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // The result represents the time taken to join the room (in milliseconds)
    Log.d(TAG, "Enter room succeed");
    } else {
    // The result represents the error code when you fail to enter the room
    Log.d(TAG, "Enter room failed");
    }
    }
    2. Audience subscribes to the anchor's audio and video streams.
    @Override
    public void onUserAudioAvailable(String userId, boolean available) {
    // The remote user publishes/unpublishes their audio
    // Under the automatic subscription mode, you do not need to do anything. The SDK will automatically play the remote user's audio
    }
    
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
    // The remote user publishes/unpublishes the primary video
    if (available) {
    // Subscribe to the remote user's video stream and bind the video rendering control (view is your TXCloudVideoView)
    mTRTCCloud.startRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, view);
    } else {
    // Unsubscribe from the remote user's video stream and release the rendering control
    mTRTCCloud.stopRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);
    }
    }
    3. Audience sets the rendering mode for the remote video (optional).
    TRTCCloudDef.TRTCRenderParams params = new TRTCCloudDef.TRTCRenderParams();
    params.mirrorType = TRTCCloudDef.TRTC_VIDEO_MIRROR_TYPE_AUTO; // Video mirror mode
    params.fillMode = TRTCCloudDef.TRTC_VIDEO_RENDER_MODE_FILL; // Video fill mode
    params.rotation = TRTCCloudDef.TRTC_VIDEO_ROTATION_0; // Video rotation angle
    // Set the rendering mode for the remote video
    mTRTCCloud.setRemoteRenderParams(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, params);

    Step 3: The audience interacts via mic-connection

    1. The audience switches to the anchor role.
    // Switch to the anchor role
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
    
    // Event callback for switching role
    @Override
    public void onSwitchRole(int errCode, String errMsg) {
    if (errCode == TXLiteAVCode.ERR_NULL) {
    // Role switched successfully
    }
    }
    2. The audience starts local audio and video capture and streaming.
    // Obtain the video rendering control for displaying the mic-connection audience's local video preview
    TXCloudVideoView mTxcvvAudiencePreviewView = findViewById(R.id.live_cloud_view_sub);
    
    // Set video encoding parameters to determine the picture quality seen by remote users
    TRTCCloudDef.TRTCVideoEncParam encParam = new TRTCCloudDef.TRTCVideoEncParam();
    encParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_480_270;
    encParam.videoFps = 15;
    encParam.videoBitrate = 550;
    encParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
    mTRTCCloud.setVideoEncoderParam(encParam);
    
    // boolean mIsFrontCamera can specify using the front/rear camera for video capture
    mTRTCCloud.startLocalPreview(mIsFrontCamera, mTxcvvAudiencePreviewView);
    
    // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    3. The audience drops the mic and stops streaming.
    // Switch to the audience role
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    
    // Event callback for switching role
    @Override
    public void onSwitchRole(int errCode, String errMsg) {
    if (errCode == TXLiteAVCode.ERR_NULL) {
    // Stop camera capture and streaming
    mTRTCCloud.stopLocalPreview();
    // Stop microphone capture and streaming
    mTRTCCloud.stopLocalAudio();
    }
    }

    Step 4: Exit and dissolve the room

    1. Exit Room
    public void exitRoom() {
    mTRTCCloud.stopLocalAudio();
    mTRTCCloud.stopLocalPreview();
    mTRTCCloud.exitRoom();
    }
    
    // Event callback for exiting the room
    @Override
    public void onExitRoom(int reason) {
    if (reason == 0) {
    Log.d(TAG, "Actively call exitRoom to exit the room");
    } else if (reason == 1) {
    Log.d(TAG, "Removed from the current room by the server");
    } else if (reason == 2) {
    Log.d(TAG, "The current room has been dissolved");
    }
    }
    Note:
    After all resources occupied by the SDK are released, the SDK triggers the onExitRoom callback to notify you.
    If you wish to call enterRoom again or switch to another audio and video SDK, wait for the onExitRoom callback before proceeding. Otherwise, you may encounter exceptional issues such as the camera or microphone being forcibly occupied.
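    The "wait for onExitRoom" rule can be modeled as a tiny gate that defers the next room entry until the exit callback fires. This is a plain-Java sketch; the class name and wiring are illustrative, with the real enterRoom call left as a Runnable.

    ```java
    import java.util.concurrent.atomic.AtomicReference;

    public class RoomSwitchGate {
        private final AtomicReference<Runnable> pendingEnter = new AtomicReference<>();
        private volatile boolean inRoom;

        public RoomSwitchGate(boolean initiallyInRoom) {
            this.inRoom = initiallyInRoom;
        }

        // Request to enter a room: runs immediately if we are not in a room,
        // otherwise defers until onExitRoom() is delivered by the SDK.
        public void requestEnter(Runnable enterRoomCall) {
            if (inRoom) {
                pendingEnter.set(enterRoomCall); // wait for the SDK's onExitRoom callback
            } else {
                inRoom = true;
                enterRoomCall.run();
            }
        }

        // Call from TRTCCloudListener.onExitRoom: SDK resources are now released,
        // so it is safe to perform the deferred enterRoom.
        public void onExitRoom() {
            inRoom = false;
            Runnable next = pendingEnter.getAndSet(null);
            if (next != null) {
                inRoom = true;
                next.run();
            }
        }
    }
    ```

    In practice, requestEnter would wrap the TRTCParams setup plus mTRTCCloud.enterRoom, and the gate's onExitRoom would be invoked from the listener's onExitRoom callback.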
    2. Dissolve Room
    Server dissolves the room
    TRTC provides the server-side API DismissRoom for dissolving digit-type rooms, and DismissRoomByStrRoomId for dissolving string-type rooms. You can call these server-side APIs to remove all users from the room and dissolve the room.
    Client dissolves the room
    The client does not have an API to directly dissolve the room. Each client needs to call exitRoom to exit. Once all anchors and audiences have exited, the room is automatically dissolved according to TRTC's room lifecycle rules. For more details, see TRTC Exits Room.
    Warning:
    It is recommended that after the live streaming ends, you call the room dissolution API on the server to ensure the room is dissolved. This prevents audiences from accidentally entering the room and incurring unexpected charges.

    Alternative Solutions

    API Sequence Diagram

    
    
    

    Step 1: The anchor relays stream pushing

    1. Related configurations for relaying to live streaming CDN.
    Global Automatic Relayed Push
    If you need to automatically relay all anchors' audio and video streams in the room to live streaming CDN, you just need to enable Relay to CDN on the Advanced Features page in the TRTC Console.
    
    
    
    Relayed Push of the Specified Streams
    If you need to manually specify the audio and video streams to be published to live streaming CDN, or publish the mixed audio and video streams to live streaming CDN, you can do so by calling the startPublishMediaStream API. In this case, you do not need to activate global automatically relaying to CDN in the console. For a detailed introduction, see Publish Audio and Video Streams to Live Streaming CDN.
    2. The anchor activates local video preview and audio capture before entering the room.
    The control used by the TRTC SDK to display video streams only supports passing in a TXCloudVideoView type. Therefore, you need to first define the view rendering control in the layout file.
    <com.tencent.rtmp.ui.TXCloudVideoView
    android:id="@+id/live_cloud_view_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
    Note:
    If you need to specifically use TextureView or SurfaceView as the view rendering control, see Advanced Features - View Rendering Control.
    // Obtain the video rendering control for displaying the anchor's local video preview
    TXCloudVideoView mTxcvvAnchorPreviewView = findViewById(R.id.live_cloud_view_main);
    
    // Set video encoding parameters to determine the picture quality seen by remote users
    TRTCCloudDef.TRTCVideoEncParam encParam = new TRTCCloudDef.TRTCVideoEncParam();
    encParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_960_540;
    encParam.videoFps = 15;
    encParam.videoBitrate = 1300;
    encParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
    mTRTCCloud.setVideoEncoderParam(encParam);
    
    // boolean mIsFrontCamera can specify using the front/rear camera for video capture
    mTRTCCloud.startLocalPreview(mIsFrontCamera, mTxcvvAnchorPreviewView);
    
    // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    If you call the above APIs before enterRoom, the SDK will only start the camera preview and audio capture, and wait until you call enterRoom before it starts streaming.
    If you call them after enterRoom, the SDK will start the camera preview and audio capture and immediately begin streaming.
    3. The anchor sets rendering parameters for the local screen, and the encoder output video mode.
    TRTCCloudDef.TRTCRenderParams params = new TRTCCloudDef.TRTCRenderParams();
    params.mirrorType = TRTCCloudDef.TRTC_VIDEO_MIRROR_TYPE_AUTO; // Video mirror mode
    params.fillMode = TRTCCloudDef.TRTC_VIDEO_RENDER_MODE_FILL; // Video fill mode
    params.rotation = TRTCCloudDef.TRTC_VIDEO_ROTATION_0; // Video rotation angle
    // Set the rendering parameters for the local video
    mTRTCCloud.setLocalRenderParams(params);
    
    // Set the video mirror mode for the encoder output
    mTRTCCloud.setVideoEncoderMirror(boolean mirror);
    // Set the rotation of the video encoder output
    mTRTCCloud.setVideoEncoderRotation(int rotation);
    Note:
    Setting local screen rendering parameters only affects the rendering effect of the local screen.
    Setting encoder output pattern affects the viewing effect for other users in the room (as well as the cloud recording files).
    4. The anchor starts live streaming: enters the room and begins pushing streams.
    public void enterRoomByAnchor(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID
    params.sdkAppId = SDKAppID;
    // Specify the anchor role
    params.role = TRTCCloudDef.TRTCRoleAnchor;
    // Enter the room in an interactive live streaming scenario
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Event callback for the result of entering the room
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // The result represents the time taken to join the room (in milliseconds)
    Log.d(TAG, "Enter room succeed");
    } else {
    // The result represents the error code when you fail to enter the room
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    TRTC room IDs are divided into digit type roomId and string type strRoomId. The rooms of these two types are not interconnected. It is recommended to unify the room ID type.
    TRTC user roles are divided into anchors and audiences. Only anchors have streaming permissions. It is necessary to specify the user role when entering the room. If not specified, the default will be the anchor role.
    In e-commerce live streaming scenarios, it is recommended to choose TRTC_APP_SCENE_LIVE as the room entry mode.
    5. The anchor relays the audio and video streams to the live streaming CDN.
    public void startPublishMediaToCDN(String streamName) {
    // Set the expiration time for the push URLs
    long txTime = (System.currentTimeMillis() / 1000) + (24 * 60 * 60);
    // Generate authentication information. The getSafeUrl method can be obtained in the CSS console - Domain Name Management - Push Configuration - Sample Code for Push URLs
    String secretParam = UrlHelper.getSafeUrl(LIVE_URL_KEY, streamName, txTime);
    
    // The target URLs for media stream publication
    TRTCCloudDef.TRTCPublishTarget target = new TRTCCloudDef.TRTCPublishTarget();
    // The target URLs are set for relaying to CDN
    target.mode = TRTCCloudDef.TRTC_PublishBigStream_ToCdn;
    TRTCCloudDef.TRTCPublishCdnUrl cdnUrl = new TRTCCloudDef.TRTCPublishCdnUrl();
    // Construct push URLs (in RTMP format) to the live streaming service provider
    cdnUrl.rtmpUrl = "rtmp://" + PUSH_DOMAIN + "/live/" + streamName + "?" + secretParam;
    // True means the cloud platform CSS, and false means third-party live streaming services
    cdnUrl.isInternalLine = true;
    // Multiple CDN push URLs can be added
    target.cdnUrlList.add(cdnUrl);
    
    // Set media stream encoding output parameters (can be defined according to business needs)
    TRTCCloudDef.TRTCStreamEncoderParam trtcStreamEncoderParam = new TRTCCloudDef.TRTCStreamEncoderParam();
    trtcStreamEncoderParam.audioEncodedChannelNum = 1;
    trtcStreamEncoderParam.audioEncodedKbps = 50;
    trtcStreamEncoderParam.audioEncodedCodecType = 0;
    trtcStreamEncoderParam.audioEncodedSampleRate = 48000;
    trtcStreamEncoderParam.videoEncodedFPS = 15;
    trtcStreamEncoderParam.videoEncodedGOP = 2;
    trtcStreamEncoderParam.videoEncodedKbps = 1300;
    trtcStreamEncoderParam.videoEncodedWidth = 540;
    trtcStreamEncoderParam.videoEncodedHeight = 960;
    
    // Start publishing media stream
    mTRTCCloud.startPublishMediaStream(target, trtcStreamEncoderParam, null);
    }
    Note:
    During single-anchor live streaming, initiate only the relayed push task. When there is audience mic-connection or an anchor PK, update this task to a mixed-stream transcoding task.
    Information of push authentication KEY LIVE_URL_KEY and push domain name PUSH_DOMAIN can be obtained on the Domain Name Management page in the CSS Console.
    After the media stream is published, the SDK will provide the task identifier (taskId) initiated by the backend through the onStartPublishMediaStream callback.
    @Override
    public void onStartPublishMediaStream(String taskId, int code, String message, Bundle extraInfo) {
    // taskId: When the request is successful, TRTC backend will provide the taskId of this task in the callback. You can later use this taskId with updatePublishMediaStream and stopPublishMediaStream to update and stop
    // code: Callback result. 0 means success and other values mean failure
    }
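    The getSafeUrl helper referenced above is provided as sample code in the console, and follows the standard CSS push URL authentication scheme: txTime is the expiration timestamp in uppercase hex, and txSecret is MD5(authKey + streamName + txTime). The sketch below is a minimal standalone version under that assumption (the class name UrlAuth is illustrative; verify the algorithm against the sample code in your console):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class UrlAuth {
    // Builds the "txSecret=...&txTime=..." query string for an authenticated push URL.
    // Assumes the standard CSS scheme: txSecret = MD5(authKey + streamName + hex(txTime))
    public static String getSafeUrl(String authKey, String streamName, long txTimeSec) {
        try {
            String txTime = Long.toHexString(txTimeSec).toUpperCase();
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest((authKey + streamName + txTime).getBytes(StandardCharsets.UTF_8));
            StringBuilder secret = new StringBuilder();
            for (byte b : digest) {
                secret.append(String.format("%02x", b));
            }
            return "txSecret=" + secret + "&txTime=" + txTime;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Expire the URL 24 hours from now, matching the txTime computation above
        long txTime = (System.currentTimeMillis() / 1000) + 24 * 60 * 60;
        System.out.println(getSafeUrl("demo_key", "stream001", txTime));
    }
}
```

    The resulting string is appended to the RTMP push URL after "?", as shown in startPublishMediaToCDN.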

    Step 2: The audience pulls streams for playback

    CDN audience do not need to enter the TRTC room; they can directly pull the anchor's CDN stream for playback. In the live streaming playback scenario, see Initialize SDK for player initialization steps.
    // Set delay management mode (optional)
    mLivePlayer.setCacheParams(1.0f, 5.0f); // Auto mode
    mLivePlayer.setCacheParams(1.0f, 1.0f); // Speed mode
    mLivePlayer.setCacheParams(5.0f, 5.0f); // Smooth mode
    
    // Concatenate the pull URLs for playback
    String flvURL = "http://" + PLAY_DOMAIN + "/live/" + streamName + ".flv"; // FLV URL
    String hlsURL = "http://" + PLAY_DOMAIN + "/live/" + streamName + ".m3u8"; // HLS URL
    String rtmpURL = "rtmp://" + PLAY_DOMAIN + "/live/" + streamName; // RTMP URL
    String webrtcURL = "webrtc://" + PLAY_DOMAIN + "/live/" + streamName; // WebRTC URL
    
    // Start playing
    mLivePlayer.startLivePlay(flvURL);
    
    // Custom set fill mode (optional)
    mLivePlayer.setRenderFillMode(V2TXLiveFillModeFit);
    // Customize video rendering direction (optional)
    mLivePlayer.setRenderRotation(V2TXLiveRotation0);
    Note:
    The playback domain name PLAY_DOMAIN requires you to add your own domain in the CSS console for live streaming playback and to configure its CNAME record.
    To use live playback, you need to configure the player's License authorization in advance; otherwise playback will fail (black screen). For details, see Authentication and Authorization.
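    The License is typically configured once at app startup, before any player is created. A minimal sketch, assuming the TXLiveBase setLicence API and that LICENCE_URL and LICENCE_KEY are the values obtained from the console's License management page:

```java
// Call once before creating any player, e.g. in Application.onCreate()
// LICENCE_URL and LICENCE_KEY are placeholders for your console values
TXLiveBase.getInstance().setLicence(getApplicationContext(), LICENCE_URL, LICENCE_KEY);
```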

    Step 3: The audience interacts via mic-connection

    1. The mic-connection audiences need to enter the TRTC room for real-time interaction with the anchor.
    // Enter the TRTC room and start streaming
    public void enterRoom(String roomId, String userId) {
    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    // Take the room ID string as an example
    params.strRoomId = roomId;
    params.userId = userId;
    // UserSig obtained from the business backend
    params.userSig = getUserSig(userId);
    // Replace with your SDKAppID
    params.sdkAppId = SDKAppID;
    // Specify the anchor role
    params.role = TRTCCloudDef.TRTCRoleAnchor;
    // Enable local audio and video capture
    startLocalMedia();
    // In an interactive live streaming scenario, enter the room and push streams
    mTRTCCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    
    // Enable local video preview and audio capture
    public void startLocalMedia() {
    // Obtain the video rendering control for displaying the mic-connection audience's local video preview
    TXCloudVideoView mTxcvvAudiencePreviewView = findViewById(R.id.live_cloud_view_sub);
    // Set video encoding parameters to determine the picture quality seen by remote users
    TRTCCloudDef.TRTCVideoEncParam encParam = new TRTCCloudDef.TRTCVideoEncParam();
    encParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_480_270;
    encParam.videoFps = 15;
    encParam.videoBitrate = 550;
    encParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
    mTRTCCloud.setVideoEncoderParam(encParam);
    // boolean mIsFrontCamera can specify using the front/rear camera for video capture
    mTRTCCloud.startLocalPreview(mIsFrontCamera, mTxcvvAudiencePreviewView);
    // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC
    mTRTCCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
    }
    
    // Event callback for the result of entering the room
    @Override
    public void onEnterRoom(long result) {
    if (result > 0) {
    // The result represents the time taken to join the room (in milliseconds)
    Log.d(TAG, "Enter room succeed");
    } else {
    // The result represents the error code when you fail to enter the room
    Log.d(TAG, "Enter room failed");
    }
    }
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    2. The mic-connection audience starts subscribing to the anchor's audio and video streams after successfully entering the room.
    @Override
    public void onUserAudioAvailable(String userId, boolean available) {
    // The remote user publishes/unpublishes their audio
    // Under the automatic subscription mode, you do not need to do anything. The SDK will automatically play the remote user's audio
    }
    
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
    // The remote user publishes/unpublishes the primary video
    if (available) {
    // Subscribe to the remote user's video stream and bind the video rendering control (remoteVideoView is your TXCloudVideoView)
    mTRTCCloud.startRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, remoteVideoView);
    } else {
    // Unsubscribe to the remote user's video stream and release the rendering control
    mTRTCCloud.stopRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);
    }
    }
    
    @Override
    public void onFirstVideoFrame(String userId, int streamType, int width, int height) {
    // The SDK starts rendering the first frame of the local or remote user's video
    if (!userId.isEmpty()) {
    // Stop playing the CDN stream upon receiving the first frame of the anchor's video
    mLivePlayer.stopPlay();
    }
    }
    Note:
    TRTC stream pulling startRemoteView can directly reuse the video rendering control previously used by the CDN stream pulling setRenderView.
    To avoid video interruptions when switching between stream pullers, it is recommended to wait until the TRTC first frame callback onFirstVideoFrame is received before stopping the CDN stream pulling.
    3. The anchor updates the publication of mixed media streams.
    // Event callback for the mic-connection audience's room entry
    @Override
    public void onRemoteUserEnterRoom(String userId) {
    if (!mixUserList.contains(userId)) {
    mixUserList.add(userId);
    }
    updatePublishMediaToCDN(streamName, mixUserList, taskId);
    }
    
    // Event callback for updating the media stream
    @Override
    public void onUpdatePublishMediaStream(String taskId, int code, String message, Bundle extraInfo) {
    // When you call the publish media stream API (updatePublishMediaStream), the taskId you provide will be returned to you through this callback. It is used to identify which update request the callback belongs to
    // code: Callback result. 0 means success and other values mean failure
    }
    
    // Update the publication of mixed media streams to the live streaming CDN
    public void updatePublishMediaToCDN(String streamName, List<String> mixUserList, String taskId) {
    // Set the expiration time for the push URLs
    long txTime = (System.currentTimeMillis() / 1000) + (24 * 60 * 60);
    // Generate authentication information. The getSafeUrl method can be obtained in the CSS console - Domain Name Management - Push Configuration - Sample Code for Push URLs
    String secretParam = UrlHelper.getSafeUrl(LIVE_URL_KEY, streamName, txTime);
    
    // The target URLs for media stream publication
    TRTCCloudDef.TRTCPublishTarget target = new TRTCCloudDef.TRTCPublishTarget();
    // The target URLs are set for relaying the mixed streams to CDN
    target.mode = TRTCCloudDef.TRTC_PublishMixStream_ToCdn;
    TRTCCloudDef.TRTCPublishCdnUrl cdnUrl = new TRTCCloudDef.TRTCPublishCdnUrl();
    // Construct push URLs (in RTMP format) to the live streaming service provider
    cdnUrl.rtmpUrl = "rtmp://" + PUSH_DOMAIN + "/live/" + streamName + "?" + secretParam;
    // True means the cloud platform CSS, and false means third-party live streaming services
    cdnUrl.isInternalLine = true;
    // Multiple CDN push URLs can be added
    target.cdnUrlList.add(cdnUrl);
    
    // Set media stream encoding output parameters
    TRTCCloudDef.TRTCStreamEncoderParam trtcStreamEncoderParam = new TRTCCloudDef.TRTCStreamEncoderParam();
    trtcStreamEncoderParam.audioEncodedChannelNum = 1;
    trtcStreamEncoderParam.audioEncodedKbps = 50;
    trtcStreamEncoderParam.audioEncodedCodecType = 0;
    trtcStreamEncoderParam.audioEncodedSampleRate = 48000;
    trtcStreamEncoderParam.videoEncodedFPS = 15;
    trtcStreamEncoderParam.videoEncodedGOP = 2;
    trtcStreamEncoderParam.videoEncodedKbps = 1300;
    trtcStreamEncoderParam.videoEncodedWidth = 540;
    trtcStreamEncoderParam.videoEncodedHeight = 960;
    // Configuration parameters for media stream transcoding
    TRTCCloudDef.TRTCStreamMixingConfig trtcStreamMixingConfig = new TRTCCloudDef.TRTCStreamMixingConfig();
    if (mixUserList != null) {
    ArrayList<TRTCCloudDef.TRTCUser> audioMixUserList = new ArrayList<>();
    ArrayList<TRTCCloudDef.TRTCVideoLayout> videoLayoutList = new ArrayList<>();
    for (int i = 0; i < mixUserList.size() && i < 16; i++) {
    TRTCCloudDef.TRTCUser user = new TRTCCloudDef.TRTCUser();
    // For integer room numbers, set user.intRoomId instead
    user.strRoomId = mRoomId;
    user.userId = mixUserList.get(i);
    audioMixUserList.add(user);
    TRTCCloudDef.TRTCVideoLayout videoLayout = new TRTCCloudDef.TRTCVideoLayout();
    if (mixUserList.get(i).equals(mUserId)) {
    // The layout for the anchor's video
    videoLayout.x = 0;
    videoLayout.y = 0;
    videoLayout.width = 540;
    videoLayout.height = 960;
    videoLayout.zOrder = 0;
    } else {
    // The layout for the mic-connection audience's video
    videoLayout.x = 400;
    videoLayout.y = 5 + i * 245;
    videoLayout.width = 135;
    videoLayout.height = 240;
    videoLayout.zOrder = 1;
    }
    videoLayout.fixedVideoUser = user;
    videoLayout.fixedVideoStreamType = TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG;
    videoLayoutList.add(videoLayout);
    }
    // Specify the information for each input audio stream in the transcoding stream
    trtcStreamMixingConfig.audioMixUserList = audioMixUserList;
    // Specify the information of position, size, layer, and stream type for each video screen in the mixed display
    trtcStreamMixingConfig.videoLayoutList = videoLayoutList;
    }
    
    // Update the published media stream
    mTRTCCloud.updatePublishMediaStream(taskId, target, trtcStreamEncoderParam, trtcStreamMixingConfig);
    }
    Note:
    To ensure continuous CDN playback without stream disconnection, you need to keep the media stream encoding output parameter trtcStreamEncoderParam and the stream name streamName unchanged.
    Media stream encoding output parameters and mixed display layout parameters can be customized according to business needs. Currently, up to 16 channels of audio and video input are supported. If a user only provides audio, it will still be counted as one channel.
    Switching between audio only, audio and video, and video only is not supported within the same task.
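    The hard-coded layout values in updatePublishMediaToCDN can be checked in isolation. The sketch below reproduces the same layout math as a standalone helper (pure Java; the class and method names are illustrative, not part of the SDK):

```java
import java.util.ArrayList;
import java.util.List;

public class MixLayout {
    // Returns {x, y, width, height, zOrder} for one video in the 540x960 mixed canvas
    public static int[] layoutFor(int index, String userId, String anchorId) {
        if (userId.equals(anchorId)) {
            // The anchor fills the whole canvas at the bottom layer
            return new int[]{0, 0, 540, 960, 0};
        }
        // Mic-connection audiences stack in a right-side column above the anchor
        return new int[]{400, 5 + index * 245, 135, 240, 1};
    }

    // Mirrors the loop in updatePublishMediaToCDN: at most 16 inputs
    public static List<int[]> layoutAll(List<String> mixUserList, String anchorId) {
        List<int[]> layouts = new ArrayList<>();
        for (int i = 0; i < mixUserList.size() && i < 16; i++) {
            layouts.add(layoutFor(i, mixUserList.get(i), anchorId));
        }
        return layouts;
    }

    public static void main(String[] args) {
        for (int[] r : layoutAll(List.of("anchor", "viewer_1", "viewer_2"), "anchor")) {
            System.out.println(r[0] + "," + r[1] + " " + r[2] + "x" + r[3] + " z=" + r[4]);
        }
    }
}
```

    Because the audience tiles are 135 wide at x = 400, they fit inside the 540-wide canvas; each additional audience is offset 245 px further down.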
    4. The audience ends mic-connection and exits the room, and the anchor updates the mixed stream task.
    // Reuse the TRTC video rendering control for the player (videoView is your TXCloudVideoView)
    mLivePlayer.setRenderView(videoView);
    // Restart playing CDN media stream
    mLivePlayer.startLivePlay(URL);
    
    // Callback for player event listener
    mLivePlayer.setObserver(new V2TXLivePlayerObserver() {
    @Override
    public void onVideoLoading(V2TXLivePlayer player, Bundle extraInfo) {
    // Video loading event
    }
    @Override
    public void onVideoPlaying(V2TXLivePlayer player, boolean firstPlay, Bundle extraInfo) {
    // Video playback event
    if (firstPlay) {
    mTRTCCloud.stopAllRemoteView();
    mTRTCCloud.stopLocalAudio();
    mTRTCCloud.stopLocalPreview();
    mTRTCCloud.exitRoom();
    }
    }
    });
    Note:
    To avoid video interruptions when switching the stream puller, it is recommended to wait for the player's video playback event onVideoPlaying before exiting the TRTC room.
    // Event callback for the mic-connection audience's room exit
    @Override
    public void onRemoteUserLeaveRoom(String userId, int reason) {
    if (mixUserList.contains(userId)) {
    mixUserList.remove(userId);
    }
    // The anchor updates the mixed stream task
    updatePublishMediaToCDN(streamName, mixUserList, taskId);
    }
    
    // Event callback for updating the media stream
    @Override
    public void onUpdatePublishMediaStream(String taskId, int code, String message, Bundle extraInfo) {
    // When you call the publish media stream API (updatePublishMediaStream), the taskId you provide will be returned to you through this callback. It is used to identify which update request the callback belongs to
    // code: Callback result. 0 means success and other values mean failure
    }

    Step 4: The anchor stops the live streaming and exits the room

    public void exitRoom() {
    // Stop all published media streams
    mTRTCCloud.stopPublishMediaStream("");
    mTRTCCloud.stopLocalAudio();
    mTRTCCloud.stopLocalPreview();
    mTRTCCloud.exitRoom();
    }
    
    // Event callback for stopping media streams
    @Override
    public void onStopPublishMediaStream(String taskId, int code, String message, Bundle extraInfo) {
    // When you call stopPublishMediaStream, the taskId you provide will be returned to you through this callback. It is used to identify which stop request the callback belongs to
    // code: Callback result. 0 means success and other values mean failure
    }
    
    // Event callback for exiting the room
    @Override
    public void onExitRoom(int reason) {
    if (reason == 0) {
    Log.d(TAG, "Actively call exitRoom to exit the room");
    } else if (reason == 1) {
    Log.d(TAG, "Removed from the current room by the server");
    } else if (reason == 2) {
    Log.d(TAG, "The current room has been dissolved");
    }
    }
    Note:
    To stop publishing media streams, enter an empty string for taskId. This will stop all the media streams you have published.
    After all resources occupied by the SDK are released, the SDK will fire the onExitRoom callback to notify you.

    Advanced Features

    Product Information Pop-up

    The Product Information Pop-up feature can be implemented through IM Custom Message or SEI Information. The two implementation methods are described below.

    Custom Message

    Custom messages depend on Instant Messaging (IM). You need to activate the service and import the IM SDK in advance. For detailed guidelines, see Voice Chat Room Connection Guide - Connection Preparation.
    1. Send Custom Messages
    Method 1: The anchor sends product pop-up related custom group messages on the client.
    // Construct product pop-up message body
    JSONObject jsonObject = new JSONObject();
    try {
    jsonObject.put("cmd", "item_popup_msg");
    JSONObject msgJsonObject = new JSONObject();
    msgJsonObject.put("itemNumber", 1); // Item number
    msgJsonObject.put("itemPrice", 199.0); // Item price
    msgJsonObject.put("itemTitle", "xxx"); // Item title
    msgJsonObject.put("itemUrl", "xxx");// Item URL
    jsonObject.put("msg", msgJsonObject);
    } catch (JSONException e) {
    e.printStackTrace();
    }
    String data = jsonObject.toString();
    
    // Send custom group messages (it is recommended that product pop-up messages should be set to high priority)
    V2TIMManager.getInstance().sendGroupCustomMessage(data.getBytes(), mRoomId,
    V2TIMMessage.V2TIM_PRIORITY_HIGH, new V2TIMValueCallback<V2TIMMessage>() {
    @Override
    public void onError(int i, String s) {
    // Failed to send product pop-up message
    }
    
    @Override
    public void onSuccess(V2TIMMessage v2TIMMessage) {
    // Successfully sent product pop-up message
    // Render the product pop-up effect locally
    }
    });
    Method 2: Backend operators send product pop-up custom group messages from the server.
    Request URL sample:
    https://xxxxxx/v4/group_open_http_svc/send_group_msg?sdkappid=88888888&identifier=admin&usersig=xxx&random=99999999&contenttype=json
    Request packet body sample:
    {
    "GroupId": "@TGS#12DEVUDHQ",
    "Random": 2784275388,
    "MsgPriority": "High", // The priority of the message. It is recommended to set product pop-up messages to high priority
    "MsgBody": [
    {
    "MsgType": "TIMCustomElem",
    "MsgContent": {
    // itemNumber: item number; itemPrice: item price; itemTitle: item title; itemUrl: item URL
    "Data": "{\"cmd\": \"item_popup_msg\", \"msg\": {\"itemNumber\": 1, \"itemPrice\": 199.0, \"itemTitle\": \"xxx\", \"itemUrl\": \"xxx\"}}"
    }
    }
    ]
    }
    2. Receive Custom Messages
    Other users in the room receive callback for custom group messages, then proceed with message parsing and product pop-up effect rendering.
    // Custom group messages received
    V2TIMManager.getInstance().addSimpleMsgListener(new V2TIMSimpleMsgListener() {
    @Override
    public void onRecvGroupCustomMessage(String msgID, String groupID, V2TIMGroupMemberInfo sender, byte[] customData) {
    String customStr = new String(customData);
    if (!customStr.isEmpty()) {
    try {
    JSONObject jsonObject = new JSONObject(customStr);
    String command = jsonObject.getString("cmd");
    JSONObject messageJsonObject = jsonObject.getJSONObject("msg");
    if (command.equals("item_popup_msg")) {
    int itemNumber = messageJsonObject.getInt("itemNumber"); // Item number
    double itemPrice = messageJsonObject.getDouble("itemPrice"); // Item price
    String itemTitle = messageJsonObject.getString("itemTitle"); // Item title
    String itemUrl = messageJsonObject.getString("itemUrl"); // Item URL
    // Render product pop-up effect based on item number, item price, item title, and item URL
    }
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    }
    });

    SEI Information

    SEI information is inserted into the anchor's video stream for transmission, achieving precise synchronization between the product information pop-up and the anchor's live streaming.
    1. Send SEI Information
    The anchor sends SEI messages related to product pop-up on the TRTC client.
    // Construct product pop-up message body
    JSONObject jsonObject = new JSONObject();
    try {
    jsonObject.put("cmd", "item_popup_msg");
    JSONObject msgJsonObject = new JSONObject();
    msgJsonObject.put("itemNumber", 1); // Item number
    msgJsonObject.put("itemPrice", 199.0); // Item price
    msgJsonObject.put("itemTitle", "xxx"); // Item title
    msgJsonObject.put("itemUrl", "xxx");// Item URL
    jsonObject.put("msg", msgJsonObject);
    } catch (JSONException e) {
    e.printStackTrace();
    }
    String data = jsonObject.toString();
    
    // Send SEI information
    mTRTCCloud.sendSEIMsg(data.getBytes(), 1);
    2. Receive SEI Information
    Method 1: The audience receives SEI messages on the TRTC client, then proceeds with message parsing and product pop-up effect rendering.
    mTRTCCloud.setListener(new TRTCCloudListener() {
    @Override
    public void onRecvSEIMsg(String userId, byte[] data) {
    String dataStr = new String(data);
    if (!dataStr.isEmpty()) {
    try {
    JSONObject jsonObject = new JSONObject(dataStr);
    String command = jsonObject.getString("cmd");
    JSONObject messageJsonObject = jsonObject.getJSONObject("msg");
    if (command.equals("item_popup_msg")) {
    int itemNumber = messageJsonObject.getInt("itemNumber"); // Item number
    double itemPrice = messageJsonObject.getDouble("itemPrice"); // Item price
    String itemTitle = messageJsonObject.getString("itemTitle"); // Item title
    String itemUrl = messageJsonObject.getString("itemUrl"); // Item URL
    // Render product pop-up effect based on item number, item price, item title, and item URL
    }
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    }
    });
    Method 2: The audience receives SEI messages on the CDN stream player, then proceeds with message parsing and product pop-up effect rendering.
    // Set the PayloadType for sending SEI messages in TRTC
    mTRTCCloud.callExperimentalAPI("{\"api\":\"setSEIPayloadType\",\"params\":{\"payloadType\":5}}");
    
    // Enable receiving SEI messages on the player and set the PayloadType
    mLivePlayer.enableReceiveSeiMessage(true, 5);
    
    // SEI message callback and parsing
    mLivePlayer.setObserver(new V2TXLivePlayerObserver() {
    @Override
    public void onReceiveSeiMessage(V2TXLivePlayer player, int payloadType, byte[] data) {
    String dataStr = new String(data);
    if (!dataStr.isEmpty()) {
    try {
    JSONObject jsonObject = new JSONObject(dataStr);
    String command = jsonObject.getString("cmd");
    JSONObject messageJsonObject = jsonObject.getJSONObject("msg");
    if (command.equals("item_popup_msg")) {
    int itemNumber = messageJsonObject.getInt("itemNumber"); // Item number
    double itemPrice = messageJsonObject.getDouble("itemPrice"); // Item price
    String itemTitle = messageJsonObject.getString("itemTitle"); // Item title
    String itemUrl = messageJsonObject.getString("itemUrl"); // Item URL
    // Render product pop-up effect based on item number, item price, item title, and item URL
    }
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    }
    });
    Note:
    It is necessary to ensure that the SEI PayloadType of the TRTC sender and the player receiver are consistent, so that the audience can successfully receive the SEI messages relayed via TRTC.

    Product Explanation Replay

    The product explanation replay feature is implemented by playing pre-recorded product explanation videos.
    First initialize the player, then start playing the recorded video. TXVodPlayer supports two playback modes, which you can choose according to your needs:
    Using the URL method
    Using the FileId method
    // Play URL video resource
    String url = "http://1252463788.vod2.myqcloud.com/xxxxx/v.f20.mp4";
    mVodPlayer.startVodPlay(url);
    
    // Play local video resources
    String localFile = "/sdcard/video.mp4";
    mVodPlayer.startVodPlay(localFile);
    // Recommended to use the new API below
    // The psign means player signature. For more information about the signature and how to generate it, see: https://www.tencentcloud.com/document/product/266/42436?from_cn_redirect=1
    TXPlayInfoParams playInfoParam = new TXPlayInfoParams(1252463788, // The appId of the cloud platform account
    "4564972819220421305", // The fileId of video
    "psignxxxxxxx"); // Player signature
    mVodPlayer.startVodPlay(playInfoParam);
    
    // Old API, not recommended
    TXPlayerAuthBuilder authBuilder = new TXPlayerAuthBuilder();
    authBuilder.setAppId(1252463788);
    authBuilder.setFileId("4564972819220421305");
    mVodPlayer.startVodPlay(authBuilder);
    Playback control: adjust the progress, pause playback, resume playback, and end playback.
    // Adjust the progress (seconds)
    mVodPlayer.seek(time);
    
    // Pause playback
    mVodPlayer.pause();
    
    // Resume playback
    mVodPlayer.resume();
    
    // End playback (clear the last frame)
    mVodPlayer.stopPlay(true);
    Note:
    When stopping playback, remember to destroy the view control, especially before the next startVodPlay; otherwise it can cause serious memory leaks and screen flashes.
    Also, when exiting the playback interface, remember to call the rendering view's onDestroy() function; otherwise it may cause memory leaks and a "Receiver not registered" warning.
    @Override
    public void onDestroy() {
    super.onDestroy();
    mVodPlayer.stopPlay(true); // True means clearing the last frame
    mPlayerView.onDestroy();
    }

    Cross-room Mic-connection PK

    1. Either party initiates the cross-room mic-connection PK.
    public void connectOtherRoom(String roomId, String userId) {
    try {
    JSONObject jsonObj = new JSONObject();
    // For digit room numbers, put the "roomId" key instead
    jsonObj.put("strRoomId", roomId);
    jsonObj.put("userId", userId);
    mTRTCCloud.ConnectOtherRoom(jsonObj.toString());
    } catch (JSONException e) {
    e.printStackTrace();
    }
    }
    
    // Result callback for requesting cross-room mic-connection
    @Override
    public void onConnectOtherRoom(String userId, int errCode, String errMsg) {
    // The user ID of the anchor in the other room you want to initiate the cross-room link-up
    // Error code. ERR_NULL indicates the request is successful
    // Error message
    }
    Note:
    Both local and remote users participating in the cross-room mic-connection must be in the anchor role and must have audio/video uplink capabilities.
    Cross-room mic-connection PK with multiple rooms' anchors can be achieved by calling ConnectOtherRoom() multiple times. Currently, a room can connect with up to three other rooms' anchors, and up to 10 anchors in a room can conduct cross-room mic-connection PK with anchors in other rooms.
    2. All users in both rooms will receive a callback indicating that the audio and video streams from the PK anchor in the other room are available.
    @Override
    public void onUserAudioAvailable(String userId, boolean available) {
        // The remote user publishes/unpublishes their audio
        // In automatic subscription mode (the default), no action is needed; the SDK plays the remote user's audio automatically
    }
    
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
        // The remote user publishes/unpublishes their primary video
        if (available) {
            // Subscribe to the remote user's video stream and bind the video rendering control
            // videoView is the TXCloudVideoView prepared by your app for this user
            mTRTCCloud.startRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, videoView);
        } else {
            // Unsubscribe from the remote user's video stream and release the rendering control
            mTRTCCloud.stopRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);
        }
    }
    3. Either party exits the cross-room mic-connection PK.
    // Exit the cross-room mic-connection
    mTRTCCloud.DisconnectOtherRoom();
    
    // Result callback for exiting the cross-room mic-connection
    @Override
    public void onDisConnectOtherRoom(int errCode, String errMsg) {
        super.onDisConnectOtherRoom(errCode, errMsg);
    }
    Note:
    After calling DisconnectOtherRoom(), you exit the cross-room mic-connection PK with all connected room anchors at once.
    Either the initiator or the receiver can call DisconnectOtherRoom() to exit the cross-room mic-connection PK.
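    The room-count limit above can also be tracked on the client side. Below is a minimal sketch, a hypothetical helper that is not part of the TRTC SDK, which records the rooms the local anchor is PK-connected to and enforces the documented limit of three connected rooms before issuing another ConnectOtherRoom() call:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical helper (not a TRTC SDK API): tracks the rooms this anchor is
// currently PK-connected to and enforces the documented three-room limit.
class PkRoomTracker {
    static final int MAX_CONNECTED_ROOMS = 3; // limit stated in the docs

    private final Set<String> connectedRooms = new LinkedHashSet<>();

    // Returns true if another ConnectOtherRoom() call is allowed for roomId.
    boolean canConnect(String roomId) {
        return !connectedRooms.contains(roomId)
                && connectedRooms.size() < MAX_CONNECTED_ROOMS;
    }

    // Record a successful connection (errCode == 0 in onConnectOtherRoom).
    void markConnected(String roomId) {
        if (canConnect(roomId)) {
            connectedRooms.add(roomId);
        }
    }

    // DisconnectOtherRoom() leaves the PK with all rooms at once.
    void clear() {
        connectedRooms.clear();
    }

    int connectedCount() {
        return connectedRooms.size();
    }
}
```

    In this sketch, markConnected() would be called from onConnectOtherRoom() on success, and clear() after DisconnectOtherRoom(), since disconnecting leaves all PK rooms at once.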

    Third-Party Beauty Feature Integration

    TRTC supports integrating third-party beauty effect products. The following uses Tencent Special Effect as an example to demonstrate the process of integrating a third-party beauty feature.
    1. Integrate the Special Effect SDK, and apply for an authorization license. For details, see Integration Preparation for steps.
    2. Copy resources (if any). If your resource files are built into the assets directory, you need to copy them to the app's private directory before use.
    // Set the resource path
    XmagicResParser.setResPath(new File(getFilesDir(), "xmagic").getAbsolutePath());
    
    // Copy resource files to the private directory. This only needs to be done once
    XmagicResParser.copyRes(getApplicationContext());
    If your resource files are downloaded dynamically from the internet, set the resource file path after the download succeeds.
    XmagicResParser.setResPath(downloadedResPath); // downloadedResPath: local path of the downloaded resource files
    3. Set the video data callback for third-party beauty features. Pass the results of the beauty SDK processing each frame of data into the TRTC SDK for rendering processing.
    mTRTCCloud.setLocalVideoProcessListener(TRTCCloudDef.TRTC_VIDEO_PIXEL_FORMAT_Texture_2D, TRTCCloudDef.TRTC_VIDEO_BUFFER_TYPE_TEXTURE, new TRTCCloudListener.TRTCVideoFrameListener() {
        @Override
        public void onGLContextCreated() {
            // The OpenGL environment has already been set up inside the SDK; initialize the third-party beauty feature here
            if (mXmagicApi == null) {
                mXmagicApi = new XmagicApi(context, XmagicResParser.getResPath(), new XmagicApi.OnXmagicPropertyErrorListener());
            } else {
                mXmagicApi.onResume();
            }
        }
    
        @Override
        public int onProcessVideoFrame(TRTCCloudDef.TRTCVideoFrame srcFrame, TRTCCloudDef.TRTCVideoFrame dstFrame) {
            // Callback for processing each video frame with the third-party beauty component
            if (mXmagicApi != null) {
                dstFrame.texture.textureId = mXmagicApi.process(srcFrame.texture.textureId, srcFrame.width, srcFrame.height);
            }
            return 0;
        }
    
        @Override
        public void onGLContextDestory() {
            // The OpenGL environment inside the SDK has been destroyed; clean up the third-party beauty resources here
            if (mXmagicApi != null) {
                mXmagicApi.onDestroy();
                mXmagicApi = null;
            }
        }
    });
    Note:
    Steps 1 and 2 vary depending on the different third-party beauty products, while Step 3 is a general and important step for integrating third-party beauty features into TRTC.
    For scenario-specific integration guidelines of beauty effects, see Integrating Special Effect into TRTC SDK. For guidelines on integrating beauty effects independently, see Integrating Special Effect SDK.

    Dual-Stream Encoding Mode

    When the dual-stream encoding mode is enabled, the current user's encoder outputs two video streams, a high-definition large screen and a low-definition small screen, at the same time (but only one audio stream). In this way, other users in the room can choose to subscribe to the high-definition large screen or low-definition small screen based on their network conditions or screen sizes.
    1. Enable large-and-small-screen dual-stream encoding mode.
    public void enableDualStreamMode(boolean enable) {
        // Video encoding parameters for the small stream (customizable)
        TRTCCloudDef.TRTCVideoEncParam smallVideoEncParam = new TRTCCloudDef.TRTCVideoEncParam();
        smallVideoEncParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_480_270;
        smallVideoEncParam.videoFps = 15;
        smallVideoEncParam.videoBitrate = 550;
        smallVideoEncParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
        mTRTCCloud.enableEncSmallVideoStream(enable, smallVideoEncParam);
    }
    Note:
    Enabling the dual-stream encoding mode consumes more CPU and network bandwidth. Therefore, consider it for Mac, Windows, or high-performance pads; it is not recommended for mobile devices.
    2. Select the type of remote user's video stream to pull.
    // Optional video stream types when you subscribe to a remote user's video stream
    mTRTCCloud.startRemoteView(userId, streamType, videoView);
    
    // You can switch the size of the specified remote user's screen at any time
    mTRTCCloud.setRemoteVideoStreamType(userId, streamType);
    Note:
    When the dual-stream encoding mode is enabled, you can specify the video stream type as TRTC_VIDEO_STREAM_TYPE_SMALL with streamType to pull a low-quality small video for viewing.
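    As an illustration of how a client might decide between the two stream types, here is a minimal sketch. The thresholds and constants are illustrative assumptions, not SDK recommendations; real code would use the TRTCCloudDef constants:

```java
// Hypothetical selection logic (not an SDK API): pick the low-definition
// small stream for small views or constrained networks, otherwise the
// high-definition big stream. Thresholds are illustrative assumptions.
class StreamTypeSelector {
    // Stand-ins for TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG / _SMALL;
    // use the SDK constants in real code.
    static final int STREAM_TYPE_BIG = 0;
    static final int STREAM_TYPE_SMALL = 1;

    static int choose(int viewHeightPx, int downlinkKbps) {
        // The 270p small stream (about 550 kbps in the example above) is
        // sufficient for small windows or slow links.
        if (viewHeightPx <= 360 || downlinkKbps < 800) {
            return STREAM_TYPE_SMALL;
        }
        return STREAM_TYPE_BIG;
    }
}
```

    The chosen value would then be passed as streamType to startRemoteView or setRemoteVideoStreamType.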

    View rendering control

    In TRTC, there are many APIs that require you to control the video screen. All these APIs require you to specify a video rendering control. On the Android platform, TXCloudVideoView is used as the video rendering control, and both SurfaceView and TextureView rendering schemes are supported. Below are the methods for specifying the type of rendering control and updating the video rendering control.
    1. If you want to force the use of a particular scheme, or to convert an existing local video rendering control into a TXCloudVideoView, you can write code as follows.
    // Force the use of TextureView
    TextureView textureView = findViewById(R.id.texture_view);
    TXCloudVideoView cloudVideoView = new TXCloudVideoView(context);
    cloudVideoView.addVideoView(textureView);
    
    // Force the use of SurfaceView
    SurfaceView surfaceView = findViewById(R.id.surface_view);
    TXCloudVideoView cloudVideoView = new TXCloudVideoView(surfaceView);
    2. If your business involves switching display zones, you can use the TRTC SDK's features for updating the local preview rendering control and the remote user's video rendering control.
    // Update local preview screen rendering control
    mTRTCCloud.updateLocalView(videoView);
    
    // Update the remote user's video rendering control
    mTRTCCloud.updateRemoteView(userId, streamType, videoView);
    Note:
    The videoView parameter is the target video rendering control. streamType supports only TRTC_VIDEO_STREAM_TYPE_BIG and TRTC_VIDEO_STREAM_TYPE_SUB.

    Exception Handling

    Exception error handling

    When the TRTC SDK encounters an unrecoverable error, the error is thrown in the onError callback. For details, see Error Code Table.
    1. UserSig related
    UserSig verification failure leads to room-entering failure. You can use the UserSig tool for verification.
    ERR_TRTC_INVALID_USER_SIG (-3320): The room entry parameter userSig is incorrect. Check whether TRTCParams.userSig is empty.
    ERR_TRTC_USER_SIG_CHECK_FAILED (-100018): UserSig verification failed. Check whether TRTCParams.userSig is filled in correctly or has expired.
    2. Room entry and exit related
    If room entry fails, first verify the correctness of the room entry parameters. The room entry and exit APIs must be called as a pair: even if room entry fails, the room exit API must still be called.
    ERR_TRTC_CONNECT_SERVER_TIMEOUT (-3308): The room entry request timed out. Check whether your internet connection is lost or a VPN is enabled. You may also switch to 4G for testing.
    ERR_TRTC_INVALID_SDK_APPID (-3317): The room entry parameter sdkAppId is incorrect. Check whether TRTCParams.sdkAppId is empty.
    ERR_TRTC_INVALID_ROOM_ID (-3318): The room entry parameter roomId is incorrect. Check whether TRTCParams.roomId or TRTCParams.strRoomId is empty. Note that roomId and strRoomId cannot be used interchangeably.
    ERR_TRTC_INVALID_USER_ID (-3319): The room entry parameter userId is incorrect. Check whether TRTCParams.userId is empty.
    ERR_TRTC_ENTER_ROOM_REFUSED (-3340): The room entry request was denied. Check whether enterRoom is called consecutively to enter rooms with the same ID.
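    The pairing requirement above, that exitRoom must be issued even when room entry fails, can be sketched with a small guard. This is a hypothetical helper, not a TRTC SDK API:

```java
// Hypothetical helper (not an SDK API): tracks one enter/exit cycle so that
// exitRoom() is always called exactly once per enterRoom() attempt, even
// when room entry fails.
class RoomSessionGuard {
    private boolean enterIssued = false;
    private boolean exitIssued = false;

    // Call right before mTRTCCloud.enterRoom(...)
    void onEnterRoomCalled() {
        enterIssued = true;
        exitIssued = false;
    }

    // Returns true if mTRTCCloud.exitRoom() still needs to be called.
    // A failed entry (negative result in onEnterRoom) must still be paired
    // with an exitRoom() call.
    boolean needsExit() {
        return enterIssued && !exitIssued;
    }

    // Call right after mTRTCCloud.exitRoom()
    void onExitRoomCalled() {
        exitIssued = true;
        enterIssued = false;
    }
}
```

    With such a guard, the leave-room path checks needsExit() regardless of whether entry succeeded, which keeps the enter/exit calls paired.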
    3. Device related
    Device-related errors should be monitored. Prompt the user via the UI when these errors occur.
    ERR_CAMERA_START_FAIL (-1301): Failed to enable the camera. For example, if the camera's driver is abnormal on a Windows or Mac device, try disabling and re-enabling the device, restarting the machine, or updating the driver.
    ERR_MIC_START_FAIL (-1302): Failed to enable the mic. For example, if the mic's driver is abnormal on a Windows or Mac device, try disabling and re-enabling the device, restarting the machine, or updating the driver.
    ERR_CAMERA_NOT_AUTHORIZED (-1314): No permission to use the camera. This typically occurs on mobile devices and may be because the user denied the permission.
    ERR_MIC_NOT_AUTHORIZED (-1317): No permission to use the mic. This typically occurs on mobile devices and may be because the user denied the permission.
    ERR_CAMERA_OCCUPY (-1316): The camera is occupied. Try a different camera.
    ERR_MIC_OCCUPY (-1319): The mic is occupied. This occurs when, for example, the user is currently in a call on the mobile device.
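    In an onError handler, the codes listed above can be mapped to short user-facing hints. Below is a minimal sketch; the codes come from the tables in this section, while the helper and hint wording are illustrative, not part of the SDK:

```java
// Hypothetical mapping (not an SDK API) from TRTC error codes to short
// user-facing hints suitable for display from the onError callback.
class TrtcErrorHints {
    static String hintFor(int errCode) {
        switch (errCode) {
            case -3320:   // ERR_TRTC_INVALID_USER_SIG
            case -100018: // ERR_TRTC_USER_SIG_CHECK_FAILED
                return "Login credential invalid or expired. Please re-login.";
            case -3308:   // ERR_TRTC_CONNECT_SERVER_TIMEOUT
                return "Network timeout. Check your connection and retry.";
            case -1314:   // ERR_CAMERA_NOT_AUTHORIZED
                return "Camera permission denied. Enable it in Settings.";
            case -1317:   // ERR_MIC_NOT_AUTHORIZED
                return "Microphone permission denied. Enable it in Settings.";
            default:
                return "An error occurred (code " + errCode + ").";
        }
    }
}
```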

    Issues with the remote mirror mode not functioning properly

    In TRTC, video mirror settings are divided into the local preview mirror (setLocalRenderParams) and the video encoding mirror (setVideoEncoderMirror). These settings separately affect the mirror effect of the local preview and of the encoded video output (the mirror mode seen by remote viewers and in cloud recordings). If you expect the mirror effect seen in the local preview to also take effect on the remote viewer's end, configure both as follows.
    // Set the rendering parameters for the local video
    TRTCCloudDef.TRTCRenderParams params = new TRTCCloudDef.TRTCRenderParams();
    params.mirrorType = TRTCCloudDef.TRTC_VIDEO_MIRROR_TYPE_ENABLE; // Video mirror mode
    params.fillMode = TRTCCloudDef.TRTC_VIDEO_RENDER_MODE_FILL; // Video fill mode
    params.rotation = TRTCCloudDef.TRTC_VIDEO_ROTATION_0; // Video rotation angle
    mTRTCCloud.setLocalRenderParams(params);
    
    // Set the video mirror mode for the encoder output
    mTRTCCloud.setVideoEncoderMirror(true);

    Issues with camera scale, focus, and switch

    In e-commerce live streaming scenarios, the anchor may need to manually adjust the camera settings. The TRTC SDK's device management class provides APIs for these needs.
    1. Query and set the zoom factor for the camera.
    // Get the maximum zoom factor for the camera (only for mobile devices)
    float zoomRatio = mTRTCCloud.getDeviceManager().getCameraZoomMaxRatio();
    // Set the zoom factor for the camera (only for mobile devices)
    // Value range is 1-5. 1 means the furthest field of view (normal lens), and 5 means the closest field of view (zoom-in lens). The maximum recommended value is 5. Exceeding this may result in blurry video.
    mTRTCCloud.getDeviceManager().setCameraZoomRatio(zoomRatio);
    2. Set the focus feature and position of the camera.
    // Enable or disable the camera's autofocus feature (only for mobile devices)
    mTRTCCloud.getDeviceManager().enableCameraAutoFocus(false);
    // Set the focus position of the camera (only for mobile devices)
    // The precondition for using this API is to first disable the autofocus feature using enableCameraAutoFocus
    mTRTCCloud.getDeviceManager().setCameraFocusPosition(x, y); // x and y are the focus point coordinates
    3. Determine and switch to front or rear cameras.
    // Determine if the current camera is the front camera (only for mobile devices)
    boolean isFrontCamera = mTRTCCloud.getDeviceManager().isFrontCamera();
    // Switch to front or rear cameras (only for mobile devices)
    // Passing true means switching to front, and passing false means switching to rear
    mTRTCCloud.getDeviceManager().switchCamera(!isFrontCamera);
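    Since setCameraZoomRatio expects a value within the supported range, a caller might clamp the requested factor first. Below is a minimal sketch, a hypothetical helper rather than an SDK API, assuming the documented range of 1 to 5 and the device maximum obtained from getCameraZoomMaxRatio():

```java
// Hypothetical pre-check (not an SDK API): clamp a requested zoom factor to
// the valid range before passing it to setCameraZoomRatio(). The lower
// bound is 1 (normal lens); the upper bound is the device maximum, capped
// at the recommended value of 5 to avoid blurry video.
class ZoomClamp {
    static float clamp(float requested, float deviceMaxRatio) {
        float max = Math.min(deviceMaxRatio, 5f); // recommended ceiling
        if (requested < 1f) {
            return 1f;
        }
        return Math.min(requested, max);
    }
}
```

    The clamped result would then be passed to mTRTCCloud.getDeviceManager().setCameraZoomRatio().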
    