Last updated: 2024-07-18 14:27:33

    Business Process

    This section summarizes common business processes in the live showroom scenario, helping you better understand how the entire scenario is implemented.
    Anchor starts and ends the live streaming.
    The anchor initiates the cross-room competition.
    The RTC audience enters the room for mic-connection.
    The CDN audience enters the room for mic-connection.
    The following figure shows the process in which an anchor (room owner) starts a local preview, creates a room, enters the room to start live streaming, and leaves the room to end it.
    
    
    
    The following figure shows the process of Anchor A inviting Anchor B for a cross-room competition. During the cross-room competition, the audiences in both rooms can see the PK mic-connection live streaming of the two room owners.
    
    
    
    The following figure shows the process for an RTC interactive live streaming audience member to enter the room, apply for mic-connection, end the mic-connection, and exit the room.
    
    
    
    The following figure shows the process for a CDN live streaming audience member to enter the room, apply for mic-connection, end the mic-connection, and exit the room.
    
    
    

    Integration Preparation

    Step 1: Activating the services.

    Live showroom scenarios usually require two paid PaaS services from Tencent Cloud: Tencent Real-Time Communication (TRTC) and Tencent Effect. TRTC provides real-time audio and video interaction capabilities, while Tencent Effect provides beauty effect capabilities. If you use a third-party beauty effect product, you can skip the Tencent Effect integration.
    Activate TRTC service.
    Activate Tencent Effect service.
    1. First, you need to log in to the Tencent Real-Time Communication (TRTC) console to create an application. You can choose to upgrade the TRTC application version according to your needs. For example, the professional edition unlocks more value-added feature services.
    
    
    
    Note:
    It is recommended to create two applications for testing and production environments, respectively. Each Tencent Cloud account (UIN) is given 10,000 minutes of free duration every month for one year.
    TRTC offers monthly subscription plans including the experience edition (default), basic edition, and professional edition. Different value-added feature services can be unlocked. For details, see Version Features and Monthly Subscription Plan Instructions.
    2. After an application is created, you can see its basic information in the Application Management - Application Overview section. Save the SDKAppID and SDKSecretKey for later use, and keep them secure to prevent key leakage and traffic theft.
    
    
    
    1. Log in to the Tencent Effect console and go to Mobile License. Click Create Trial License (a trial license is valid for 14 days and can be extended once, for 28 days in total). Fill in the App Name, Package Name, and Bundle ID according to your actual needs. Select Tencent Effect and check the capabilities to be tested: Advanced Package S1-07, Atomic Capability X1-01, Atomic Capability X1-02, and Atomic Capability X1-03. Then fill in your company name and industry type accurately, upload your business license, click OK to submit the application, and wait for manual review.
    
    
    
    2. After the trial license is created, the page displays the generated license information. At this point, the License URL and License Key parameters are not yet effective; they become active only after the application is approved. You need to pass in both the License URL and License Key when initializing the SDK. Keep the following information secure.
    
    
    

    Step 2: Importing SDK.

    TRTC SDK and Tencent Effect SDK have been released to the CocoaPods repository. You can integrate them via CocoaPods.
    1. Install CocoaPods.
    Enter the following command in a terminal window (you need to install Ruby on your Mac first):
    sudo gem install cocoapods
    2. Create a Podfile file.
    Go to the project directory, and enter the following command. A Podfile file will then be created in the project directory.
    pod init
    3. Edit the Podfile file.
    Choose an appropriate version for your project and edit the Podfile file:
    platform :ios, '8.0'
    target 'App' do
    # TRTC Lite Edition
    # The installation package has the smallest incremental size, but it supports only two features: Real-Time Communication (TRTC) and live stream playback via TXLivePlayer.
    pod 'TXLiteAVSDK_TRTC', :podspec => 'https://liteav.sdk.qcloud.com/pod/liteavsdkspec/TXLiteAVSDK_TRTC.podspec'
    # Pro Edition
    # Includes a wide range of features such as Real-Time Communication (TRTC), TXLivePlayer for live streaming playback, TXLivePusher for RTMP push streams, TXVodPlayer for on-demand playback, and UGSV for short video recording and editing.
    # pod 'TXLiteAVSDK_Professional', :podspec => 'https://liteav.sdk.qcloud.com/pod/liteavsdkspec/TXLiteAVSDK_Professional.podspec'
    # Tencent Effect SDK example of S1-07 package is as follows:
    pod 'TencentEffect_S1-07'
    
    end
    4. Update and install the SDK.
    Enter the following command in a terminal window to update the local repository files and install the SDK:
    pod install
    Or run this command to update the local repository:
    pod update
    After the pod command completes, an .xcworkspace project file integrated with the SDK is generated. Double-click it to open.
    Note:
    If the pod search fails, try updating the pod's local repo cache with the following commands:
    pod setup
    pod repo update
    rm ~/Library/Caches/CocoaPods/search_index.json
    Besides the recommended automatic loading method, you can also choose to download the SDK and manually import it. For details, see Manually Integrating the TRTC SDK and Manually Integrating Tencent Effect SDK.
    5. Add beauty resources to the actual project.
    Download and unzip the corresponding package of SDK and Beauty Resources. Add the bundle resources under the resources/motionRes folder to the actual project.
    On the Build Settings, under Other Linker Flags, add -ObjC.
    6. Modify the Bundle Identifier to match the one bound to your trial license.

    Step 3: Project configuration.

    1. Configure permissions.
    For live showroom scenarios, the TRTC SDK and Tencent Effect SDK require the following permissions. Add the following two items to your app's Info.plist; they correspond to the microphone and camera prompts shown in the system authorization dialog.
    Privacy - Microphone Usage Description. Enter a prompt specifying the purpose of microphone use.
    Privacy - Camera Usage Description. Enter a prompt specifying the purpose of camera use.
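In source form, these two entries correspond to the following Info.plist keys (the prompt strings below are placeholders; replace them with your product's actual wording):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is needed to capture your voice during live streaming.</string>
<key>NSCameraUsageDescription</key>
<string>Camera access is needed to capture your video during live streaming.</string>
```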
    
    
    
    2. If you need your app to continue running certain features in the background, open Xcode, select your current project, enable Background Modes under Capabilities, and check Audio, AirPlay, and Picture in Picture, as shown below:
    
    
    

    Step 4: Authentication and authorization.

    TRTC authentication credential.
    Tencent Effect authentication license.
    UserSig is a security signature designed by Tencent Cloud to prevent malicious attackers from misappropriating your cloud service usage rights. TRTC validates this credential when a user enters a room.
    Debugging stage: UserSig can be generated through two methods intended for debugging and testing only: client sample code and console access.
    Formal operation stage: we recommend computing UserSig on your server, which offers a higher security level and prevents key leakage through client reverse engineering.
    The specific implementation process is as follows:
    1. Before calling the SDK's initialization function, your app must first request UserSig from your server.
    2. Your server computes the UserSig based on the SDKAppID and UserID.
    3. The server returns the computed UserSig to your app.
    4. Your app passes the obtained UserSig into the SDK through a specific API.
    5. The SDK submits the SDKAppID + UserID + UserSig to Tencent Cloud for verification.
    6. Tencent Cloud verifies the UserSig and confirms its validity.
    7. After the verification is passed, real-time audio and video services will be provided to the TRTC SDK.
    
    
    
    Note:
    The local computation method of UserSig used during the debugging stage is not recommended for the production environment, because it is prone to reverse engineering and key leakage.
    We provide server computation source code for UserSig in multiple programming languages (Java/GO/PHP/Nodejs/Python/C#/C++). For details, see Server Computation of UserSig.
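For reference, the server computation can be sketched as follows. This is a minimal, illustrative Python version of the v2 (TLS) UserSig scheme used by Tencent Cloud's official server libraries: sign a canonical string with HMAC-SHA256 using the SDKSecretKey, wrap the result in a JSON document, then zlib-compress and base64-encode it with a URL-safe character mapping. The function name `gen_usersig` is ours; for production, use the official server source code linked above.

```python
import base64
import hashlib
import hmac
import json
import time
import zlib


def gen_usersig(sdkappid: int, secret_key: str, user_id: str, expire: int = 86400) -> str:
    """Compute a v2 (TLS) UserSig. Illustrative sketch; prefer the official libraries."""
    curr_time = int(time.time())
    # Canonical string to sign, covering identifier, app ID, issue time, and validity period.
    raw = ("TLS.identifier:%s\n" % user_id
           + "TLS.sdkappid:%d\n" % sdkappid
           + "TLS.time:%d\n" % curr_time
           + "TLS.expire:%d\n" % expire)
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), raw.encode(), hashlib.sha256).digest()
    ).decode()
    # Assemble the signature document, compress it, and base64-encode it
    # with the URL-safe character mapping ('+' -> '*', '/' -> '-', '=' -> '_').
    doc = json.dumps({
        "TLS.ver": "2.0",
        "TLS.identifier": user_id,
        "TLS.sdkappid": sdkappid,
        "TLS.expire": expire,
        "TLS.time": curr_time,
        "TLS.sig": sig,
    })
    compressed = zlib.compress(doc.encode())
    return base64.b64encode(compressed).decode().translate(str.maketrans("+/=", "*-_"))
```

Your server would expose this behind an authenticated endpoint so the app can request a UserSig for the logged-in user.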
    Before using Tencent Effect, you need to verify the license credential with Tencent Cloud. Configuring the license requires the License Key and License URL. Sample code is as follows.
    [TELicenseCheck setTELicense:LicenseURL key:LicenseKey completion:^(NSInteger authresult, NSString * _Nonnull errorMsg) {
        if (authresult == TELicenseCheckOk) {
            NSLog(@"Authentication successful.");
        } else {
            NSLog(@"Authentication failed.");
        }
    }];
    Note:
    We recommend triggering license authentication in the initialization code of the related business module, so the license does not have to be downloaded on demand right before use. Also ensure that network access is available during authentication.
    The actual application's Bundle ID must exactly match the Bundle ID used when creating the license; otherwise, license verification will fail. For details, see Authentication Error Code.

    Step 5: Initializing the SDK.

    Initialize the TRTC SDK.
    Initialize the Tencent Effect SDK.
    // Create the TRTC SDK instance (singleton pattern).
    self.trtcCloud = [TRTCCloud sharedInstance];
    // Set the event listener.
    self.trtcCloud.delegate = self;
    
    // Notifications of SDK events (e.g., error codes, warning codes, audio/video status parameters).
    - (void)onError:(TXLiteAVError)errCode errMsg:(nullable NSString *)errMsg extInfo:(nullable NSDictionary *)extInfo {
        NSLog(@"%d: %@", errCode, errMsg);
    }
    
    - (void)onWarning:(TXLiteAVWarning)warningCode warningMsg:(nullable NSString *)warningMsg extInfo:(nullable NSDictionary *)extInfo {
        NSLog(@"%d: %@", warningCode, warningMsg);
    }
    
    // Remove the event listener.
    self.trtcCloud.delegate = nil;
    // Destroy the TRTC SDK instance (singleton pattern).
    [TRTCCloud destroySharedIntance];
    Note:
    We recommend listening for SDK event notifications and logging and handling common errors. For details, see Error Code Table.
    // Load beauty-related resources.
    NSDictionary *assetsDict = @{
        @"core_name": @"LightCore.bundle",
        @"root_path": [[NSBundle mainBundle] bundlePath]
    };
    
    // Initialize the Tencent Effect SDK.
    self.beautyKit = [[XMagic alloc] initWithRenderSize:previewSize assetsDict:assetsDict];
    
    // Release the Tencent Effect SDK.
    [self.beautyKit deinit];
    Note:
    Before initializing the Tencent Effect SDK, resource copying and other preparatory work are needed. For detailed steps, see Tencent Effect SDK integration steps.

    Integration Process

    API sequence diagram.

    
    
    

    Step 1: The anchor enters the room to push streams.

    1. The anchor activates local video preview and audio capture before entering the room.
    // Obtain the video rendering control for displaying the anchor's local video preview.
    @property (nonatomic, strong) UIView *anchorPreviewView;
    @property (nonatomic, strong) TRTCCloud *trtcCloud;
    
    - (void)setupTRTC {
    self.trtcCloud = [TRTCCloud sharedInstance];
    self.trtcCloud.delegate = self;
        // Set video encoding parameters to determine the picture quality seen by remote users.
        TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
        encParam.videoResolution = TRTCVideoResolution_960_540;
        encParam.videoFps = 15;
        encParam.videoBitrate = 1300;
        encParam.resMode = TRTCVideoResolutionModePortrait;
        [self.trtcCloud setVideoEncoderParam:encParam];
        
        // isFrontCamera can specify using the front/rear camera for video capture.
        [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.anchorPreviewView];
    
        // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC.
        [self.trtcCloud startLocalAudio:TRTCAudioQualityDefault];
    }
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    If you call the above APIs before enterRoom, the SDK only starts the camera preview and audio capture, and waits until you call enterRoom to start streaming.
    If you call them after enterRoom, the SDK starts the camera preview and audio capture and starts streaming immediately.
    2. The anchor sets rendering parameters for the local video, and the encoder output video mode (optional).
    - (void)setupRenderParams {
        TRTCRenderParams *params = [[TRTCRenderParams alloc] init];
        // Video mirror mode
        params.mirrorType = TRTCVideoMirrorTypeAuto;
        // Video fill mode
        params.fillMode = TRTCVideoFillMode_Fill;
        // Video rotation angle
        params.rotation = TRTCVideoRotation_0;
        // Set the rendering parameters for the local video.
        [self.trtcCloud setLocalRenderParams:params];
    
        // Set the video mirror mode for the encoder output.
        [self.trtcCloud setVideoEncoderMirror:YES];
        // Set the rotation of the video encoder output.
        [self.trtcCloud setVideoEncoderRotation:TRTCVideoRotation_0];
    }
    Note:
    Setting local video rendering parameters only affects the rendering effect of the local video.
    Setting encoder output mode affects the viewing effect for other users in the room (and the cloud recording files).
    3. The anchor starts live streaming: enters the room and starts pushing streams.
    - (void)enterRoomByAnchorWithUserId:(NSString *)userId roomId:(NSString *)roomId {
        TRTCParams *params = [[TRTCParams alloc] init];
        // Take the room ID string as an example.
        params.strRoomId = roomId;
        params.userId = userId;
        // UserSig obtained from the business backend.
        params.userSig = @"userSig";
        // Replace with your SDKAppID.
        params.sdkAppId = 0;
        // Specify the anchor role.
        params.role = TRTCRoleAnchor;
        // Enter the room in an interactive live streaming scenario.
        [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
        if (result > 0) {
            // result indicates the time taken (in milliseconds) to join the room.
            NSLog(@"Enter room succeed!");
        } else {
            // result indicates the error code when you fail to enter the room.
            NSLog(@"Enter room failed!");
        }
    }
    Note:
    TRTC room IDs are divided into integer type roomId and string type strRoomId. The rooms of these two types are not interconnected. It is recommended to unify the room ID type.
    TRTC user roles are divided into anchors and audiences. Only anchors have streaming permissions. It is necessary to specify the user role when entering the room. If not specified, the default will be the anchor role.
    In live showroom scenarios, it is recommended to choose TRTCAppSceneLIVE as the room entry mode.

    Step 2: The audience enters the room to pull streams.

    1. Audience enters the TRTC room.
    - (void)enterRoomByAudienceWithUserId:(NSString *)userId roomId:(NSString *)roomId {
        TRTCParams *params = [[TRTCParams alloc] init];
        // Take the room ID string as an example.
        params.strRoomId = roomId;
        params.userId = userId;
        // UserSig obtained from the business backend.
        params.userSig = @"userSig";
        // Replace with your SDKAppID.
        params.sdkAppId = 0;
        // Specify the audience role.
        params.role = TRTCRoleAudience;
        // Enter the room in an interactive live streaming scenario.
        [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
        if (result > 0) {
            // result indicates the time taken (in milliseconds) to join the room.
            NSLog(@"Enter room succeed!");
        } else {
            // result indicates the error code when you fail to enter the room.
            NSLog(@"Enter room failed!");
        }
    }
    2. Audience subscribes to the anchor's audio and video streams.
    - (void)onUserAudioAvailable:(NSString *)userId available:(BOOL)available {
        // The remote user publishes/unpublishes their audio.
        // Under the automatic subscription mode, you do not need to do anything. The SDK will automatically play the remote user's audio.
    }
    
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
        // The remote user publishes/unpublishes the primary video.
        if (available) {
            // Subscribe to the remote user's video stream and bind the video rendering control.
            [self.trtcCloud startRemoteView:userId streamType:TRTCVideoStreamTypeBig view:self.remoteView];
        } else {
            // Unsubscribe from the remote user's video stream and release the rendering control.
            [self.trtcCloud stopRemoteView:userId streamType:TRTCVideoStreamTypeBig];
        }
    }
    3. Audience sets the rendering mode for the remote video (optional).
    - (void)setupRemoteRenderParams {
        TRTCRenderParams *params = [[TRTCRenderParams alloc] init];
        // Video mirror mode
        params.mirrorType = TRTCVideoMirrorTypeAuto;
        // Video fill mode
        params.fillMode = TRTCVideoFillMode_Fill;
        // Video rotation angle
        params.rotation = TRTCVideoRotation_0;
        // Set the rendering mode for the remote video.
        [self.trtcCloud setRemoteRenderParams:@"userId" streamType:TRTCVideoStreamTypeBig params:params];
    }

    Step 3: The audience interacts via mic.

    1. The audience switches to the anchor role.
    - (void)switchToAnchor {
        // Switched to the anchor role.
        [self.trtcCloud switchRole:TRTCRoleAnchor];
    }
    
    // Event callback for switching the role.
    - (void)onSwitchRole:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
        if (errCode == ERR_NULL) {
            // Role switched successfully.
        }
    }
    2. The audience starts local audio/video capture and streaming.
    - (void)setupTRTC {
        // Set video encoding parameters to determine the picture quality seen by remote users.
        TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
        encParam.videoResolution = TRTCVideoResolution_480_270;
        encParam.videoFps = 15;
        encParam.videoBitrate = 550;
        encParam.resMode = TRTCVideoResolutionModePortrait;
        [self.trtcCloud setVideoEncoderParam:encParam];
     
        // isFrontCamera can specify using the front/rear camera for video capture.
        [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.audiencePreviewView];
        // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC.
        [self.trtcCloud startLocalAudio:TRTCAudioQualityDefault];
    }
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    3. The audience leaves the seat and stops streaming.
    - (void)switchToAudience {
        // Switched to the audience role.
        [self.trtcCloud switchRole:TRTCRoleAudience];
    }
    
    // Event callback for switching the role.
    - (void)onSwitchRole:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
        if (errCode == ERR_NULL) {
            // Stop camera capture and streaming.
            [self.trtcCloud stopLocalPreview];
            // Stop microphone capture and streaming.
            [self.trtcCloud stopLocalAudio];
        }
    }

    Step 4: Exiting and dissolving the room.

    1. Exit the room.
    - (void)exitRoom {
        [self.trtcCloud stopLocalAudio];
        [self.trtcCloud stopLocalPreview];
        [self.trtcCloud exitRoom];
    }
    
    // Event callback for exiting the room.
    - (void)onExitRoom:(NSInteger)reason {
        if (reason == 0) {
            NSLog(@"Proactively call exitRoom to exit the room.");
        } else if (reason == 1) {
            NSLog(@"Removed from the current room by the server.");
        } else if (reason == 2) {
            NSLog(@"The current room is dissolved.");
        }
    }
    Note:
    After all resources occupied by the SDK are released, the SDK fires the onExitRoom callback to notify you.
    If you want to call enterRoom again or switch to another audio/video SDK, wait for the onExitRoom callback before proceeding. Otherwise, you may encounter exceptions such as the camera or microphone being forcibly occupied.
    2. Dissolve the room.
    Server dissolvement: TRTC provides the server-side room dissolution API DismissRoom (with separate versions for numeric and string room IDs). You can call this API to remove all users from the room and dissolve it.
    Client dissolvement: have every anchor and audience member in the room call the client exitRoom API. Once all users have exited, the room is automatically dissolved according to TRTC room lifecycle rules. For details, see Exit Room.
    Warning:
    After live streaming ends, we recommend calling the server-side room dissolution API to ensure the room is dissolved. This prevents audiences from accidentally entering the room and incurring unexpected charges.

    Alternative solutions

    API sequence diagram.

    
    
    

    Step 1: The anchor relays the streams to CDN.

    1. Related configurations for relaying to live streaming CDN.
    Global automatic relayed push
    If you need to automatically relay the audio and video streams of all anchors in the room to live streaming CDN, enable Relay to CDN on the Advanced Features page of the TRTC console.
    
    
    
    Relayed push of the specified streams
    If you need to manually specify the audio and video streams to be published to live streaming CDN, or publish the mixed audio and video streams to live streaming CDN, you can do so by calling the startPublishMediaStream API. In this case, you do not need to activate global automatically relaying to CDN in the console. For detailed introduction, see Publish Audio and Video Streams to Live Streaming CDN.
    2. The anchor activates local video preview and audio capture before entering the room.
    // Obtain the video rendering control for displaying the anchor's local video preview.
    @property (nonatomic, strong) UIView *anchorPreviewView;
    
    
    - (void)setupTRTC {
        self.trtcCloud = [TRTCCloud sharedInstance];
        self.trtcCloud.delegate = self;
        // Set video encoding parameters to determine the picture quality seen by remote users.
        TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
        encParam.videoResolution = TRTCVideoResolution_960_540;
        encParam.videoFps = 15;
        encParam.videoBitrate = 1300;
        encParam.resMode = TRTCVideoResolutionModePortrait;
        [self.trtcCloud setVideoEncoderParam:encParam];
    
        // isFrontCamera can specify using the front/rear camera for video capture.
        [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.anchorPreviewView];
        
        // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC.
        [self.trtcCloud startLocalAudio:TRTCAudioQualityDefault];
    }
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    If you call the above APIs before enterRoom, the SDK only starts the camera preview and audio capture, and waits until you call enterRoom to start streaming.
    If you call them after enterRoom, the SDK starts the camera preview and audio capture and starts streaming immediately.
    3. The anchor sets rendering parameters for the local video, and the encoder output video mode.
    - (void)setupRenderParams {
        TRTCRenderParams *params = [[TRTCRenderParams alloc] init];
        // Video mirror mode
        params.mirrorType = TRTCVideoMirrorTypeAuto;
        // Video fill mode
        params.fillMode = TRTCVideoFillMode_Fill;
        // Video rotation angle
        params.rotation = TRTCVideoRotation_0;
        // Set the rendering parameters for the local video.
        [self.trtcCloud setLocalRenderParams:params];
        // Set the video mirror mode for the encoder output.
        [self.trtcCloud setVideoEncoderMirror:YES];
        // Set the rotation of the video encoder output.
        [self.trtcCloud setVideoEncoderRotation:TRTCVideoRotation_0];
    }
    Note:
    Setting local video rendering parameters only affects the rendering effect of the local video.
    Setting encoder output mode affects the viewing effect for other users in the room (and the cloud recording files).
    4. The anchor starts live streaming: enters the room and starts pushing streams.
    - (void)enterRoomByAnchorWithUserId:(NSString *)userId roomId:(NSString *)roomId {
        TRTCParams *params = [[TRTCParams alloc] init];
        // Take the room ID string as an example.
        params.strRoomId = roomId;
        params.userId = userId;
        // UserSig obtained from the business backend.
        params.userSig = @"userSig";
        // Replace with your SDKAppID.
        params.sdkAppId = 0;
        // Specify the anchor role.
        params.role = TRTCRoleAnchor;
        // Enter the room in an interactive live streaming scenario.
        [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
        if (result > 0) {
            // result indicates the time taken (in milliseconds) to join the room.
            NSLog(@"Enter room succeed!");
        } else {
            // result indicates the error code when you fail to enter the room.
            NSLog(@"Enter room failed!");
        }
    }
    Note:
    TRTC room IDs are divided into integer type roomId and string type strRoomId. The rooms of these two types are not interconnected. It is recommended to unify the room ID type.
    TRTC user roles are divided into anchors and audiences. Only anchors have streaming permissions. It is necessary to specify the user role when entering the room. If not specified, the default will be the anchor role.
    In live showroom scenarios, it is recommended to choose TRTCAppSceneLIVE as the room entry mode.
    5. The anchor relays the audio and video streams to the live streaming CDN.
    - (void)startPublishMediaToCDN:(NSString *)streamName {
        NSDate *date = [NSDate dateWithTimeIntervalSinceNow:0];
        // Set the expiration time for the push URLs.
        NSTimeInterval time = [date timeIntervalSince1970] + (24 * 60 * 60);
        // Generate authentication information. The getSafeUrl method can be obtained in the CSS console - Domain Name Management - Push Configuration - Sample Code for Push URLs.
        NSString *secretParam = [self getSafeUrl:LIVE_URL_KEY streamName:streamName time:time];
        
        // The target URLs for media stream publication.
        TRTCPublishTarget* target = [[TRTCPublishTarget alloc] init];
        // The target URLs are set for relaying to CDN.
        target.mode = TRTCPublishBigStreamToCdn;
        TRTCPublishCdnUrl* cdnUrl = [[TRTCPublishCdnUrl alloc] init];
        // Construct push URLs (in RTMP format) to the live streaming service provider.
        cdnUrl.rtmpUrl = [NSString stringWithFormat:@"rtmp://%@/live/%@?%@", PUSH_DOMAIN, streamName, secretParam];
        // True means Tencent CSS push URLs, and false means third-party services.
        cdnUrl.isInternalLine = YES;
        NSMutableArray* cdnUrlList = [NSMutableArray array];
        // Multiple CDN push URLs can be added.
        [cdnUrlList addObject:cdnUrl];
        target.cdnUrlList = cdnUrlList;
        
        // Set media stream encoding output parameters (can be defined according to business needs).
        TRTCStreamEncoderParam* encoderParam = [[TRTCStreamEncoderParam alloc] init];
        encoderParam.audioEncodedSampleRate = 48000;
        encoderParam.audioEncodedChannelNum = 1;
        encoderParam.audioEncodedKbps = 50;
        encoderParam.audioEncodedCodecType = 0;
        encoderParam.videoEncodedWidth = 540;
        encoderParam.videoEncodedHeight = 960;
        encoderParam.videoEncodedFPS = 15;
        encoderParam.videoEncodedGOP = 2;
        encoderParam.videoEncodedKbps = 1300;
    
        // Start publishing media streams.
        [self.trtcCloud startPublishMediaStream:target encoderParam:encoderParam mixingConfig:nil];
    }
    Note:
    During single-anchor live streaming, only initiate the relayed push task. When an audience member co-broadcasts or anchors PK, update this task to a mixed-stream transcoding task.
    The push authentication key (LIVE_URL_KEY) and the push domain name (PUSH_DOMAIN) can be obtained on the Domain Management page of the CSS console.
    After the media stream is published, the SDK returns the backend task identifier (taskId) through the onStartPublishMediaStream callback.
    - (void)onStartPublishMediaStream:(NSString *)taskId code:(int)code message:(NSString *)message extraInfo:(NSDictionary *)extraInfo {
        // taskId: When the request is successful, TRTC backend will provide the taskId of this task in the callback. You can later use this taskId with updatePublishMediaStream and stopPublishMediaStream to update and stop.
        // code: Callback result. 0 means success and other values mean failure.
    }
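The getSafeUrl call in step 5 follows Tencent CSS's standard push authentication scheme: txTime is the expiration UNIX timestamp in uppercase hexadecimal, and txSecret is the MD5 of the authentication key, stream name, and txTime concatenated. Below is a hedged Python sketch of the equivalent server-side computation (the function name and sample values are ours; always cross-check against the sample code shown in the CSS console):

```python
import hashlib
import time


def get_safe_url(live_url_key: str, stream_name: str, expire_ts: int) -> str:
    """Build the authentication query string for a CSS push URL (illustrative sketch)."""
    # txTime: expiration UNIX timestamp, uppercase hexadecimal without the '0x' prefix.
    tx_time = hex(expire_ts)[2:].upper()
    # txSecret: MD5(authentication key + stream name + txTime).
    tx_secret = hashlib.md5((live_url_key + stream_name + tx_time).encode()).hexdigest()
    return "txSecret=%s&txTime=%s" % (tx_secret, tx_time)


# Example: query string for a push URL that expires in 24 hours.
params = get_safe_url("demo_key", "stream001", int(time.time()) + 24 * 60 * 60)
```

The resulting string is appended to the RTMP push URL, as in the Objective-C sample above.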

    Step 2: The audience pulls streams for playback.

    CDN audiences do not need to enter the TRTC room. They can directly pull the anchor's CDN stream for playback.
    // Initialize the player.
    self.livePlayer = [[V2TXLivePlayer alloc] init];
    // Set the player callback listener.
    [self.livePlayer setObserver:self];
    // Set the video rendering control for the player.
    [self.livePlayer setRenderView:self.remoteView];
    // Set delay management mode (optional).
    [self.livePlayer setCacheParams:1.f maxTime:5.f]; // Auto mode
    [self.livePlayer setCacheParams:1.f maxTime:1.f]; // Speed mode
    [self.livePlayer setCacheParams:5.f maxTime:5.f]; // Smooth mode
    
    // Concatenate the pull URLs for playback.
    NSString *flvUrl = [NSString stringWithFormat:@"http://%@/live/%@.flv", PLAY_DOMAIN, streamName];
    NSString *hlsUrl = [NSString stringWithFormat:@"http://%@/live/%@.m3u8", PLAY_DOMAIN, streamName];
    NSString *rtmpUrl = [NSString stringWithFormat:@"rtmp://%@/live/%@", PLAY_DOMAIN, streamName];
    NSString *webrtcUrl = [NSString stringWithFormat:@"webrtc://%@/live/%@", PLAY_DOMAIN, streamName];
    
    // Start playing.
    [self.livePlayer startLivePlay:flvUrl];
    
    // Custom set fill mode (optional).
    [self.livePlayer setRenderFillMode:V2TXLiveFillModeFit];
    // Custom video rendering direction (optional).
    [self.livePlayer setRenderRotation:V2TXLiveRotation0];
    Note:
    The playback domain name PLAY_DOMAIN must be added as your own live streaming playback domain in the CSS console, and its CNAME record must be configured.
    Playback requires setting the License first; otherwise playback will fail with a black screen. The License only needs to be set globally once. If you have not obtained a License yet, you can apply for a free Trial Version License; the Official Version License must be purchased.
    [TXLiveBase setLicenceURL:LICENSEURL key:LICENSEURLKEY];
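The three delay-management presets shown above can be summarized as (minTime, maxTime) pairs passed to setCacheParams, in seconds:

```python
# (minTime, maxTime) pairs for V2TXLivePlayer's setCacheParams, in seconds,
# taken from the snippet above.
CACHE_PRESETS = {
    "auto":   (1.0, 5.0),  # adapts between latency and smoothness
    "speed":  (1.0, 1.0),  # lowest latency
    "smooth": (5.0, 5.0),  # most resistant to network jitter
}

def cache_params(mode: str):
    return CACHE_PRESETS[mode]
```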

    Step 3: The audience interacts via mic.

    1. Viewers who want to co-broadcast need to enter the TRTC room for real-time interaction with the anchor.
    // Enter the TRTC room and start streaming.
    - (void)enterRoomWithUserId:(NSString *)userId roomId:(NSString *)roomId {
        TRTCParams *params = [[TRTCParams alloc] init];
        // Take the room ID string as an example.
        params.strRoomId = roomId;
        params.userId = userId;
        // UserSig obtained from the business backend.
        params.userSig = @"userSig";
        // Replace with your SDKAppID.
        params.sdkAppId = 0;
        // Specify the anchor role.
        params.role = TRTCRoleAnchor;
        // Enable local audio and video capture.
        [self startLocalMedia];
        // Enter the room in an interactive live streaming scenario.
        [self.trtcCloud enterRoom:params appScene:TRTCAppSceneLIVE];
    }
    
    // Enable local video preview and audio capture.
    - (void)startLocalMedia {
        // Set video encoding parameters to determine the picture quality seen by remote users.
        TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
        encParam.videoResolution = TRTCVideoResolution_480_270;
        encParam.videoFps = 15;
        encParam.videoBitrate = 550;
        encParam.resMode = TRTCVideoResolutionModePortrait;
        [self.trtcCloud setVideoEncoderParam:encParam];
        
        // isFrontCamera can specify using the front/rear camera for video capture.
        [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.audiencePreviewView];
        // Here you can specify the audio quality, from low to high as SPEECH/DEFAULT/MUSIC.
        [self.trtcCloud startLocalAudio:TRTCAudioQualityDefault];
    }
    
    // Event callback for the result of entering the room.
    - (void)onEnterRoom:(NSInteger)result {
        if (result > 0) {
            // result indicates the time taken (in milliseconds) to join the room.
            NSLog(@"Enter room succeed!");
        } else {
            // result indicates the error code when you fail to enter the room.
            NSLog(@"Enter room failed!");
        }
    }
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    2. After successfully entering the room, the mic-connection audience starts subscribing to the anchor's audio and video streams.
    - (void)onUserAudioAvailable:(NSString *)userId available:(BOOL)available {
        // The remote user publishes/unpublishes their audio.
        // Under the automatic subscription mode, you do not need to do anything. The SDK will automatically play the remote user's audio.
    }
    
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
        // The remote user publishes/unpublishes the primary video.
        if (available) {
            // Subscribe to the remote user's video stream and bind the video rendering control.
            [self.trtcCloud startRemoteView:userId streamType:TRTCVideoStreamTypeBig view:self.remoteView];
        } else {
            // Unsubscribe to the remote user's video stream and release the rendering control.
            [self.trtcCloud stopRemoteView:userId streamType:TRTCVideoStreamTypeBig];
        }
    }
    
    - (void)onFirstVideoFrame:(NSString *)userId streamType:(TRTCVideoStreamType)streamType width:(int)width height:(int)height {
        // The SDK starts rendering the first frame of the local or remote user's video.
        if (![userId isEqualToString:@""]) {
            // Stop playing the CDN stream upon receiving the first frame of the anchor's video.
            [self.livePlayer stopPlay];
        }
    }
    Note:
    TRTC stream pulling startRemoteView can directly reuse the video rendering control previously used by CDN stream pulling setRenderView.
    To avoid video interruptions when switching between stream pullers, it is recommended to wait until the TRTC first frame callback onFirstVideoFrame is received before stopping the CDN stream pulling.
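The handoff described in this note can be sketched as a small state holder: the CDN player keeps running until the first TRTC frame of the anchor (a non-empty userId) arrives, and only then is CDN pulling stopped. StreamSwitcher is an illustrative name, not a TRTC API:

```python
class StreamSwitcher:
    """Tracks the CDN-to-TRTC handoff: keep CDN playback alive until the
    first remote TRTC video frame arrives, then stop the CDN player."""

    def __init__(self):
        self.cdn_playing = True
        self.trtc_first_frame = False

    def on_first_video_frame(self, user_id: str):
        # An empty userId means the local preview's first frame; only a
        # remote (anchor) frame makes it safe to stop CDN pulling.
        if user_id:
            self.trtc_first_frame = True
            self.cdn_playing = False
```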
    3. The anchor updates the publication of mixed media streams.
    // Event callback for the mic-connection audience's room entry.
    - (void)onRemoteUserEnterRoom:(NSString *)userId {
        if (![self.mixUserList containsObject:userId]) {
            [self.mixUserList addObject:userId];
        }
        [self updatePublishMediaToCDN];
    }
    
    // Update the publication of mixed media streams to the live streaming CDN.
    - (void)updatePublishMediaToCDN {
        NSDate *date = [NSDate dateWithTimeIntervalSinceNow:0];
        // Set the expiration time for the push URLs.
        NSTimeInterval time = [date timeIntervalSince1970] + (24 * 60 * 60);
        // Generate authentication information. The getSafeUrl method can be obtained in the CSS console - Domain Name Management - Push Configuration - Sample Code for Push URLs.
        NSString *secretParam = [self getSafeUrl:LIVE_URL_KEY streamName:self.streamName time:time];
        
        // The target URLs for media stream publication.
        TRTCPublishTarget* target = [[TRTCPublishTarget alloc] init];
        // The target URLs are set for relaying the mixed streams to CDN.
        target.mode = TRTCPublishMixStreamToCdn;
        TRTCPublishCdnUrl* cdnUrl = [[TRTCPublishCdnUrl alloc] init];
        // Construct push URLs (in RTMP format) to the live streaming service provider.
        cdnUrl.rtmpUrl = [NSString stringWithFormat:@"rtmp://%@/live/%@?%@", PUSH_DOMAIN, self.streamName, secretParam];
        // True means Tencent CSS push URLs, and false means third-party services.
        cdnUrl.isInternalLine = YES;
        NSMutableArray* cdnUrlList = [NSMutableArray array];
        // Multiple CDN push URLs can be added.
        [cdnUrlList addObject:cdnUrl];
        target.cdnUrlList = cdnUrlList;
        
        // Set media stream encoding output parameters.
        TRTCStreamEncoderParam* encoderParam = [[TRTCStreamEncoderParam alloc] init];
        encoderParam.audioEncodedSampleRate = 48000;
        encoderParam.audioEncodedChannelNum = 1;
        encoderParam.audioEncodedKbps = 50;
        encoderParam.audioEncodedCodecType = 0;
        encoderParam.videoEncodedWidth = 540;
        encoderParam.videoEncodedHeight = 960;
        encoderParam.videoEncodedFPS = 15;
        encoderParam.videoEncodedGOP = 2;
        encoderParam.videoEncodedKbps = 1300;
        
        TRTCStreamMixingConfig *config = [[TRTCStreamMixingConfig alloc] init];
        if (self.mixUserList.count) {
            NSMutableArray<TRTCUser *> *userList = [NSMutableArray array];
            NSMutableArray<TRTCVideoLayout *> *layoutList = [NSMutableArray array];
            for (int i = 0; i < MIN(self.mixUserList.count, 16); i++) {
                TRTCUser *user = [[TRTCUser alloc] init];
                // For integer room numbers, set user.intRoomId instead.
                user.strRoomId = self.roomId;
                user.userId = self.mixUserList[i];
                [userList addObject:user];
                TRTCVideoLayout *layout = [[TRTCVideoLayout alloc] init];
                if ([self.mixUserList[i] isEqualToString:self.userId]) {
                    // The layout for the anchor's video.
                    layout.rect = CGRectMake(0, 0, 540, 960);
                    layout.zOrder = 0;
                } else {
                    // The layout for the mic-connection audience's video.
                    layout.rect = CGRectMake(400, 5 + i * 245, 135, 240);
                    layout.zOrder = 1;
                }
                layout.fixedVideoUser = user;
                layout.fixedVideoStreamType = TRTCVideoStreamTypeBig;
                [layoutList addObject:layout];
            }
            // Specify the information for each input audio stream in the transcoding stream.
            config.audioMixUserList = [userList copy];
            // Specify the information of position, size, layer, and stream type for each video screen in the mixed display.
            config.videoLayoutList = [layoutList copy];
        }
        // Update the published media stream.
        [self.trtcCloud updatePublishMediaStream:self.taskId publishTarget:target encoderParam:encoderParam mixingConfig:config];
    }
    
    // Event callback for updating the media stream.
    - (void)onUpdatePublishMediaStream:(NSString *)taskId code:(int)code message:(NSString *)message extraInfo:(NSDictionary *)extraInfo {
        // When you call the publish media stream API (updatePublishMediaStream), the taskId you provide will be returned to you through this callback. It is used to identify which update request the callback belongs to.
        // code: Callback result. 0 means success and other values mean failure.
    }
    Note:
    To ensure continuous CDN playback without stream disconnection, you need to keep the media stream encoding output parameter encoderParam and the stream name streamName unchanged.
    Media stream encoding output parameters and mixed display layout parameters can be customized according to business needs. Currently, up to 16 channels of audio and video input are supported. If a user only provides audio, it will still be counted as one channel.
    Switching between audio only, audio and video, and video only is not supported within the same task.
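The mixed-display layout arithmetic used in updatePublishMediaToCDN can be expressed language-agnostically (canvas size and tile geometry taken from the snippet above; mix_layout is an illustrative name):

```python
def mix_layout(user_ids, anchor_id, canvas=(540, 960)):
    """Mirror of the layout loop above: the anchor fills the canvas at
    layer 0; each co-broadcasting viewer gets a 135x240 tile along the
    right edge at layer 1. At most 16 inputs are mixed."""
    layouts = []
    for i, uid in enumerate(user_ids[:16]):
        if uid == anchor_id:
            rect, z = (0, 0, canvas[0], canvas[1]), 0
        else:
            rect, z = (400, 5 + i * 245, 135, 240), 1
        layouts.append({"user": uid, "rect": rect, "zOrder": z})
    return layouts
```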
    4. The audience ends the mic-connection and exits the room, and the anchor updates the mixed stream task.
    
    // Set the player callback listener.
    [self.livePlayer setObserver:self];
    // The reusable TRTC video rendering control.
    [self.livePlayer setRenderView:self.remoteView];
    // Restart playing CDN media stream.
    [self.livePlayer startLivePlay:flvUrl];
    
    
    - (void)onVideoLoading:(id<V2TXLivePlayer>)player extraInfo:(NSDictionary *)extraInfo {
        // Video loading event.
    }
    
    // Video playback event.
    - (void)onVideoPlaying:(id<V2TXLivePlayer>)player firstPlay:(BOOL)firstPlay extraInfo:(NSDictionary *)extraInfo {
        if (firstPlay) {
            [self.trtcCloud stopAllRemoteView];
            [self.trtcCloud stopLocalAudio];
            [self.trtcCloud stopLocalPreview];
            [self.trtcCloud exitRoom];
        }
    }
    Note:
    To avoid video interruptions when switching the stream puller, it is recommended to wait for the player's video playback event onVideoPlaying before exiting the TRTC room.
    // Event callback for the mic-connection audience's room exit.
    - (void)onRemoteUserLeaveRoom:(NSString *)userId reason:(NSInteger)reason {
        if ([self.mixUserList containsObject:userId]) {
            [self.mixUserList removeObject:userId];
        }
        // The anchor updates the mixed stream task.
        [self updatePublishMediaToCDN];
    }
    
    // Event callback for updating the media stream.
    - (void)onUpdatePublishMediaStream:(NSString *)taskId code:(int)code message:(NSString *)message extraInfo:(NSDictionary *)extraInfo {
        // When you call the publish media stream API (updatePublishMediaStream), the taskId you provide will be returned to you through this callback. It is used to identify which update request the callback belongs to.
        // code: Callback result. 0 means success and other values mean failure.
    }

    Step 4: The anchor stops the live streaming and exits the room.

    - (void)exitRoom {
        // Stop all published media streams.
        [self.trtcCloud stopPublishMediaStream:@""];
        [self.trtcCloud stopLocalAudio];
        [self.trtcCloud stopLocalPreview];
        [self.trtcCloud exitRoom];
    }
    
    // Event callback for stopping media streams.
    - (void)onStopPublishMediaStream:(NSString *)taskId code:(int)code message:(NSString *)message extraInfo:(NSDictionary *)extraInfo {
        // When you call the stop publishing media stream API (stopPublishMediaStream), the taskId you provide will be returned to you through this callback. It is used to identify which stop request the callback belongs to.
        // code: Callback result. 0 means success and other values mean failure.
    }
    
    // Event callback for exiting the room.
    - (void)onExitRoom:(NSInteger)reason {
        if (reason == 0) {
            NSLog(@"Proactively call exitRoom to exit the room.");
        } else if (reason == 1) {
            NSLog(@"Removed from the current room by the server.");
        } else if (reason == 2) {
            NSLog(@"The current room is dissolved.");
        }
    }
    Note:
    To stop publishing media streams, fill in an empty string for taskId. This will stop all the media streams you have published.
    After all resources occupied by the SDK are released, the SDK fires the onExitRoom callback to notify you.

    Advanced Features

    The anchor initiates the cross-room competition.

    1. Either party initiates the cross-room competition.
    - (void)connectOtherRoom:(NSString *)roomId {
        NSMutableDictionary *jsonDict = [[NSMutableDictionary alloc] init];
        // For numeric room IDs, use the key roomId instead of strRoomId.
        [jsonDict setObject:roomId forKey:@"strRoomId"];
        [jsonDict setObject:self.userId forKey:@"userId"];
        NSData *jsonData = [NSJSONSerialization dataWithJSONObject:jsonDict options:NSJSONWritingPrettyPrinted error:nil];
        NSString *jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
        [self.trtcCloud connectOtherRoom:jsonString];
    }
    
    // Result callback for requesting cross-room mic-connection.
    - (void)onConnectOtherRoom:(NSString *)userId errCode:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
        // The user ID of the anchor in the other room you want to initiate the cross-room link-up.
        // Error code. ERR_NULL indicates the request is successful.
        // Error message.
    }
    Note:
    Both local and remote users participating in the cross-room competition must be in the anchor role and must have audio or video uplink capabilities.
    2. All users in both rooms will receive a callback indicating that the audio and video streams from the PK anchor in the other room are available.
    - (void)onUserAudioAvailable:(NSString *)userId available:(BOOL)available {
        // The remote user publishes/unpublishes their audio.
        // Under the automatic subscription mode, you do not need to do anything. The SDK will automatically play the remote user's audio.
    }
    
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
        // The remote user publishes/unpublishes the primary video.
        if (available) {
            // Subscribe to the remote user's video stream and bind the video rendering control.
            [self.trtcCloud startRemoteView:userId streamType:TRTCVideoStreamTypeBig view:self.remoteView];
        } else {
            // Unsubscribe to the remote user's video stream and release the rendering control.
            [self.trtcCloud stopRemoteView:userId streamType:TRTCVideoStreamTypeBig];
        }
    }
    3. Either party exits the cross-room competition.
    // Exit the cross-room mic-connection.
    [self.trtcCloud disconnectOtherRoom];
    
    // Result callback for exiting cross-room mic-connection.
    - (void)onDisconnectOtherRoom:(TXLiteAVError)errCode errMsg:(NSString *)errMsg {
    }
    Note:
    After disconnectOtherRoom() is called, you exit the cross-room competition with the anchors of all other rooms.
    Either the initiator or the receiver can call disconnectOtherRoom() to exit the cross-room competition.

    Integrate the third-party beauty features.

    TRTC supports integrating third-party beauty effect products. The following uses Tencent Effect as an example to demonstrate the process of integrating third-party beauty features.
    1. Integrate the Tencent Effect SDK and apply for an authorization license. For detailed steps, see Integration Preparation.
    2. Set the SDK material resource path (if any).
    NSString *beautyConfigPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    beautyConfigPath = [beautyConfigPath stringByAppendingPathComponent:@"beauty_config.json"];
    NSFileManager *localFileManager = [[NSFileManager alloc] init];
    BOOL isDir = YES;
    NSDictionary *beautyConfigJson = @{};
    if ([localFileManager fileExistsAtPath:beautyConfigPath isDirectory:&isDir] && !isDir) {
        NSString *beautyConfigJsonStr = [NSString stringWithContentsOfFile:beautyConfigPath encoding:NSUTF8StringEncoding error:nil];
        NSError *jsonError;
        NSData *objectData = [beautyConfigJsonStr dataUsingEncoding:NSUTF8StringEncoding];
        beautyConfigJson = [NSJSONSerialization JSONObjectWithData:objectData
                                                           options:NSJSONReadingMutableContainers
                                                             error:&jsonError];
    }
    NSDictionary *assetsDict = @{@"core_name": @"LightCore.bundle",
                                 @"root_path": [[NSBundle mainBundle] bundlePath],
                                 @"tnn_" @"beauty_config": beautyConfigJson};
    // Initialize the SDK: Width and height are the width and height of the texture, respectively.
    self.xMagicKit = [[XMagic alloc] initWithRenderSize:CGSizeMake(width,height) assetsDict:assetsDict];
    3. Set the video data callback for third-party beauty features. Pass the results of the beauty SDK processing each frame of data into the TRTC SDK for rendering processing.
    // Set the video data callback for third-party beauty features in the TRTC SDK.
    [self.trtcCloud setLocalVideoProcessDelegete:self pixelFormat:TRTCVideoPixelFormat_Texture_2D bufferType:TRTCVideoBufferType_Texture];
    
    #pragma mark - TRTCVideoFrameDelegate
    
    // Construct the YTProcessInput and pass it into the SDK for rendering processing.
    - (uint32_t)onProcessVideoFrame:(TRTCVideoFrame *_Nonnull)srcFrame dstFrame:(TRTCVideoFrame *_Nonnull)dstFrame {
        if (!self.xMagicKit) {
            // Initialize the XMagic SDK.
            [self buildBeautySDK:srcFrame.width and:srcFrame.height texture:srcFrame.textureId];
            self.heightF = srcFrame.height;
            self.widthF = srcFrame.width;
        }
        if (self.xMagicKit != nil && (self.heightF != srcFrame.height || self.widthF != srcFrame.width)) {
            self.heightF = srcFrame.height;
            self.widthF = srcFrame.width;
            [self.xMagicKit setRenderSize:CGSizeMake(srcFrame.width, srcFrame.height)];
        }
        YTProcessInput *input = [[YTProcessInput alloc] init];
        input.textureData = [[YTTextureData alloc] init];
        input.textureData.texture = srcFrame.textureId;
        input.textureData.textureWidth = srcFrame.width;
        input.textureData.textureHeight = srcFrame.height;
        input.dataType = kYTTextureData;
        YTProcessOutput *output = [self.xMagicKit process:input withOrigin:YtLightImageOriginTopLeft withOrientation:YtLightCameraRotation0];
        dstFrame.textureId = output.textureData.texture;
        return 0;
    }
    Note:
    Steps 1 and 2 vary among third-party beauty products, while Step 3 is a general, essential step for integrating any third-party beauty feature into TRTC.
    For scenario-specific integration guidelines of Tencent Effect, see Integrating Tencent Effect into TRTC SDK. For guidelines on integrating Tencent Effect independently, see Integrating Tencent Effect SDK.

    Dual-stream encoding mode

    When the dual-stream encoding mode is enabled, the current user's encoder outputs two video streams at the same time: a high-definition big stream and a low-definition small stream (but only one audio stream). Other users in the room can then choose to subscribe to either stream based on their network conditions or view sizes.
    1. Enable large-and-small-screen dual-stream encoding mode.
    - (void)enableDualStreamMode:(BOOL)enable {
        // Video encoding parameters for the small-screen stream (customizable).
        TRTCVideoEncParam *smallVideoEncParam = [[TRTCVideoEncParam alloc] init];
        smallVideoEncParam.videoResolution = TRTCVideoResolution_480_270;
        smallVideoEncParam.videoFps = 15;
        smallVideoEncParam.videoBitrate = 550;
        smallVideoEncParam.resMode = TRTCVideoResolutionModePortrait;
        [self.trtcCloud enableEncSmallVideoStream:enable withQuality:smallVideoEncParam];
    }
    Note:
    Enabling dual-stream encoding consumes more CPU and network bandwidth, so consider it for Mac, Windows, or high-performance tablets; it is not recommended for ordinary mobile devices.
    2. Choose the type of remote user's video stream to pull.
    // Optional video stream types when subscribing to a remote user's video stream.
    [self.trtcCloud startRemoteView:userId streamType:TRTCVideoStreamTypeBig view:view];
    
    // You can switch the size of the specified remote user's screen at any time.
    [self.trtcCloud setRemoteVideoStreamType:userId type:TRTCVideoStreamTypeSmall];
    Note:
    When the dual-stream encoding mode is enabled, you can specify the video stream type as TRTCVideoStreamTypeSmall with streamType to pull a low-quality small video for viewing.
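One possible policy for choosing between the big and small streams is to compare the render view's size and the available downlink against the small stream's parameters. This is a heuristic for illustration, not a TRTC API; the thresholds are assumptions:

```python
def pick_stream_type(view_height_px: int, downlink_kbps: float,
                     small_height: int = 270, small_bitrate_kbps: float = 550):
    """Heuristic: subscribe to the small stream when the render view is no
    taller than the small stream's resolution, or when bandwidth headroom
    is less than twice the small stream's bitrate."""
    if view_height_px <= small_height or downlink_kbps < 2 * small_bitrate_kbps:
        return "TRTCVideoStreamTypeSmall"
    return "TRTCVideoStreamTypeBig"
```

The returned value corresponds to the streamType constant passed to startRemoteView or setRemoteVideoStreamType.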

    View rendering control

    If your business involves switching display zones, you can use the TRTC SDK's APIs for updating the local preview and the remote user's video rendering controls.
    // Update local preview screen rendering control.
    [self.trtcCloud updateLocalView:view];
    
    // Update the remote user's video rendering control.
    [self.trtcCloud updateRemoteView:view streamType:TRTCVideoStreamTypeBig forUser:userId];
    Note:
    The parameter view refers to the target video rendering control. And streamType only supports TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub.

    Live Streaming Interactive Messages

    Live streaming interaction is particularly important in live streaming scenarios. Users interact with the anchor through like messages, gift messages, and bullet screen messages. The precondition for implementing the live interaction feature is to activate the Instant Messaging (IM) service and import the IM SDK. For detailed guidelines, see Voice Chat Room Integration Guide - Preparation for Integration.

    Like message

    1. The liker sends like-related custom group messages through the client. After the message is sent successfully, the business side renders the like effect locally.
    // Construct the like message body.
    NSDictionary *msgDict = @{
        @"type": @1,       // Like type
        @"likeCount": @10  // Number of likes
    };
    NSDictionary *dataDict = @{
        @"cmd": @"like_msg",
        @"msg": msgDict
    };
    NSError *error;
    NSData *data = [NSJSONSerialization dataWithJSONObject:dataDict options:0 error:&error];
    
    // Send custom group messages (it is recommended that like messages be set to low priority).
    [[V2TIMManager sharedInstance] sendGroupCustomMessage:data to:groupID priority:V2TIM_PRIORITY_LOW succ:^{
        // Like message sent successfully.
        // Render the like effect locally.
    } fail:^(int code, NSString *desc) {
        // Failed to send the like message.
    }];
    2. Other users in the room receive a callback for custom group messages, then parse the message and render the like effect.
    // Listen for custom group messages.
    [[V2TIMManager sharedInstance] addSimpleMsgListener:self];
    - (void)onRecvGroupCustomMessage:(NSString *)msgID groupID:(NSString *)groupID sender:(V2TIMGroupMemberInfo *)info customData:(NSData *)data {
        if (data.length > 0) {
            NSError *error;
            NSDictionary *dataDict = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
            if (!error) {
                NSString *command = dataDict[@"cmd"];
                NSDictionary *msgDict = dataDict[@"msg"];
                if ([command isEqualToString:@"like_msg"]) {
                    NSNumber *type = msgDict[@"type"];           // Like type.
                    NSNumber *likeCount = msgDict[@"likeCount"]; // Number of likes.
                    // Render the like effect based on the like type and count.
                }
            } else {
                NSLog(@"Parsing error: %@", error.localizedDescription);
            }
        }
    }
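The payload construction and parsing above can be shown as a round-trip sketch (field names taken from the snippets above; the function names are illustrative):

```python
import json

# Round-trip of the like-message payload ("cmd"/"msg" fields as above).
def build_like_msg(like_type: int, like_count: int) -> bytes:
    payload = {"cmd": "like_msg", "msg": {"type": like_type, "likeCount": like_count}}
    return json.dumps(payload).encode("utf-8")

def parse_custom_msg(data: bytes):
    payload = json.loads(data.decode("utf-8"))
    if payload.get("cmd") == "like_msg":
        msg = payload["msg"]
        return msg["type"], msg["likeCount"]
    return None  # not a like message
```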

    Gift messages

    1. The sender initiates a request to the business server. Upon completing the billing and settlement, the business server calls the REST API to send a custom message to the group.
    1.1 Request URL sample:
    https://xxxxxx/v4/group_open_http_svc/send_group_msg?sdkappid=88888888&identifier=admin&usersig=xxx&random=99999999&contenttype=json
    1.2 Request packet body sample:
    {
        "GroupId": "@TGS#12DEVUDHQ",
        "Random": 2784275388,
        "MsgPriority": "High", // The priority of the message. Gift messages should be set to high priority.
        "MsgBody": [
            {
                "MsgType": "TIMCustomElem",
                "MsgContent": {
                    // type: gift type; giftUrl: gift resource URL; giftName: gift name; giftCount: number of gifts.
                    "Data": "{\"cmd\": \"gift_msg\", \"msg\": {\"type\": 1, \"giftUrl\": \"xxx\", \"giftName\": \"xxx\", \"giftCount\": 1}}"
                }
            }
        ]
    }
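Because Data must be a JSON string, the inner gift message is serialized first and its quotes end up escaped inside the request body. A sketch of constructing such a body (values taken from the sample above):

```python
import json

# The inner gift message is serialized first; embedding it as the string
# value of "Data" is what produces the escaped quotes in the request body.
inner = {"cmd": "gift_msg",
         "msg": {"type": 1, "giftUrl": "xxx", "giftName": "xxx", "giftCount": 1}}
body = {
    "GroupId": "@TGS#12DEVUDHQ",
    "Random": 2784275388,
    "MsgPriority": "High",  # gift messages: high priority
    "MsgBody": [{
        "MsgType": "TIMCustomElem",
        "MsgContent": {"Data": json.dumps(inner)},
    }],
}
request_body = json.dumps(body)
```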
    2. Other users in the room receive a callback for custom group messages, then parse the message and render the gift effect.
    // Listen for custom group messages.
    [[V2TIMManager sharedInstance] addSimpleMsgListener:self];
    - (void)onRecvGroupCustomMessage:(NSString *)msgID groupID:(NSString *)groupID sender:(V2TIMGroupMemberInfo *)info customData:(NSData *)data {
        if (data.length > 0) {
            NSError *error;
            NSDictionary *dataDict = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
            if (!error) {
                NSString *command = dataDict[@"cmd"];
                NSDictionary *msgDict = dataDict[@"msg"];
                if ([command isEqualToString:@"gift_msg"]) {
                    NSNumber *type = msgDict[@"type"];           // Gift type.
                    NSNumber *giftCount = msgDict[@"giftCount"]; // Number of gifts.
                    NSString *giftUrl = msgDict[@"giftUrl"];     // Gift resource URL.
                    NSString *giftName = msgDict[@"giftName"];   // Gift name.
                    // Render gift effects based on the gift type, count, resource URL, and name.
                }
            } else {
                NSLog(@"Parsing error: %@", error.localizedDescription);
            }
        }
    }

    Bullet screen messages

    Live showrooms usually have text-based bullet screen message interactions. This can be achieved by sending and receiving regular group chat text messages via IM.
    // Send public screen bullet screen messages.
    [[V2TIMManager sharedInstance] sendGroupTextMessage:text to:groupID priority:V2TIM_PRIORITY_NORMAL succ:^{
        // Bullet screen message sent successfully.
        // Display the message text locally.
    } fail:^(int code, NSString *desc) {
        // Failed to send the bullet screen message.
    }];
    
    // Receive public screen bullet screen messages.
    [[V2TIMManager sharedInstance] addSimpleMsgListener:self];
    - (void)onRecvGroupTextMessage:(NSString *)msgID groupID:(NSString *)groupID sender:(V2TIMGroupMemberInfo *)info text:(NSString *)text {
        // Render bullet screen messages based on the sender info and message text.
    }
    Note:
    The recommended priority setting is as follows. Gift messages should be set to high priority. Bullet screen messages should be set to medium priority. Like messages should be set to low priority.
    Sending group chat messages from the client will not trigger the message reception callback. Only other users within the group can receive them.

    Exception Handling

    Exception error handling

    When the TRTC SDK encounters an unrecoverable error, the error will be thrown in the onError callback. For details, see Error Code Table.
    1. UserSig related
    UserSig verification failure will lead to room-entering failure. You can use the UserSig tool for verification.
    ERR_TRTC_INVALID_USER_SIG (-3320): The room entry parameter userSig is incorrect. Check whether TRTCParams.userSig is empty.
    ERR_TRTC_USER_SIG_CHECK_FAILED (-100018): UserSig verification failed. Check whether TRTCParams.userSig is filled in correctly or has expired.
    2. Room entry and exit related
    If room entry fails, first verify that the room entry parameters are correct. The room entry and exit APIs must be called in a paired manner: even if room entry fails, the room exit API must still be called.
    ERR_TRTC_CONNECT_SERVER_TIMEOUT (-3308): The room entry request timed out. Check whether your internet connection is lost or a VPN is enabled. You may also switch to 4G for testing.
    ERR_TRTC_INVALID_SDK_APPID (-3317): The room entry parameter sdkAppId is incorrect. Check whether TRTCParams.sdkAppId is empty.
    ERR_TRTC_INVALID_ROOM_ID (-3318): The room entry parameter roomId is incorrect. Check whether TRTCParams.roomId or TRTCParams.strRoomId is empty. Note that roomId and strRoomId cannot be used interchangeably.
    ERR_TRTC_INVALID_USER_ID (-3319): The room entry parameter userId is incorrect. Check whether TRTCParams.userId is empty.
    ERR_TRTC_ENTER_ROOM_REFUSED (-3340): The room entry request was denied. Check whether enterRoom was called consecutively to enter rooms with the same ID.
    3. Device related
    Monitor errors for the relevant devices and prompt the user via the UI when they occur.
    ERR_CAMERA_START_FAIL (-1301): Failed to open the camera. For example, if the camera's driver on a Windows or macOS device is abnormal, try disabling then re-enabling the device, restarting the machine, or updating the driver.
    ERR_MIC_START_FAIL (-1302): Failed to open the mic. For example, if the mic's driver on a Windows or macOS device is abnormal, try disabling then re-enabling the device, restarting the machine, or updating the driver.
    ERR_CAMERA_NOT_AUTHORIZED (-1314): The camera is unauthorized. This typically occurs on mobile devices and may be because the user denied the permission.
    ERR_MIC_NOT_AUTHORIZED (-1317): The mic is unauthorized. This typically occurs on mobile devices and may be because the user denied the permission.
    ERR_CAMERA_OCCUPY (-1316): The camera is occupied. Try a different camera.
    ERR_MIC_OCCUPY (-1319): The mic is occupied. This occurs when, for example, the user is currently in a call on the mobile device.

    Issues with the remote mirror mode not functioning properly.

    In TRTC, video mirror settings are divided into local preview mirroring (setLocalRenderParams) and video encoding mirroring (setVideoEncoderMirror). These separately affect the mirror effect of the local preview and of the encoded output (the latter affects remote viewers and cloud recordings). If you expect the mirror effect seen in the local preview to also take effect on the remote viewer's end, configure both as follows.
    // Set the rendering parameters for the local video.
    TRTCRenderParams *params = [[TRTCRenderParams alloc] init];
    params.mirrorType = TRTCVideoMirrorTypeEnable; // Video mirror mode
    params.fillMode = TRTCVideoFillMode_Fill; // Video fill mode
    params.rotation = TRTCVideoRotation_0; // Video rotation angle
    [self.trtcCloud setLocalRenderParams:params];
    // Set the video mirror mode for the encoder output.
    [self.trtcCloud setVideoEncoderMirror:YES];

    Issues with camera scale, focus, and switch.

    In live showroom scenarios, the anchor may need to make custom adjustments to the camera. The TRTC SDK's device management class provides APIs for these needs.
    1. Query and set the zoom factor for the camera.
    // Get the maximum zoom factor for the camera (only for mobile devices).
    CGFloat zoomRatio = [[self.trtcCloud getDeviceManager] getCameraZoomMaxRatio];
    // Set the zoom factor for the camera (only for mobile devices).
    // Value range is 1 - 5. 1 means the furthest field of view (normal lens), and 5 means the closest field of view (zoom lens). The maximum recommended value is 5, exceeding this may result in blurry video.
    [[self.trtcCloud getDeviceManager] setCameraZoomRatio:zoomRatio];
    2. Set the focus feature and position of the camera.
    // Enable or disable the camera's autofocus feature (only for mobile devices).
    [[self.trtcCloud getDeviceManager] enableCameraAutoFocus:NO];
    // Set the focus position of the camera (only for mobile devices).
    // The precondition for using this API is to first disable the autofocus feature using enableCameraAutoFocus.
    [[self.trtcCloud getDeviceManager] setCameraFocusPosition:CGPointMake(x, y)];
    3. Determine and switch to front or rear cameras.
    // Determine if the current camera is the front camera (only for mobile devices).
    BOOL isFrontCamera = [[self.trtcCloud getDeviceManager] isFrontCamera];
    // Switch to front or rear cameras (only for mobile devices).
    // Passing YES switches to the front camera; passing NO switches to the rear camera.
    [[self.trtcCloud getDeviceManager] switchCamera:!isFrontCamera];
    
    Contact Us

    Contact our sales team or business advisors for help with your business needs.

    Technical Support

    Open a ticket if you're looking for further assistance. Our ticket system is available 24/7.

    7x24 Phone Support