Last updated: 2024-07-18 14:26:14

    Business Process

    This document summarizes the common business processes in one-to-one audio and video calls to help you better understand how the entire scenario is implemented.
    Audio Call Process
    Video Call Process
    The following diagram shows the sequence of a one-to-one audio call, including calling, answering, talking, and hanging up.
    
    
    
    The following diagram shows the sequence of a one-to-one video call, including calling, answering, talking, and hanging up.
    
    
    

    Integration Preparations

    Step 1: activate the service

    One-to-one audio and video call scenarios are usually built on two paid PaaS services from the cloud platform: Instant Messaging (IM) and Real-Time Communication (TRTC).
    1. First, log in to the TRTC Console to create an application. An IM trial application with the same SDKAppID as the TRTC application is created automatically in the Instant Messaging (IM) Console, and the two can share the same account and authentication system. You can later upgrade the TRTC or IM application version as needed; for example, the advanced versions unlock more value-added features and services.
    
    
    
    Note:
    It is recommended to create two separate applications for testing and production environments. Each account (UIN) is provided with 10,000 minutes of free usage per month within one year.
    The TRTC monthly package is divided into Trial Version (by default), Basic Version, and Professional Version, which can unlock different value-added features and services. For details, see Version Features and Monthly Package Description.
    2. Once the application is created, you can find its basic information under Application Management > Application Overview. Store the SDKAppID and SDKSecretKey for later use, and keep the key confidential to prevent unauthorized traffic usage.
    
    
    

    Step 2: import SDK

    The TRTC SDK and IM SDK are available on CocoaPods. It is recommended to integrate them through CocoaPods.
    1. Install CocoaPods
    Enter the following command in a terminal window (you need to install Ruby on your Mac first):
    sudo gem install cocoapods
    2. Create Podfile File
    Go to the project directory, and enter the following command. A Podfile file will then be created in the project directory.
    pod init
    3. Edit Podfile File
    Choose the appropriate version for your project and edit the Podfile.
    platform :ios, '8.0'
    target 'App' do
    
    # TRTC Lite Version
    # The installation package has the smallest incremental size. It supports only two features: Real-Time Communication (TRTC) and live stream playback with TXLivePlayer.
    pod 'TXLiteAVSDK_TRTC', :podspec => 'https://liteav.sdk.qcloud.com/pod/liteavsdkspec/TXLiteAVSDK_TRTC.podspec'
    # Add the IM SDK
    pod 'TXIMSDK_Plus_iOS'
    # pod 'TXIMSDK_Plus_iOS_XCFramework'
    # pod 'TXIMSDK_Plus_Swift_iOS_XCFramework'
    # If you need to add the Quic plugin, please uncomment the next line.
    # Note: This plugin must be used with the Objective-C edition or XCFramework edition of the IM SDK, and the plugin version number must match the IM SDK version number.
    # pod 'TXIMSDK_Plus_QuicPlugin'
    
    end
    4. Update and install the SDK
    Enter the following command in the terminal window to update the local repository files and install the SDK.
    pod install
    Or use the following command to update the local repository.
    pod update
    After the pod command completes, a project file with the .xcworkspace suffix that integrates the SDK is generated. Double-click it to open the project.
    Note:
    If the pod search fails, try updating the local CocoaPods repo cache. The update commands are as follows.
    pod setup
    pod repo update
    rm ~/Library/Caches/CocoaPods/search_index.json
    Besides CocoaPods integration, you can also choose to download the SDK and manually import it. For details, see Manually Integrating the TRTC SDK and Manually Integrating the IM SDK.

    Step 3: project configuration

    1. In one-to-one audio and video call scenarios, the TRTC SDK and IM SDK need microphone and camera permissions. Add the following entries to your app's Info.plist; their values are the prompts shown in the system permission dialogs (a runtime permission-request sketch is provided after these configuration steps):
    Privacy - Microphone Usage Description: enter a prompt specifying the purpose of microphone use.
    Privacy - Camera Usage Description: enter a prompt specifying the purpose of camera use.
    
    
    
    2. If you need your app to continue running certain features in the background, go to Xcode, select your current project, set Background Modes under Capabilities to ON, and check Audio, AirPlay, and Picture in Picture, as shown below:
    
    
    
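    Besides the Info.plist entries, you may want to trigger the permission dialogs at a moment you control (for example, before the first call) rather than at first capture. The snippet below is only an illustrative sketch using the system AVFoundation APIs; it is not part of the TRTC or IM SDK integration, and the method name requestCallPermissions is just an example.
    #import <AVFoundation/AVFoundation.h>

    - (void)requestCallPermissions {
        // Microphone permission (corresponds to Privacy - Microphone Usage Description)
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
            NSLog(@"Microphone permission %@", granted ? @"granted" : @"denied");
        }];
        // Camera permission (corresponds to Privacy - Camera Usage Description)
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            NSLog(@"Camera permission %@", granted ? @"granted" : @"denied");
        }];
    }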

    Step 4: authentication credential

    UserSig is a security signature designed by the cloud platform to prevent attackers from accessing your cloud account. Cloud services such as Real-Time Communication (TRTC) and Instant Messaging (IM) adopt this security protection mechanism. Authentication is required for TRTC upon entering a room, and for IM during login.
    Debugging and testing stage: UserSig can be generated through Client Example Code and Console Access, which are only used for debugging and testing.
    Production stage: It is recommended to use the server computing UserSig solution, which has a higher security level and helps prevent the client from being decompiled and reversed, to avoid the risk of key leakage.
    The specific implementation process is as follows:
    1. Before calling the initialization API of the SDK, your app must first request UserSig from your server.
    2. Your server generates the UserSig based on the SDKAppID and UserID.
    3. The server returns the generated UserSig to your app.
    4. Your app sends the obtained UserSig to the SDK through a specific API.
    5. The SDK submits the SDKAppID + UserID + UserSig to the cloud server for verification.
    6. The cloud platform verifies the validity of the UserSig.
    7. Once the verification is passed, it will provide instant communication services to the IM SDK and real-time audio and video services to the TRTC SDK.
    
    
    
    Note:
    The method of generating UserSig locally during the debugging and testing stage is not recommended for the online environment because it may be easily decompiled and reversed, causing key leakage.
    We provide server computation source code for UserSig in multiple programming languages (Java/GO/PHP/Nodejs/Python/C#/C++). For details, see Server Computation of UserSig.
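    As a supplement to step 1 of the flow above, the sketch below shows one way the client might request a UserSig from your own business server before initializing the SDKs. The URL, query parameter, and the userSig response field are assumptions for illustration only; use whatever interface your backend actually exposes.
    - (void)fetchUserSigForUser:(NSString *)userID completion:(void (^)(NSString *userSig, NSError *error))completion {
        // Hypothetical endpoint: your server computes UserSig from the SDKAppID, UserID, and SDKSecretKey
        NSString *urlString = [NSString stringWithFormat:@"https://your.business.server/usersig?userId=%@", userID];
        NSURL *url = [NSURL URLWithString:urlString];
        NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (error || !data) {
                completion(nil, error);
                return;
            }
            // Assumed response format: {"userSig": "..."}
            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
            completion(json[@"userSig"], nil);
        }];
        [task resume];
    }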

    Step 5: initialize the SDK

    1. Initialize IM SDK and Add Event Listeners
    // Obtain the SDKAppID from the Instant Messaging (IM) console.
    // Create the IM SDK config object (you can adjust the log level and other settings on it).
    V2TIMSDKConfig *config = [[V2TIMSDKConfig alloc] init];
    // Add a V2TIMSDKListener event listener. Here self conforms to the V2TIMSDKListener protocol. This step can be skipped if you do not need to listen for events from the IM SDK.
    [[V2TIMManager sharedInstance] addIMSDKListener:self];
    // Initialize the IM SDK. After calling this API, you can immediately call the log-in API.
    [[V2TIMManager sharedInstance] initSDK:sdkAppID config:config];
    
    // After the SDK is initialized, it will trigger various events, such as connection status and expiration of log-in credentials
    - (void)onConnecting {
    NSLog(@"The IM SDK is connecting to the cloud server");
    }
    
    - (void)onConnectSuccess {
    NSLog(@"The IM SDK has successfully connected to the cloud server");
    }
    
    // Remove event listener
    // Here self conforms to the V2TIMSDKListener protocol
    [[V2TIMManager sharedInstance] removeIMSDKListener:self];
    // Deinitialize the SDK
    [[V2TIMManager sharedInstance] unInitSDK];
    Note:
    If the lifecycle of your application is consistent with the SDK lifecycle, you do not need to deinitialize before exiting the application. However, if you only initialize the SDK when entering a specific interface and no longer use it after exiting that interface, you can deinitialize the SDK.
    2. Create TRTC SDK Instances and Set Event Listeners
    // Create TRTC SDK instance (single instance pattern)
    self.trtcCloud = [TRTCCloud sharedInstance];
    // Set event listeners
    self.trtcCloud.delegate = self;
    
    // Notifications from various SDK events (e.g., error codes, warning codes, audio and video status parameters, etc.)
    - (void)onError:(TXLiteAVError)errCode errMsg:(nullable NSString *)errMsg extInfo:(nullable NSDictionary *)extInfo {
    NSLog(@"%d: %@", errCode, errMsg);
    }
    
    - (void)onWarning:(TXLiteAVWarning)warningCode warningMsg:(nullable NSString *)warningMsg extInfo:(nullable NSDictionary *)extInfo {
    NSLog(@"%d: %@", warningCode, warningMsg);
    }
    
    // Remove event listener
    self.trtcCloud.delegate = nil;
    // Destroy TRTC SDK instance (single instance pattern)
    [TRTCCloud destroySharedIntance];
    Note:
    It is recommended to listen to SDK event notifications. Perform log printing and handling for some common errors. For details, see Error Code Table.

    Integration Process

    Step 1: log in

    After the IM SDK is initialized, call the SDK login API to authenticate your account identity and gain permission to use its features. Make sure login succeeds before using any other features; otherwise, features may malfunction or be unavailable. If you only need TRTC's audio and video services, you can skip this step.

    Sequence Diagram

    
    
    

    Log in operation

    // Log in: userID is defined by your business; userSig is generated as described in the authentication credential step above
    [[V2TIMManager sharedInstance] login:userID userSig:userSig succ:^{
    NSLog(@"success");
    } fail:^(int code, NSString *desc) {
    // The following error codes indicate an expired userSig, and you need to generate a new one for log-in again.
    // 1. ERR_USER_SIG_EXPIRED(6206).
    // 2. ERR_SVR_ACCOUNT_USERSIG_EXPIRED(70001).
    // Note: Do not call the log-in API in case of other error codes. Otherwise, the IM SDK may enter an infinite loop of log-in.
    NSLog(@"failure, code:%d, desc:%@", code, desc);
    }];

    Log out operation

    // Log out
    [[V2TIMManager sharedInstance] logout:^{
    NSLog(@"success");
    } fail:^(int code, NSString *desc) {
    NSLog(@"failure, code:%d, desc:%@", code, desc);
    }];
    Note:
    If the lifecycle of your application matches that of the IM SDK, you do not need to log out before exiting the application. However, if you only use the IM SDK after entering specific interfaces and no longer use it after exiting those interfaces, you can log out and deinitialize the IM SDK.

    Step 2: call

    Sequence Diagram

    
    
    

    Initiate Call

    1. Caller's local video preview (only for video calls; ignore this step for audio calls)
    - (void)setupTRTC {
    // Set video encoding parameters to determine the picture quality seen by remote users
    TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
    encParam.videoResolution = TRTCVideoResolution_960_540;
    encParam.videoFps = 15;
    encParam.videoBitrate = 850;
    encParam.resMode = TRTCVideoResolutionModePortrait;
    [self.trtcCloud setVideoEncoderParam:encParam];
    // Enable local camera preview (you can specify to use the front/rear camera for video capture)
    [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.previewView];
    }
    Note:
    You can set the video encoding parameters TRTCVideoEncParam according to business needs. For the best combinations of resolutions and bitrates for each tier, see Resolution and Bitrate Reference Table.
    Call the above APIs before enterRoom. The SDK will only enable the camera preview and will wait until you call enterRoom to start local video streaming.
    2. Caller sends call invitation signaling
    // Construct custom data
    NSDictionary *dic = @{
    @"cmd": @"av_call",
    @"msg": @{
    // Specify the call type (video call, audio call)
    @"callType": @"videoCall",
    // Specify the TRTC room ID (caller can generate it randomly)
    @"roomId": @"xxxRoomId",
    },
    };
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:dic
    options:NSJSONWritingPrettyPrinted
    error:nil];
    if (jsonData) {
    NSString *jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    // Send call invitation signaling
    [[V2TIMManager sharedInstance] invite:self.receiverId data:jsonString onlineUserOnly:false offlinePushInfo:self.offlinePushInfo timeout:self.timeout succ:^{
    // Successfully send call invitation signaling
    // Render call page, play call ringtone
    } fail:^(int code, NSString *desc) {
    // Failed to send call invitation signaling
    // Prompt call failure, you can try to retry
    }];
    }
    Note:
    In audio and video call scenarios, it is usually necessary to configure offline push information offlinePushInfo in the invitation signaling. For more details, see Offline Push Message.
    It is recommended to set a reasonable timeout parameter (in seconds) in the invitation signaling. The SDK performs timeout detection, so the call is automatically hung up after it times out.
    3. Callee receives the call invitation notification
    [[V2TIMManager sharedInstance] addSignalingListener:self];
    
    #pragma mark - V2TIMSignalingListener
    
    // Callee receives the call request. The inviteID is the request ID, and inviter is the caller ID
    - (void)onReceiveNewInvitation:(NSString *)inviteID inviter:(NSString *)inviter groupID:(NSString *)groupID inviteeList:(NSArray<NSString *> *)inviteeList data:(NSString *)data {
    if (data && ![data isEqualToString:@""]) {
    NSData *jsonData = [data dataUsingEncoding:NSUTF8StringEncoding];
    NSDictionary *dictionary = [NSJSONSerialization JSONObjectWithData:jsonData
    options:NSJSONReadingMutableContainers
    error:nil];
    if (dictionary) {
    NSString *command = dictionary[@"cmd"];
    NSDictionary *msg = dictionary[@"msg"];
    if ([command isEqualToString:@"av_call"]) {
    NSString *callType = msg[@"callType"];
    NSString *roomId = msg[@"roomId"];
    // Render call page, play call ringtone
    }
    }
    }
    }
    Note:
    The caller initiates a call request. When the callee receives the call request, the business side needs to render the call page and play the call ringtone on its own; a minimal ringtone sketch is shown below.
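    The ringtone itself is outside the scope of the TRTC and IM SDKs. Below is a minimal sketch using AVAudioPlayer, assuming an AVFoundation import, a bundled phone_ringing.mp3 file, and a ringtonePlayer property declared by your business code.
    @property (nonatomic, strong) AVAudioPlayer *ringtonePlayer;

    - (void)startRingtone {
        NSURL *ringtoneURL = [[NSBundle mainBundle] URLForResource:@"phone_ringing" withExtension:@"mp3"];
        if (!ringtoneURL) { return; }
        self.ringtonePlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:ringtoneURL error:nil];
        // Loop until the call is answered, cancelled, rejected, or times out
        self.ringtonePlayer.numberOfLoops = -1;
        [self.ringtonePlayer play];
    }

    - (void)stopRingtone {
        [self.ringtonePlayer stop];
        self.ringtonePlayer = nil;
    }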
    4. Callee's local video preview (only for video calls; ignore this step for audio calls)
    if ([callType isEqualToString:@"videoCall"]) {
    // Set video encoding parameters to determine the picture quality seen by remote users
    TRTCVideoEncParam *encParam = [[TRTCVideoEncParam alloc] init];
    encParam.videoResolution = TRTCVideoResolution_960_540;
    encParam.videoFps = 15;
    encParam.videoBitrate = 850;
    encParam.resMode = TRTCVideoResolutionModePortrait;
    [self.trtcCloud setVideoEncoderParam:encParam];
    
    // Enable local camera preview (you can specify to use the front/rear camera for video capture)
    [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.previewView];
    }

    Cancel Call

    1. Caller cancels the call request
    [[V2TIMManager sharedInstance] cancel:inviteId data:data succ:^{
    // Successfully cancel the call request
    // Terminate the call page, and stop the call ringtone
    } fail:^(int code, NSString *desc) {
    // Failed to cancel call request
    // Prompt cancel failure, and you can retry
    }];
    2. Callee receives the cancellation notification
    #pragma mark - V2TIMSignalingListener
    
    - (void)onInvitationCancelled:(NSString *)inviteID inviter:(NSString *)inviter data:(NSString *)data {
    // Terminate the call page, and stop the call ringtone
    }

    Call Timeout

    Both the caller and the callee receive a timeout notification. Each side should terminate the call page and stop the call ringtone.
    #pragma mark - V2TIMSignalingListener
    
    - (void)onInvitationTimeout:(NSString *)inviteID inviteeList:(NSArray<NSString *> *)inviteeList {
    // Prompt call timeout. Terminate the call page, and stop the call ringtone
    }

    Step 3: answer

    Answer signaling

    1. Callee sends answer signaling
    [[V2TIMManager sharedInstance] accept:inviteId data:data succ:^{
    // Answer successfully, render the call page, and stop the call ringtone
    if ([callType isEqualToString:@"videoCall"]) {
    // Start video call
    [self startVideoCall];
    } else {
    // Start audio call
    [self startAudioCall];
    }
    } fail:^(int code, NSString *desc) {
    // Answer failed, prompt for exception or retry
    }];
    2. Caller receives answer notification
    #pragma mark - V2TIMSignalingListener
    
    - (void)onInviteeAccepted:(NSString *)inviteID invitee:(NSString *)invitee data:(NSString *)data {
    if ([self.callType isEqualToString:@"videoCall"]) {
    // Start video call
    [self startVideoCall];
    } else {
    // Start audio call
    [self startAudioCall];
    }
    }

    Audio Call

    1. Both caller and callee enter the same TRTC room to start an audio call.
    - (void)startAudioCall {
    TRTCParams *params = [[TRTCParams alloc] init];
    // TRTC application ID, obtained from the console
    params.sdkAppId = SDKAPPID;
    // TRTC authentication credential, generated on the server
    params.userSig = USERSIG;
    // Take the string-type room ID as an example
    params.strRoomId = self.roomId;
    // User ID; it is recommended to keep it consistent with the IM user ID
    params.userId = self.userId;
    [self.trtcCloud startLocalAudio:TRTCAudioQualitySpeech];
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneAudioCall];
    }
    Note:
    In audio call mode, the TRTC room entry scenario should select TRTCAppSceneAudioCall, and there is no need to specify a room entry role TRTCRoleType.
    Starting local audio capture startLocalAudio allows you to set audio quality parameters at the same time. For audio call modes, it is recommended to select TRTCAudioQualitySpeech.
    Under the SDK's default auto subscription mode, after a user enters a room, they will immediately receive the audio stream from that room, which will be automatically decoded and played without manual pulling.
    2. The room entry result notification indicates the call status.
    // Mark whether the call is in progress
    @property (nonatomic, assign) BOOL isOnCalling;
    
    #pragma mark - TRTCCloudDelegate
    
    // Event callback for the result of entering the room
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // Room entry successful, indicates that the call is in progress
    self.isOnCalling = YES;
    } else {
    // Failed to enter the room, prompt for call exception
    self.isOnCalling = NO;
    }
    }

    Video Call

    1. Both caller and callee enter the same TRTC room to start a video call.
    - (void)startVideoCall {
    TRTCParams *params = [[TRTCParams alloc] init];
    // TRTC application ID, obtained from the console
    params.sdkAppId = SDKAPPID;
    // TRTC authentication credential, generated on the server
    params.userSig = USERSIG;
    // Take the string-type room ID as an example
    params.strRoomId = self.roomId;
    // User ID; it is recommended to keep it consistent with the IM user ID
    params.userId = self.userId;
    [self.trtcCloud startLocalAudio:TRTCAudioQualitySpeech];
    [self.trtcCloud enterRoom:params appScene:TRTCAppSceneVideoCall];
    }
    Note:
    In video call mode, the TRTC room entry scenario should use TRTCAppSceneVideoCall, and there's no need to specify the room entry role TRTCRoleType.
    Starting local audio capture startLocalAudio allows you to set audio quality parameters at the same time. For video call modes, it is recommended to select TRTCAudioQualitySpeech.
    In the SDK's default automatic subscription mode, audio is automatically decoded and played back, while video requires manual invocation of startRemoteView to pull and render the remote video stream.
    2. The room entry result notification indicates the call status. Pull the remote video stream.
    // Mark whether the call is in progress
    @property (nonatomic, assign) BOOL isOnCalling;
    
    #pragma mark - TRTCCloudDelegate
    
    // Event callback for the result of entering the room
    - (void)onEnterRoom:(NSInteger)result {
    if (result > 0) {
    // Room entry successful, indicates that the call is in progress
    self.isOnCalling = YES;
    } else {
    // Failed to enter the room, prompt for call exception
    self.isOnCalling = NO;
    }
    }
    
    // Pull remote video stream
    - (void)onUserVideoAvailable:(NSString *)userId available:(BOOL)available {
    // The remote user publishes/unpublishes the primary video
    if (available) {
    // Subscribe to the remote user's video stream and bind the video rendering control
    [self.trtcCloud startRemoteView:userId streamType:TRTCVideoStreamTypeBig view:self.previewView];
    } else {
    // Unsubscribe from the remote user's video stream and release the rendering control
    [self.trtcCloud stopRemoteView:userId streamType:TRTCVideoStreamTypeBig];
    }
    }

    Step 4: reject call

    Sequence Diagram

    
    
    

    Proactive Rejection

    1. Callee sends rejection signal
    NSDictionary *dic = @{
    @"cmd": @"av_call",
    @"msg": @{
    // Specify the call type (video call, audio call)
    @"callType": @"videoCall",
    // Specify rejection type (Proactive Rejection, Busy Line Rejection)
    @"reason": @"active",
    },
    };
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:dic
    options:NSJSONWritingPrettyPrinted
    error:nil];
    if (jsonData) {
    NSString *jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    [[V2TIMManager sharedInstance] reject:self.inviteId data:jsonString succ:^{
    // Rejection successful. Terminate the call page, and stop the call ringtone
    } fail:^(int code, NSString *desc) {
    // Rejection failed, prompt for exception or retry
    }];
    }
    2. Caller receives rejection notification
    #pragma mark - V2TIMSignalingListener
    
    - (void)onInviteeRejected:(NSString *)inviteID invitee:(NSString *)invitee data:(NSString *)data {
    if (data && ![data isEqualToString:@""]) {
    NSData *jsonData = [data dataUsingEncoding:NSUTF8StringEncoding];
    NSDictionary *dictionary = [NSJSONSerialization JSONObjectWithData:jsonData
    options:NSJSONReadingMutableContainers
    error:nil];
    if (dictionary) {
    NSString *command = dictionary[@"cmd"];
    NSDictionary *msg = dictionary[@"msg"];
    if ([command isEqualToString:@"av_call"]) {
    NSString *reason = msg[@"reason"];
    if ([reason isEqualToString:@"active"]) {
    // Prompt that the other party rejects call
    } else if ([reason isEqualToString:@"busy"]) {
    // Prompt that the other party is busy
    }
    // Terminate the call page, and stop the call ringtone
    }
    }
    }
    }

    Busy Line Rejection

    When the callee receives a new call invitation while a local call is already in progress, the callee automatically rejects the new invitation.
    - (void)onReceiveNewInvitation:(NSString *)inviteID inviter:(NSString *)inviter groupID:(NSString *)groupID inviteeList:(NSArray<NSString *> *)inviteeList data:(NSString *)data {
    if (data && ![data isEqualToString:@""]) {
    NSData *jsonData = [data dataUsingEncoding:NSUTF8StringEncoding];
    NSDictionary *dictionary = [NSJSONSerialization JSONObjectWithData:jsonData
    options:NSJSONReadingMutableContainers
    error:nil];
    if (dictionary) {
    NSString *command = dictionary[@"cmd"];
    NSDictionary *msg = dictionary[@"msg"];
    if ([command isEqualToString:@"av_call"] && self.isOnCalling) {
    NSDictionary *dic = @{
    @"cmd": @"av_call",
    @"msg": @{
    // Specify the call type (video call, audio call)
    @"callType": @"videoCall",
    // Specify rejection type (Proactive Rejection, Busy Line Rejection)
    @"reason": @"busy",
    },
    };
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:dic
    options:NSJSONWritingPrettyPrinted
    error:nil];
    if (jsonData) {
    NSString *jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    // Local call is in progress, and sends busy line rejection signal
    [[V2TIMManager sharedInstance] reject:inviteID data:jsonString succ:^{
    // Busy line rejection successful
    } fail:^(int code, NSString *desc) {
    // Busy line rejection failed
    }];
    }
    }
    }
    }
    }
    Note:
    Both proactive rejection and busy line rejection use the reject signaling for implementation, but it's important to distinguish them through the reason field of the custom data in the signaling.

    Step 5: hang up

    Sequence Diagram

    
    
    

    Hang Up Call

    1. Either party exits the room and resets the local call status.
    - (void)hangup {
    [self.trtcCloud stopLocalAudio];
    [self.trtcCloud stopLocalPreview];
    [self.trtcCloud exitRoom];
    }
    
    #pragma mark - TRTCCloudDelegate
    
    - (void)onExitRoom:(NSInteger)reason {
    // Successfully exited the room and hung up the call
    self.isOnCalling = NO;
    }
    2. When the other party receives the notification that the remote side has exited the room, it exits the room locally as well and resets the call status.
    #pragma mark - TRTCCloudDelegate
    
    - (void)onRemoteUserLeaveRoom:(NSString *)userId reason:(NSInteger)reason {
    [self hangup];
    }
    
    - (void)onExitRoom:(NSInteger)reason {
    // Successfully exited the room and hung up the call
    self.isOnCalling = NO;
    }

    Step 6: feature control

    Turn on/off microphone

    // Turn the mic on
    [self.trtcCloud muteLocalAudio:NO];
    // Turn the mic off
    [self.trtcCloud muteLocalAudio:YES];

    Turn on/off speaker

    // Turn the speaker on
    [self.trtcCloud muteAllRemoteAudio:NO];
    // Turn the speaker off
    [self.trtcCloud muteAllRemoteAudio:YES];

    Turn on/off camera

    // Turn the camera on, specifying front or rear camera and the rendering control
    [self.trtcCloud startLocalPreview:self.isFrontCamera view:self.previewView];
    // Turn the camera off
    [self.trtcCloud stopLocalPreview];

    Hands-free/Earpiece Switching

    // Switch to earpiece
    [[self.trtcCloud getDeviceManager] setAudioRoute:TXAudioRouteEarpiece];
    // Switch to speakerphone
    [[self.trtcCloud getDeviceManager] setAudioRoute:TXAudioRouteSpeakerphone];

    Camera Switching

    // Determine if the current camera is front-facing
    BOOL isFrontCamera = [[self.trtcCloud getDeviceManager] isFrontCamera];
    // Switch between front and rear cameras, true: switch to front-facing; false: switch to rear-facing
    [[self.trtcCloud getDeviceManager] switchCamera:!isFrontCamera];

    Advanced Features

    Network Status Prompt

    During audio and video calls, it is often necessary to prompt the user when the other party's network is poor, so that possible call lag is expected.
    #pragma mark - TRTCCloudDelegate
    
    - (void)onNetworkQuality:(TRTCQualityInfo *)localQuality remoteQuality:(NSArray<TRTCQualityInfo *> *)remoteQuality {
    if (remoteQuality.count > 0) {
    switch(remoteQuality[0].quality) {
    case TRTCQuality_Excellent:
    NSLog(@"The other party's network is very good");
    break;
    case TRTCQuality_Good:
    NSLog(@"The other party's network is quite good");
    break;
    case TRTCQuality_Poor:
    NSLog(@"The other party's network is average");
    break;
    case TRTCQuality_Bad:
    NSLog(@"The other party's network is relatively poor");
    break;
    case TRTCQuality_Vbad:
    NSLog(@"The other party's network is very poor");
    break;
    case TRTCQuality_Down:
    NSLog(@"The other party's network is extremely poor");
    break;
    default:
    NSLog(@"Undefined ");
    break;
    }
    }
    }
    Note:
    localQuality represents the local user network quality assessment result, and its userId field is empty.
    remoteQuality represents the remote user network quality assessment result, which is influenced by factors on both the remote and local sides.

    Call Duration Statistics

    It is recommended to use the time when a remote user joins the TRTC room as the start time for calculating call duration, and the time when the local user exits the room as the end time for calculating call duration.
    // Start call time
    @property (nonatomic, assign) NSTimeInterval callStartTime;
    // End call time
    @property (nonatomic, assign) NSTimeInterval callFinishTime;
    // Call duration (seconds)
    @property (nonatomic, assign) NSInteger callDuration;
    
    // Callback for remote user entering room
    - (void)onRemoteUserEnterRoom:(NSString *)userId {
    self.callStartTime = [[NSDate date] timeIntervalSince1970];
    }
    
    // Callback for local user exiting room
    - (void)onExitRoom:(NSInteger)reason {
    self.callFinishTime = [[NSDate date] timeIntervalSince1970];
    self.callDuration = (NSInteger)(self.callFinishTime - self.callStartTime);
    }
    Note:
    In cases of exceptions such as forced closure or network disconnection, the client may not be able to log the relevant times. These can be monitored through Server Event Callback to track events of entering and exiting the room and calculate the duration of the call.

    Video Beauty Effects

    TRTC supports integrating third-party beauty effect products. The following uses the Special Effect SDK as an example to demonstrate how to integrate third-party beauty features.
    1. Integrate the Special Effect SDK and apply for an authorization license. For the steps, see Live Show Streaming - Integration Preparation.
    2. Set the SDK material resource path (if any).
    NSString *beautyConfigPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    beautyConfigPath = [beautyConfigPath stringByAppendingPathComponent:@"beauty_config.json"];
    NSFileManager *localFileManager=[[NSFileManager alloc] init];
    BOOL isDir = YES;
    NSDictionary * beautyConfigJson = @{};
    if ([localFileManager fileExistsAtPath:beautyConfigPath isDirectory:&isDir] && !isDir) {
    NSString *beautyConfigJsonStr = [NSString stringWithContentsOfFile:beautyConfigPath encoding:NSUTF8StringEncoding error:nil];
    NSError *jsonError;
    NSData *objectData = [beautyConfigJsonStr dataUsingEncoding:NSUTF8StringEncoding];
    beautyConfigJson = [NSJSONSerialization JSONObjectWithData:objectData
    options:NSJSONReadingMutableContainers
    error:&jsonError];
    }
    NSDictionary *assetsDict = @{@"core_name":@"LightCore.bundle",
    @"root_path":[[NSBundle mainBundle] bundlePath],
    @"tnn_"
    @"beauty_config":beautyConfigJson
    };
    // Initialize SDK: Width and height are the width and height of the texture respectively
    self.xMagicKit = [[XMagic alloc] initWithRenderSize:CGSizeMake(width,height) assetsDict:assetsDict];
    3. Set the video data callback for third-party beauty features, and pass each frame processed by the beauty SDK back into the TRTC SDK for rendering.
    // Set the video data callback for third-party beauty features in the TRTC SDK
    [self.trtcCloud setLocalVideoProcessDelegete:self pixelFormat:TRTCVideoPixelFormat_Texture_2D bufferType:TRTCVideoBufferType_Texture];
    
    #pragma mark - TRTCVideoFrameDelegate
    
    // Construct the YTProcessInput and pass it into the SDK for rendering processing
    - (uint32_t)onProcessVideoFrame:(TRTCVideoFrame *_Nonnull)srcFrame dstFrame:(TRTCVideoFrame *_Nonnull)dstFrame {
    if (!self.xMagicKit) {
    [self buildBeautySDK:srcFrame.width and:srcFrame.height texture:srcFrame.textureId];// Initialize the XMagic SDK.
    self.heightF = srcFrame.height;
    self.widthF = srcFrame.width;
    }
    if(self.xMagicKit!=nil && (self.heightF!=srcFrame.height || self.widthF!=srcFrame.width)){
    self.heightF = srcFrame.height;
    self.widthF = srcFrame.width;
    [self.xMagicKit setRenderSize:CGSizeMake(srcFrame.width, srcFrame.height)];
    }
    YTProcessInput *input = [[YTProcessInput alloc] init];
    input.textureData = [[YTTextureData alloc] init];
    input.textureData.texture = srcFrame.textureId;
    input.textureData.textureWidth = srcFrame.width;
    input.textureData.textureHeight = srcFrame.height;
    input.dataType = kYTTextureData;
    YTProcessOutput *output = [self.xMagicKit process:input withOrigin:YtLightImageOriginTopLeft withOrientation:YtLightCameraRotation0];
    dstFrame.textureId = output.textureData.texture;
    return 0;
    }
    Note:
    Steps 1 and 2 vary depending on the different third-party beauty products, while Step 3 is a general and important step for integrating third-party beauty features into TRTC.
    For scenario-specific integration guidelines of beauty effects, see Integrating Special Effect into TRTC SDK. For guidelines on integrating beauty effects independently, see Integrating Special Effect SDK.

    Window Size Switching

    TRTC provides many APIs for controlling the video image, and all of them require you to specify a video rendering view.
    1. If your business involves switching display zones, you can use the TRTC SDK to update the local preview view and the remote user's video rendering view.
    // Update local preview screen rendering control
    [self.trtcCloud updateLocalView:self.previewView];
    
    // Update the remote user's video rendering control
    [self.trtcCloud updateRemoteView:self.previewView streamType:TRTCVideoStreamTypeBig forUser:self.userId];
    Note:
    streamType only supports TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub.

    Offline Push Message

    In audio/video call scenarios, the offline push message feature is usually required so that the callee's app can receive new incoming call messages even when it is not online. For detailed guidance on integrating offline push, see Offline Message Push. The following focuses on the implementation of step 5 (Send Offline Push Message) and step 6 (Parse Offline Push Messages).

    Send Offline Push Message

    When sending a call invitation with invite, you can set offline push parameters through V2TIMOfflinePushInfo. Use the ext field of V2TIMOfflinePushInfo to carry custom data: when the user starts the App from an offline push message, the ext field can be obtained in the system callback and used to redirect to the corresponding UI.
    NSDictionary *dic = @{
    @"cmd": @"av_call",
    @"msg": @{
    // Specify the call type (video call, audio call)
    @"callType": @"videoCall",
    // Specify the TRTC room ID (caller can generate it randomly)
    @"roomId": @"xxxRoomId",
    },
    };
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:dic options:NSJSONWritingPrettyPrinted error:nil];
    NSString *jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    
    V2TIMOfflinePushInfo *pushInfo = [[V2TIMOfflinePushInfo alloc] init];
    pushInfo.title = self.nickName;
    pushInfo.desc = @"You have a new call invitation";
    NSDictionary *ext = @{
    @"entity" : @{
    @"action" : @1,
    @"content" : jsonString,
    @"sender" : self.senderId,
    @"nickname" : self.nickName,
    @"faceUrl" : faceUrl,
    }
    };
    NSData *data = [NSJSONSerialization dataWithJSONObject:ext options:NSJSONWritingPrettyPrinted error:nil];
    pushInfo.ext = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
    pushInfo.iOSSound = @"phone_ringing.mp3";
    // The following fields are for Android compatibility and need to be filled in
    pushInfo.AndroidOPPOChannelID = @"tuikit";
    pushInfo.AndroidSound = @"phone_ringing";
    pushInfo.AndroidHuaWeiCategory = @"IM";
    pushInfo.AndroidVIVOCategory = @"IM";
    
    [[V2TIMManager sharedInstance] invite:@"receiverId" data:jsonString onlineUserOnly:false offlinePushInfo:pushInfo timeout:self.timeout succ:^{
    // Successfully send call invitation signaling
    } fail:^(int code, NSString *desc) {
    // Failed to send call invitation signaling
    }];

    Parse Offline Push Messages

    When a user starts the App from an offline push message, the ext field can be obtained in the AppDelegate didReceiveRemoteNotification system callback and used to redirect to the specified UI.
    // After starting the APP, you will receive the following callbacks
    - (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo
    fetchCompletionHandler:(void (^)(UIBackgroundFetchResult result))completionHandler {
    // Parse push extension fields
    if (userInfo[@"ext"]) {
    // Redirect to the specified UI interface
    }
    }

    Exception Handling

    TRTC exception error handling

    When the TRTC SDK encounters an unrecoverable error, the error is thrown in the onError callback. For details, see Error Code Table.
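    For reference, the sketch below shows one way to branch on a few of the device-related error codes listed in the tables that follow (for example, to guide the user to the system Settings page). The prompts are placeholders; adapt them to your own UI.
    #pragma mark - TRTCCloudDelegate

    - (void)onError:(TXLiteAVError)errCode errMsg:(nullable NSString *)errMsg extInfo:(nullable NSDictionary *)extInfo {
        switch (errCode) {
            case ERR_CAMERA_NOT_AUTHORIZED:
                // Camera permission denied: prompt the user to grant it in Settings
                break;
            case ERR_MIC_NOT_AUTHORIZED:
                // Microphone permission denied: prompt the user to grant it in Settings
                break;
            default:
                NSLog(@"onError %d: %@", errCode, errMsg);
                break;
        }
    }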

    UserSig related

    A UserSig verification failure leads to a room entry failure. You can use the UserSig verification tool to check it.
    ERR_TRTC_INVALID_USER_SIG (-3320): The room entry parameter userSig is incorrect. Check whether TRTCParams.userSig is empty.
    ERR_TRTC_USER_SIG_CHECK_FAILED (-100018): UserSig verification failed. Check whether TRTCParams.userSig is filled in correctly or has expired.

    Room entry and exit related

    If room entry fails, first verify that the room entry parameters are correct. The room entry and exit APIs must be called in pairs: even if room entry fails, the room exit API must still be called.
    ERR_TRTC_CONNECT_SERVER_TIMEOUT (-3308): The room entry request timed out. Check whether your internet connection is lost or a VPN is enabled. You may also switch to 4G for testing.
    ERR_TRTC_INVALID_SDK_APPID (-3317): The room entry parameter sdkAppId is incorrect. Check whether TRTCParams.sdkAppId is empty.
    ERR_TRTC_INVALID_ROOM_ID (-3318): The room entry parameter roomId is incorrect. Check whether TRTCParams.roomId or TRTCParams.strRoomId is empty. Note that roomId and strRoomId cannot be used interchangeably.
    ERR_TRTC_INVALID_USER_ID (-3319): The room entry parameter userId is incorrect. Check whether TRTCParams.userId is empty.
    ERR_TRTC_ENTER_ROOM_REFUSED (-3340): The room entry request was denied. Check whether enterRoom is called consecutively to enter rooms with the same ID.

    Device related

    The errors below are related to devices. Prompt the user via the UI when they occur.
    ERR_CAMERA_START_FAIL (-1301): Failed to enable the camera. For example, if the camera's driver is abnormal on a Windows or Mac device, try disabling and re-enabling the device, restarting the machine, or updating the driver.
    ERR_MIC_START_FAIL (-1302): Failed to enable the mic. For example, if the mic's driver is abnormal on a Windows or Mac device, try disabling and re-enabling the device, restarting the machine, or updating the driver.
    ERR_CAMERA_NOT_AUTHORIZED (-1314): The camera is unauthorized. This typically occurs on mobile devices and may be because the user denied the permission.
    ERR_MIC_NOT_AUTHORIZED (-1317): The mic is unauthorized. This typically occurs on mobile devices and may be because the user denied the permission.
    ERR_CAMERA_OCCUPY (-1316): The camera is occupied. Try a different camera.
    ERR_MIC_OCCUPY (-1319): The mic is occupied. This occurs when, for example, the user is already in a call on the mobile device.

    Offline push cannot be received for normal messages

    First, check whether the app runtime environment matches the certificate environment; otherwise, offline push messages will not be received.
    Second, check whether the app and certificate environment are set to production. In the development environment, applying for a deviceToken from Apple may fail; this problem does not occur in the production environment, so you can switch to the production environment for testing.

    Offline push cannot be received for custom messages

    The offline push for custom messages differs from that for normal messages. Because the content of a custom message cannot be parsed, the push content cannot be determined, so no offline push is sent by default. If you need offline push for custom messages, set the desc field in offlinePushInfo when calling sendMessage; the desc text is displayed in the push by default. A sketch is shown below.
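    The snippet below illustrates this for a custom message. It assumes the V2TIMManager messaging APIs createCustomMessage and sendMessage; verify the exact signatures and the priority enum against the IM SDK version you integrate.
    // Create a custom message and attach an offline push description
    NSData *customData = [@"{\"cmd\":\"custom_demo\"}" dataUsingEncoding:NSUTF8StringEncoding];
    V2TIMMessage *message = [[V2TIMManager sharedInstance] createCustomMessage:customData];

    V2TIMOfflinePushInfo *pushInfo = [[V2TIMOfflinePushInfo alloc] init];
    // desc is what the offline push displays for a custom message
    pushInfo.desc = @"You have a new message";

    [[V2TIMManager sharedInstance] sendMessage:message
                                      receiver:@"receiverId"
                                       groupID:nil
                                      priority:V2TIM_PRIORITY_DEFAULT
                                onlineUserOnly:NO
                               offlinePushInfo:pushInfo
                                      progress:nil
                                          succ:^{ NSLog(@"success"); }
                                          fail:^(int code, NSString *desc) { NSLog(@"failure, code:%d, desc:%@", code, desc); }];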

    Disable the reception of offline push messages

    To disable the reception of offline push messages, set the config parameter of the setAPNS API to nil, as shown in the snippet below. This feature is supported starting from version 5.6.1200.
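    A minimal example, based on the setAPNS API shown later in this document:
    // Passing nil as the config disables offline push (supported since version 5.6.1200)
    [[V2TIMManager sharedInstance] setAPNS:nil succ:^{
        NSLog(@"Offline push disabled");
    } fail:^(int code, NSString *desc) {
        NSLog(@"Failed to disable offline push, code:%d, desc:%@", code, desc);
    }];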

    Failed to receive push and a bad devicetoken error was reported in the background

    The Apple deviceToken depends on the current compiling environment. If the certificate ID and token used to log in to IMSDK and upload the deviceToken to the cloud platform are inconsistent, it will result in an error.
    If you use the Release environment to compile, the - application:didRegisterForRemoteNotificationsWithDeviceToken: callback will return a production environment token. At this time, the businessID needs to be set to the Certificate ID of the production environment.
    If you use the Debug environment to compile, the - application:didRegisterForRemoteNotificationsWithDeviceToken: callback will return a development environment token. At this time, the businessID needs to be set to the Certificate ID of the development environment.
    V2TIMAPNSConfig *config = [[V2TIMAPNSConfig alloc] init];
    /* You need to register a developer certificate with Apple, download and generate the certificate (p12 file) in the developer accounts, and upload the generated p12 file to the Certificate Management console. The console will automatically generate a certificate ID and pass it to the following busiId parameter.*/
    // Push certificate ID
    config.businessID = sdkBusiId;
    config.token = self.deviceToken;
    [[V2TIMManager sharedInstance] setAPNS:config succ:^{
    NSLog(@"%s, succ, %@", __func__, supportTPNS ? @"TPNS": @"APNS");
    } fail:^(int code, NSString *msg) {
    NSLog(@"%s, fail, %d, %@", __func__, code, msg);
    }];

    In the iOS development environment, deviceToken is occasionally not returned for registration or APNs fails to request a token

    This problem is caused by instability of APNs. You can resolve the problem in the following ways:
    1. Insert a SIM card into the phone and use the 4G network to test.
    2. Uninstall and reinstall the application, restart the application, or shut down and restart the phone.
    3. Use a package for the production environment.
    4. Use another iOS phone.