
Android

Last updated: 2022-04-26 15:04:26

    Application Scenarios

    TRTC supports four room entry modes. Video call (VideoCall) and audio call (VoiceCall) are the call modes, and interactive video streaming (Live) and interactive audio streaming (VoiceChatRoom) are the live streaming modes.
    The live streaming modes allow a maximum of 100,000 concurrent users in each room, with smooth mic on/off. Co-anchoring latency is kept below 300 ms and watch latency below 1,000 ms. These modes are suitable for use cases such as low-latency interactive live streaming, interactive classrooms for up to 100,000 participants, video dating, online education, remote training, and mega-scale conferencing.

    How It Works

    TRTC services use two types of server nodes: access servers and proxy servers.

    • Access servers
      These nodes use high-quality lines and high-performance servers and are better suited to low-latency end-to-end calls.
    • Proxy servers
      These nodes use ordinary lines and average-performance servers and are better suited to high-concurrency stream pulling and playback.

    In the live streaming modes, TRTC has introduced the concept of "role". Users are either in the role of "anchor" or "audience". Anchors are assigned to access servers, and audience to proxy servers. Each room allows up to 100,000 users in the role of audience.
    For audience members to speak, they must switch their role to "anchor" by calling switchRole. The switch migrates the user from a proxy server to an access server, and TRTC's low-latency streaming and smooth mic on/off technologies keep this process short.

    Sample Code

    You can visit GitHub to obtain the sample code used in this document.

    Note:

    If your access to GitHub is slow, download the ZIP file here.

    Directions

    Step 1. Integrate the SDKs

    You can integrate the TRTC SDK into your project in the following ways:

    Method 1: automatic loading (AAR)

    The TRTC SDK has been released to the mavenCentral repository, and you can configure Gradle to download updates automatically.
    The TRTC SDK comes with TRTC-API-Example, which offers sample code for your reference. Use Android Studio to open your project and follow the steps below to modify the app/build.gradle file.

    1. Add the TRTC SDK dependency to dependencies.

      dependencies {
        implementation 'com.tencent.liteav:LiteAVSDK_TRTC:latest.release'
      }
      
    2. In defaultConfig, specify the CPU architecture to be used by your application.

      Note:

      Currently, the TRTC SDK supports armeabi-v7a and arm64-v8a.

    defaultConfig {
        ndk {
            abiFilters "armeabi-v7a", "arm64-v8a"
        }
    }
    
    3. Click Sync Now to sync the SDKs.
      If you have no problem connecting to mavenCentral, the SDK will be downloaded and integrated into your project automatically.

    Method 2: manual integration

    You can download the ZIP file of the SDK and integrate it into your project as instructed in SDK Quick Integration > Android.

    Step 2. Configure app permissions

    Add camera, mic, and network permission requests in AndroidManifest.xml.

    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-feature android:name="android.hardware.camera" />
    <uses-feature android:name="android.hardware.camera.autofocus" />
    

    Step 3. Initialize an SDK instance and configure event callbacks

    1. Call the sharedInstance() API to create a TRTCCloud instance.

      // Create a `TRTCCloud` instance
      mTRTCCloud = TRTCCloud.sharedInstance(getApplicationContext());
      mTRTCCloud.setListener(new TRTCCloudListener());
      
    2. Override the methods of TRTCCloudListener and pass an instance to setListener() to subscribe to event callbacks and listen for event and error notifications (a wiring sketch follows the snippet below).

      // Error notifications indicate that the SDK has stopped working and therefore must be listened for
      @Override
      public void onError(int errCode, String errMsg, Bundle extraInfo) {
          Log.d(TAG, "sdk callback onError");
          if (activity != null) {
              Toast.makeText(activity, "onError: " + errMsg + "[" + errCode + "]", Toast.LENGTH_SHORT).show();
              if (errCode == TXLiteAVCode.ERR_ROOM_ENTER_FAIL) {
                  activity.exitRoom();
              }
          }
      }
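
      For context, the onError override above is typically placed in a subclass of TRTCCloudListener that is passed to setListener(). A minimal wiring sketch is shown below; the class name MyTRTCListener and the activity field are illustrative, not part of the SDK.

      public class MyTRTCListener extends TRTCCloudListener {
          private final MainActivity activity; // illustrative host activity

          public MyTRTCListener(MainActivity activity) {
              this.activity = activity;
          }

          @Override
          public void onError(int errCode, String errMsg, Bundle extraInfo) {
              // Handle fatal errors as shown above
          }
      }

      // Register the listener right after creating the TRTCCloud instance
      mTRTCCloud.setListener(new MyTRTCListener(this));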
      

    Step 4. Assemble the room entry parameter TRTCParams

    When calling the enterRoom() API, you need to pass in a key parameter TRTCParams, which includes the following required fields:

    • sdkAppId (Number): application ID, which you can view in the TRTC console. Example: 1400000123
    • userId (String): user ID, which can contain only letters (a-z and A-Z), digits (0-9), underscores, and hyphens. We recommend you set it based on your business account system. Example: test_user_001
    • userSig (String): signature calculated based on userId. For the calculation method, please see UserSig. Example: eJyrVareCeYrSy1SslI...
    • roomId (Number): numeric room ID. For a string-type room ID, use strRoomId in TRTCParams. Example: 29834
    Note:

    • In TRTC, two users with the same userId cannot be in the same room at the same time, as this will cause a conflict.
    • The value of appScene must be the same on each client. Inconsistent appScene may cause unexpected problems.
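
    Putting the fields together, a minimal sketch of assembling TRTCParams looks like the following. The values are placeholders taken from the table above; generate userSig on your server as described in UserSig.

    TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
    params.sdkAppId = 1400000123;                  // application ID from the TRTC console
    params.userId = "test_user_001";               // user ID from your account system
    params.userSig = "eJyrVareCeYrSy1SslI...";     // placeholder; calculate it from userId
    params.roomId = 29834;                         // numeric room ID (use strRoomId for string room IDs)
    params.role = TRTCCloudDef.TRTCRoleAnchor;     // or TRTCCloudDef.TRTCRoleAudience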

    Step 5. Enable camera preview and mic capturing

    1. Call startLocalPreview() to enable preview of the local camera. The SDK will ask for camera permission.
    2. Call setLocalViewFillMode() to set the display mode of the local video image:
    • Fill: aspect fill. The image may be scaled up and cropped, but there are no black bars.
    • Fit: aspect fit. The image may be scaled down to ensure that it’s displayed in its entirety, and there may be black bars.
    3. Call setVideoEncoderParam() to set the encoding parameters for the local video, which determine the quality of your video seen by other users in the room.
    4. Call startLocalAudio() to turn the mic on. The SDK will ask for mic permission.
    // Sample code: publish the local audio/video stream
    mTRTCCloud.setLocalViewFillMode(TRTCCloudDef.TRTC_VIDEO_RENDER_MODE_FIT);
    mTRTCCloud.startLocalPreview(mIsFrontCamera, localView);
    // Set local video encoding parameters
    TRTCCloudDef.TRTCVideoEncParam encParam = new TRTCCloudDef.TRTCVideoEncParam();
    encParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_960_540;
    encParam.videoFps = 15;
    encParam.videoBitrate = 1200;
    encParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
    mTRTCCloud.setVideoEncoderParam(encParam);
    mTRTCCloud.startLocalAudio();
    

    Step 6. Set beauty filters

    1. Call getBeautyManager() to get the beauty filter management class TXBeautyManager.
    2. Call setBeautyStyle() to set the beauty filter style.
    • Smooth: smooth style. It features a more obvious skin smoothing effect and is typically used by influencers.
    • Nature: natural style. It retains more facial details and looks more natural.
    • Pitu: this style is supported only in the Professional Edition.
    3. Call setBeautyLevel() to set the skin smoothing strength (5 is recommended).
    4. Call setWhitenessLevel() to set the skin brightening strength (5 is recommended), as shown in the sketch below.
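
    A minimal sketch that combines the calls above is shown below. The numeric value passed to setBeautyStyle is an assumption about how the styles are mapped; check the TXBeautyManager constants in your SDK version.

    // Sample sketch: apply beauty filter settings
    TXBeautyManager beautyManager = mTRTCCloud.getBeautyManager();
    beautyManager.setBeautyStyle(0);    // assumed mapping: 0 = smooth, 1 = natural, 2 = Pitu
    beautyManager.setBeautyLevel(5);    // skin smoothing strength; 5 is recommended
    beautyManager.setWhitenessLevel(5); // skin brightening strength; 5 is recommended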

    Step 7. Create a room and push streams

    1. Set the role field in TRTCParams to TRTCCloudDef.TRTCRoleAnchor to take the role of “anchor”.
    2. Call enterRoom(), specifying appScene, and a room whose ID is the value of the roomId field in TRTCParams will be created.
    • TRTCCloudDef.TRTC_APP_SCENE_LIVE: the interactive video streaming mode, which is used in the example of this document
    • TRTCCloudDef.TRTC_APP_SCENE_VOICE_CHATROOM: the interactive audio streaming mode
    3. After the room is created, start encoding and transferring audio/video data. The SDK will return the onEnterRoom(result) callback. If result is greater than 0, room entry succeeds, and the value indicates the time (ms) room entry takes; if result is less than 0, room entry fails, and the value is the error code for the failure.
    public void enterRoom() {
      TRTCCloudDef.TRTCParams trtcParams = new TRTCCloudDef.TRTCParams();
      trtcParams.sdkAppId = sdkappid;
      trtcParams.userId = userid;
      trtcParams.roomId = 908;
      trtcParams.userSig = usersig;
      trtcParams.role = TRTCCloudDef.TRTCRoleAnchor; // enter as an anchor
      mTRTCCloud.enterRoom(trtcParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
    }
    @Override
    public void onEnterRoom(long result) {
      if (result > 0) {
          toastTip("Entered room successfully; the total time used is [" + result + "] ms");
      } else {
          toastTip("Failed to enter the room; the error code is [" + result + "]");
      }
    }
    

    Step 8. Enter the room as audience

    1. Set the role field in TRTCParams to TRTCCloudDef.TRTCRoleAudience to take the role of “audience”.
    2. Call enterRoom() to enter the room whose ID is the value of the roomId field in TRTCParams, specifying appScene.
    • TRTCCloudDef.TRTC_APP_SCENE_LIVE: the interactive video streaming mode, which is used in the example of this document
    • TRTCCloudDef.TRTC_APP_SCENE_VOICE_CHATROOM: the interactive audio streaming mode
    3. Watch the anchor's video: call startRemoteView(userId, view) to play the anchor's video; audio is played automatically. See the sketch below.
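
    A minimal sketch of entering as audience and rendering the anchor's video is shown below. remoteView is an illustrative TXCloudVideoView from your layout, and the room entry parameters are the same placeholders used in Step 7.

    // Sample sketch: enter the room as audience
    TRTCCloudDef.TRTCParams trtcParams = new TRTCCloudDef.TRTCParams();
    trtcParams.sdkAppId = sdkappid;
    trtcParams.userId = userid;
    trtcParams.roomId = 908;
    trtcParams.userSig = usersig;
    trtcParams.role = TRTCCloudDef.TRTCRoleAudience; // enter as audience
    mTRTCCloud.enterRoom(trtcParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);

    // Render the anchor's video once the SDK reports it is available
    @Override
    public void onUserVideoAvailable(String userId, boolean available) {
        if (available) {
            mTRTCCloud.startRemoteView(userId, remoteView); // remoteView: a TXCloudVideoView in your layout
        } else {
            mTRTCCloud.stopRemoteView(userId);
        }
    }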

    Step 9. Co-anchor

    1. Call switchRole(TRTCCloudDef.TRTCRoleAnchor) to switch to the "anchor" role.
    2. Call startLocalPreview() to enable preview of the local image.
    3. Call startLocalAudio() to enable mic capturing.
    // Sample code: start co-anchoring
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
    mTRTCCloud.startLocalAudio();
    mTRTCCloud.startLocalPreview(mIsFrontCamera, localView);
    // Sample code: end co-anchoring
    mTRTCCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);
    mTRTCCloud.stopLocalAudio();
    mTRTCCloud.stopLocalPreview();
    

    Step 10. Compete across rooms

    Anchors from two rooms can compete with each other without exiting their current rooms.

    1. Anchor A calls the connectOtherRoom() API. The API uses parameters in JSON strings, so anchor A needs to pass the roomId and userId of anchor B in the format of {"roomId": 978,"userId": "userB"} to the API.
    2. After the cross-room call is set up, anchor A will receive the onConnectOtherRoom() callback, and all users in both rooms will receive the onUserVideoAvailable() and onUserAudioAvailable() callbacks.
      For example, after anchor A in room "001" uses connectOtherRoom() to call anchor B in room “002” successfully, all users in room "001" will receive the onUserVideoAvailable(B, true) and onUserAudioAvailable(B, true) callbacks, and all users in room "002" will receive the onUserVideoAvailable(A, true) and onUserAudioAvailable(A, true) callbacks.
    3. Users in both rooms can call startRemoteView(userId, view) to play the video of the anchor in the other room, and audio will be played automatically.
    // Sample code: cross-room competition
    mTRTCCloud.ConnectOtherRoom(String.format("{\"roomId\":%s,\"userId\":\"%s\"}", roomId, username));
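
    After initiating the call, anchor A can confirm the result in the onConnectOtherRoom callback of TRTCCloudListener. A minimal sketch is shown below; the logging is illustrative.

    @Override
    public void onConnectOtherRoom(String userId, int errCode, String errMsg) {
        if (errCode == 0) {
            Log.i(TAG, "Cross-room call to " + userId + " succeeded");
        } else {
            Log.w(TAG, "Cross-room call failed: " + errMsg + "[" + errCode + "]");
        }
    }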
    

    Step 11. Exit the room

    Call exitRoom() to exit the room. The SDK disables and releases devices such as cameras and mics during room exit. Therefore, room exit is not an instant process. It completes only after the onExitRoom() callback is received.

    // Please wait for the `onExitRoom` callback after calling the room exit API.
    mTRTCCloud.exitRoom();
    @Override
    public void onExitRoom(int reason) {
      Log.i(TAG, "onExitRoom: reason = " + reason);
    }
    
    Note:

    If your application integrates multiple audio/video SDKs, please wait until you receive the onExitRoom callback before starting other SDKs; otherwise, a device busy error may occur.
