This article describes how to quickly integrate the Flutter RTC Engine and implement a basic audio and video call.
Environment preparations
Flutter 2.0 or above.
Developing for Android:
Android Studio 3.5 or above.
Devices with Android 4.1 or above.
Please ensure your project supports CMake version 3.13 and above.
Developing for iOS:
Xcode 11.0 or above.
OS X 10.11 or above.
Please ensure your project is set up with a valid developer signature.
Step 1. Import the SDK
Run the following command in your project's root directory to add the SDK dependency:
flutter pub add tencent_rtc_sdk
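Equivalently, you can declare the dependency directly in pubspec.yaml (a minimal sketch; the version constraint below is a placeholder, pin it to the latest published version in practice):

```yaml
dependencies:
  tencent_rtc_sdk: any # replace with the latest published version
```

Then run flutter pub get to fetch the package.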
Step 2. Configure the project
1. Grant camera and microphone permissions to enable audio and video call features.
iOS:
1. Add camera and microphone permission requests under the first-level <dict> element in Info.plist:
<key>NSCameraUsageDescription</key>
<string>Video calls require camera permission.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Voice calls require microphone permission.</string>
2. Add the field io.flutter.embedded_views_preview and set its value to YES.
Android:
1. Open /android/app/src/main/AndroidManifest.xml.
2. Add the following permissions:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera.autofocus" />
3. To compile and run on the Android platform, you also need the following configuration:
First, add the following to the corresponding location in your project's android/app/build.gradle file:
android {
    // ...
    packagingOptions {
        pickFirst '**/libliteavsdk.so'
    }
    buildTypes {
        release {
            // ...
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}
Then, create a proguard-rules.pro file in the android/app directory of your project and add the following rule to it:
-keep class com.tencent.** { *; }
Note:
If you encounter any problems during the integration process, please refer to the FAQs.
Step 3. Create a TRTC instance
1. Declare member variables
import 'package:tencent_rtc_sdk/trtc_cloud.dart';
import 'package:tencent_rtc_sdk/trtc_cloud_def.dart';
import 'package:tencent_rtc_sdk/trtc_cloud_listener.dart';
late TRTCCloud trtcCloud;
2. Call the initialization interface to create the TRTC instance and set the event callback.
trtcCloud = (await TRTCCloud.sharedInstance())!;
TRTCCloudListener listener = TRTCCloudListener(
  onError: (errCode, errMsg) {
    // Handle SDK errors here, e.g. log errCode and errMsg.
  },
);
trtcCloud.registerListener(listener);
Step 4. Enter a room
1. If you run the program on an Android device, you need to request CAMERA and MICROPHONE permissions in advance.
// This snippet uses the permission_handler package:
// import 'package:permission_handler/permission_handler.dart';
if (!(await Permission.camera.request().isGranted) ||
    !(await Permission.microphone.request().isGranted)) {
  print('You need to grant audio and video permissions before entering the room');
  return;
}
2. In the Tencent RTC Console, click Create Application to obtain the SDKAppID from Application Overview.
3. In UserSig Tools, select the SDKAppID from the dropdown, enter your own username (UserID), and click Generate to get your own UserSig.
4. After setting the room parameters in TRTCParams, call the enterRoom interface to enter the room.
Anchor Role
trtcCloud.enterRoom(
  TRTCParams(
    sdkAppId: sdkAppId, // SDKAppID from the console
    userId: "userId",   // replace with your own UserID
    userSig: '',        // fill in the UserSig generated in UserSig Tools
    role: TRTCRoleType.anchor,
    roomId: 123123,
  ),
  TRTCAppScene.live,
);
Audience Role
trtcCloud.enterRoom(
  TRTCParams(
    sdkAppId: sdkAppId, // must match the anchor's SDKAppID
    userId: "userId",   // replace with your own UserID
    userSig: '',        // fill in your own UserSig
    role: TRTCRoleType.audience,
    roomId: 123123,     // must match the anchor's room ID
  ),
  TRTCAppScene.live,
);
Note:
If you enter the room as an audience, sdkAppId and roomId must match those used on the anchor side, while userId and userSig must be replaced with your own values.
Step 5. Enable the camera
1. Add a TRTCCloudVideoView at the appropriate position in the page's build method:
import 'package:tencent_rtc_sdk/trtc_cloud_video_view.dart';

TRTCCloudVideoView(
  key: valueKey, // any Key you define for this view
  onViewCreated: (viewId) {
    localViewId = viewId; // save the view ID for rendering the local preview
  },
),
Note:
viewId is the unique identifier of the video rendering control TRTCCloudVideoView. You can store this identifier in any way you like; here, localViewId is used to store it for rendering the local video stream later.
2. Before invoking the startLocalPreview interface to enable camera preview, you can set the local preview rendering parameters by calling the setLocalRenderParams interface.
trtcCloud.setLocalRenderParams(
TRTCRenderParams(
fillMode: TRTCVideoFillMode.fill,
mirrorType: TRTCVideoMirrorType.auto,
rotation: TRTCVideoRotation.rotation0,
),
);
trtcCloud.startLocalPreview(true, localViewId);  // true: use the front camera
trtcCloud.startLocalPreview(false, localViewId); // false: use the rear camera
Call stopLocalPreview to turn off the camera preview and stop publishing the local video stream.
trtcCloud.stopLocalPreview();
3. You can use the TXDeviceManager interface to access device extension features such as switching between the front and rear cameras, setting the focus mode, and toggling the flashlight.
import 'package:tencent_rtc_sdk/tx_device_manager.dart';

TXDeviceManager manager = trtcCloud.getDeviceManager();

// Switch between the front and rear cameras.
if (manager.isFrontCamera()) {
  manager.switchCamera(false); // switch to the rear camera
} else {
  manager.switchCamera(true);  // switch to the front camera
}

// Enable autofocus if the camera supports it (isAutoFocusEnabled checks support).
if (manager.isAutoFocusEnabled()) {
  manager.enableCameraAutoFocus(true);
} else {
  manager.enableCameraAutoFocus(false);
}

// Turn the flashlight (torch) on or off.
manager.enableCameraTorch(true);
manager.enableCameraTorch(false);
Step 6. Enable the microphone
You can call startLocalAudio to enable microphone capture. This interface requires you to specify the capture mode via the quality parameter. Select whichever of the following modes suits your project:
// Speech mode: smooth and noise-resistant, suitable for voice calls
trtcCloud.startLocalAudio(TRTCAudioQuality.speech);
// Music mode: high-fidelity capture, suitable for music scenarios
trtcCloud.startLocalAudio(TRTCAudioQuality.music);
Call stopLocalAudio to turn off microphone capture and stop publishing the local audio stream.
trtcCloud.stopLocalAudio();
Step 7. Play/Stop Video Streams
1. Register the onUserVideoAvailable callback before entering the room. When you receive the onUserVideoAvailable(userId, true) notification, it means that video frames from that user have arrived and are ready for playback.
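The callback registration described above can be sketched as follows, using the same listener style as the earlier steps (a minimal example; what you do inside the callback is up to your app):

```dart
TRTCCloudListener listener = TRTCCloudListener(
  onUserVideoAvailable: (userId, available) {
    // available == true: the remote user's video stream has arrived and can
    // be rendered with startRemoteView; false: the stream has stopped.
  },
);
trtcCloud.registerListener(listener);
```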
Note:
Here it is assumed that the user whose video can be played is denny, and denny's video stream is expected to be rendered to the TRTCCloudVideoView control with the unique identifier remoteViewId.
2. You can play the remote user's video by calling the startRemoteView interface.
trtcCloud.startRemoteView("denny", TRTCVideoStreamType.big, remoteViewId);
Then, you can stop a remote user's video by calling the stopRemoteView interface, or stop all remote users' videos by calling the stopAllRemoteView interface.
trtcCloud.stopRemoteView("denny", TRTCVideoStreamType.big);
trtcCloud.stopAllRemoteView();
Step 8. Play/Stop Audio Streams
By default, the SDK automatically plays remote audio, so you do not need to call any API to play it manually. If you do not want audio to play automatically, you can call muteRemoteAudio / muteAllRemoteAudio to mute or unmute remote audio.
// Mute a single remote user, or all remote users
trtcCloud.muteRemoteAudio("denny", true);
trtcCloud.muteAllRemoteAudio(true);
// Unmute again
trtcCloud.muteRemoteAudio("denny", false);
trtcCloud.muteAllRemoteAudio(false);
Step 9. Exit the room
Call exitRoom to exit the current room. The TRTC SDK will notify you through the onExitRoom callback event once the room exit is complete.
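The exit call itself can be sketched as follows (a minimal example):

```dart
// Leave the current room; onExitRoom fires once the exit completes.
trtcCloud.exitRoom();
```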
TRTCCloudListener listener = TRTCCloudListener(
  onExitRoom: (reason) {
    // reason: 0 = exited voluntarily via exitRoom, 1 = kicked out by the
    // server, 2 = the room was dismissed.
  },
);
trtcCloud.registerListener(listener);
FAQs
You can see the full list of functions and their descriptions in the API Reference. If you encounter any problems during integration or use, please refer to the FAQs.