FuncList | DESC |
Create TRTCCloud instance (singleton mode) | |
Terminate TRTCCloud instance (singleton mode) | |
Add TRTC event callback | |
Remove TRTC event callback | |
Set the queue that drives the TRTCCloudListener event callback | |
Enter room | |
Exit room | |
Switch role | |
Switch role (supports permission credential) | |
Switch room | |
Request cross-room call | |
Exit cross-room call | |
Set subscription mode (which must be set before room entry for it to take effect) | |
Create room subinstance (for concurrent multi-room listen/watch) | |
Terminate room subinstance | |
Publish a stream | |
Modify publishing parameters | |
Stop publishing | |
Enable the preview image of local camera (mobile) | |
Update the preview image of local camera | |
Stop camera preview | |
Pause/Resume publishing local video stream | |
Set placeholder image during local video pause | |
Subscribe to remote user's video stream and bind video rendering control | |
Update remote user's video rendering control | |
Stop subscribing to remote user's video stream and release rendering control | |
Stop subscribing to all remote users' video streams and release all rendering resources | |
Pause/Resume subscribing to remote user's video stream | |
Pause/Resume subscribing to all remote users' video streams | |
Set the encoding parameters of video encoder | |
Set network quality control parameters | |
Set the rendering parameters of local video image | |
Set the rendering mode of remote video image | |
Enable dual-channel encoding mode with big and small images | |
Switch the big/small image of specified remote user | |
Screencapture video | |
Set perspective correction coordinate points | |
Set the adaptation mode of gravity sensing (version 11.7 and above) | |
Enable local audio capturing and publishing | |
Stop local audio capturing and publishing | |
Pause/Resume publishing local audio stream | |
Pause/Resume playing back remote audio stream | |
Pause/Resume playing back all remote users' audio streams | |
Set audio route | |
Set the audio playback volume of remote user | |
Set the capturing volume of local audio | |
Get the capturing volume of local audio | |
Set the playback volume of remote audio | |
Get the playback volume of remote audio | |
Enable volume reminder | |
Start audio recording | |
Stop audio recording | |
Start local media recording | |
Stop local media recording | |
Set the parallel strategy of remote audio streams | |
Enable 3D spatial effect | |
Update self position and orientation for 3D spatial effect | |
Update the specified remote user's position for 3D spatial effect | |
Set the maximum 3D spatial attenuation range for userId's audio stream | |
Get device management class (TXDeviceManager) | |
Get beauty filter management class (TXBeautyManager) | |
Add watermark | |
Get sound effect management class (TXAudioEffectManager) | |
Enable system audio capturing | |
Stop system audio capturing (not supported on iOS) | |
Start screen sharing | |
Stop screen sharing | |
Pause screen sharing | |
Resume screen sharing | |
Set the video encoding parameters of screen sharing (i.e., substream) (for desktop and mobile systems) | |
Enable/Disable custom video capturing mode | |
Deliver captured video frames to SDK | |
Enable custom audio capturing mode | |
Deliver captured audio data to SDK | |
Enable/Disable custom audio track | |
Mix custom audio track into SDK | |
Set the publish volume and playback volume of mixed custom audio track | |
Generate custom capturing timestamp | |
Set video data callback for third-party beauty filters | |
Set the callback of custom rendering for local video | |
Set the callback of custom rendering for remote video | |
Set custom audio data callback | |
Set the callback format of audio frames captured by local mic | |
Set the callback format of preprocessed local audio frames | |
Set the callback format of audio frames to be played back by system | |
Enable custom audio playback | |
Get playable audio data | |
Use UDP channel to send custom message to all users in room | |
Use SEI channel to send custom message to all users in room | |
Start network speed test (used before room entry) | |
Stop network speed test | |
Get SDK version information | |
Set log output level | |
Enable/Disable console log printing | |
Enable/Disable local log compression | |
Set local log storage path | |
Set log callback | |
Display dashboard | |
Set dashboard margin | |
Call experimental APIs | |
Enable or disable private encryption of media streams |
TRTCCloud sharedInstance | (Context context) |
Param | DESC |
context | It is only applicable to the Android platform. The SDK internally converts it into the ApplicationContext of Android to call the Android system API. |
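A minimal sketch of creating and releasing the singleton on Android, assuming your SDK version exposes TRTCCloud.destroySharedInstance() as the matching release API (verify against your SDK version):
// Create the singleton instance; mContext is your Application or Activity context
TRTCCloud trtcCloud = TRTCCloud.sharedInstance(mContext);
// ... use trtcCloud ...
// Release the singleton when it is no longer needed
TRTCCloud.destroySharedInstance();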
If you use delete ITRTCCloud*, a compilation error will occur. Please use destroyTRTCCloud to release the object pointer. To obtain the instance, use the getTRTCShareInstance() API; on Android, use the getTRTCShareInstance(void *context) API, because the Context parameter is required there.
void addListener |
void setListenerHandler | (Handler listenerHandler) |
If you do not specify a listenerHandler, the SDK will use MainQueue as the queue for driving TRTCCloudListener event callbacks by default; that is, all callback functions in TRTCCloudListener will be driven by MainQueue.
Param | DESC |
listenerHandler | |
If you specify a listenerHandler, please do not manipulate the UI in the TRTCCloudListener callback function; otherwise, thread safety issues will occur.
void enterRoom | (TRTCParams param |
| int scene) |
If room entry succeeds, the result parameter will be a positive number (result > 0), indicating the time in milliseconds (ms) between the function call and room entry. If room entry fails, the result parameter will be a negative number (result < 0), indicating the TXLiteAVError code for the room entry failure.
Param | DESC |
param | Room entry parameter, which is used to specify the user's identity, role, authentication credentials, and other information. For more information, please see TRTCParams. |
scene | Application scenario, which is used to specify the use case. The same TRTCAppScene should be configured for all users in the same room. |
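A minimal room entry sketch; the field values are placeholders, and sdkAppId, userId, userSig, roomId, and role are the commonly used TRTCParams fields, which you should verify against your SDK version (trtcCloud is the TRTCCloud instance):
TRTCCloudDef.TRTCParams params = new TRTCCloudDef.TRTCParams();
params.sdkAppId = 1400000000;              // application ID from the TRTC console (placeholder)
params.userId = "userA";                   // current user ID
params.userSig = "xxx";                    // signature generated on your server
params.roomId = 10001;                     // numeric room ID
params.role = TRTCCloudDef.TRTCRoleAnchor; // publish audio/video as an anchor
trtcCloud.enterRoom(params, TRTCCloudDef.TRTC_APP_SCENE_LIVE);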
If scene is specified as TRTCAppSceneLIVE or TRTCAppSceneVoiceChatRoom, you must use the role field in TRTCParams to specify the role of the current user in the room. The same scene should be configured for all users in the same room. After room exit, the SDK uses the onExitRoom() callback in TRTCCloudListener to notify you; please wait for the onExitRoom() callback before entering another room, so as to avoid the problem of the camera or mic being occupied.
void switchRole | (int role) |
This API is used to switch the user role between anchor and audience. You can use the role field in TRTCParams during room entry to specify the user role in advance, or use the switchRole API to switch roles after room entry.
Param | DESC |
role | Role, which is anchor by default: TRTCRoleAnchor: anchor, who can publish their audio/video streams. Up to 50 anchors are allowed to publish streams at the same time in one room. TRTCRoleAudience: audience, who cannot publish their audio/video streams, but can only watch streams of anchors in the room. If they want to publish their streams, they need to switch to the "anchor" role first through switchRole. One room supports an audience of up to 100,000 concurrent online users. |
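For example, an audience member who wants to start publishing could switch roles like this (sketch; trtcCloud is the TRTCCloud instance):
// Become an anchor so that local audio/video can be published
trtcCloud.switchRole(TRTCCloudDef.TRTCRoleAnchor);
// ... later, stop publishing and go back to watching
trtcCloud.switchRole(TRTCCloudDef.TRTCRoleAudience);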
If the scene you specify in enterRoom is TRTC_APP_SCENE_VIDEOCALL or TRTC_APP_SCENE_AUDIOCALL, please do not call this API.
void switchRole | (int role |
| final String privateMapKey) |
This API is used to switch the user role between anchor and audience, with support for a permission credential (privateMapKey). You can use the role field in TRTCParams during room entry to specify the user role in advance, or use the switchRole API to switch roles after room entry.
Param | DESC |
privateMapKey | Permission credential used for permission control. If you want only users with the specified userId values to enter a room or push streams, you need to use privateMapKey to restrict the permission. We recommend you use this parameter only if you have high security requirements. For more information, please see Enabling Advanced Permission Control. |
role | Role, which is anchor by default: TRTCRoleAnchor: anchor, who can publish their audio/video streams. Up to 50 anchors are allowed to publish streams at the same time in one room. TRTCRoleAudience: audience, who cannot publish their audio/video streams, but can only watch streams of anchors in the room. If they want to publish their streams, they need to switch to the "anchor" role first through switchRole. One room supports an audience of up to 100,000 concurrent online users. |
If the scene you specify in enterRoom is TRTCAppSceneVideoCall or TRTCAppSceneAudioCall, please do not call this API.
void switchRoom |
If the user role is audience, calling this API is equivalent to exitRoom (current room) + enterRoom (new room). If the user role is anchor, the API will retain the current audio/video publishing status while switching the room; therefore, during the room switch, camera preview and sound capturing will not be interrupted. switchRoom can get better smoothness and use less code than exitRoom + enterRoom. The switch result is returned through the onSwitchRoom(errCode, errMsg) callback in TRTCCloudListener.
Param | DESC |
config |
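A minimal sketch, assuming the config type is TRTCCloudDef.TRTCSwitchRoomConfig as in recent SDK versions (verify the class and field names against your SDK):
TRTCCloudDef.TRTCSwitchRoomConfig config = new TRTCCloudDef.TRTCSwitchRoomConfig();
config.roomId = 10002;   // target numeric room ID (use strRoomId instead for string room IDs and keep roomId as 0)
trtcCloud.switchRoom(config);
// The result is returned through onSwitchRoom(errCode, errMsg) in TRTCCloudListener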
The config parameter contains both roomId and strRoomId parameters. You should pay special attention as detailed below when specifying these two parameters: if you decide to use strRoomId, then set roomId to 0; if both are specified, roomId will be used. All rooms need to use either strRoomId or roomId consistently; they cannot be mixed, otherwise there will be many unexpected bugs.
void ConnectOtherRoom | (String param) |
For example, after anchor A in room "101" uses connectOtherRoom() to successfully call anchor B in room "102":
All users in room "101" will receive the onRemoteUserEnterRoom(B) and onUserVideoAvailable(B,true) event callbacks of anchor B; that is, all users in room "101" can subscribe to the audio/video streams of anchor B.
All users in room "102" will receive the onRemoteUserEnterRoom(A) and onUserVideoAvailable(A,true) event callbacks of anchor A; that is, all users in room "102" can subscribe to the audio/video streams of anchor A.
JSONObject jsonObj = new JSONObject();
jsonObj.put("roomId", 102);
jsonObj.put("userId", "userB");
trtc.ConnectOtherRoom(jsonObj.toString());
If both rooms use string room IDs, replace roomId in the JSON with strRoomId, such as {"strRoomId": "102", "userId": "userB"}:
JSONObject jsonObj = new JSONObject();
jsonObj.put("strRoomId", "102");
jsonObj.put("userId", "userB");
trtc.ConnectOtherRoom(jsonObj.toString());
Param | DESC |
param | You need to pass in a string parameter in JSON format: roomId represents the room ID in numeric format, strRoomId represents the room ID in string format, and userId represents the user ID of the target anchor. |
The result of exiting a cross-room call is returned through the onDisconnectOtherRoom() callback in TRTCCloudDelegate.
void setDefaultStreamRecvMode | (boolean autoRecvAudio |
| boolean autoRecvVideo) |
In manual subscription mode, you need to subscribe to streams yourself (for video, through the startRemoteView API).
Param | DESC |
autoRecvAudio | true: automatic subscription to audio; false: manual subscription to audio by calling muteRemoteAudio(false). Default value: true |
autoRecvVideo | true: automatic subscription to video; false: manual subscription to video by calling startRemoteView. Default value: true |
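For example, to receive audio automatically but subscribe to video manually (sketch; must be called before enterRoom, as noted in the function list):
// Audio is received automatically; video is only received after startRemoteView is called
trtcCloud.setDefaultStreamRecvMode(true, false);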
TRTCCloud was originally designed to work in the singleton mode, which limited the ability to watch concurrently in multiple rooms. By calling this API, you can create multiple TRTCCloud instances, so that you can enter multiple different rooms at the same time to listen/watch audio/video streams. However, the ability to publish audio/video streams in multiple TRTCCloud instances will be limited.

//In the small room that needs interaction, enter the room as an anchor and push audio and video streams
TRTCCloud mainCloud = TRTCCloud.sharedInstance(mContext);
TRTCCloudDef.TRTCParams mainParams = new TRTCCloudDef.TRTCParams();
//Fill your params
mainParams.role = TRTCCloudDef.TRTCRoleAnchor;
mainCloud.enterRoom(mainParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
//...
mainCloud.startLocalPreview(true, videoView);
mainCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
//In the large room that only needs to watch, enter the room as an audience and pull audio and video streams
TRTCCloud subCloud = mainCloud.createSubCloud();
TRTCCloudDef.TRTCParams subParams = new TRTCCloudDef.TRTCParams();
//Fill your params
subParams.role = TRTCCloudDef.TRTCRoleAudience;
subCloud.enterRoom(subParams, TRTCCloudDef.TRTC_APP_SCENE_LIVE);
//...
subCloud.startRemoteView(userId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, view);
//...
//Exit from new room and release it.
subCloud.exitRoom();
mainCloud.destroySubCloud(subCloud);

The same user can enter multiple rooms with different roomId values by using the same userId, but cannot use the same userId to enter the same room with a specified roomId more than once. Streams can be pushed in multiple TRTCCloud instances at the same time, and APIs related to local audio/video can also be called in the sub instance, but pay attention to the limitations that apply to a TRTCCloud subinstance.
void destroySubCloud |
Param | DESC |
subCloud | |
void startPublishMediaStream | (TRTCPublishTarget target |
| TRTCStreamEncoderParam params |
| TRTCStreamMixingConfig config) |
Param | DESC |
config | The On-Cloud MixTranscoding settings. This parameter is invalid in the relay-to-CDN mode. It is required if you transcode and publish the stream to a CDN or to a TRTC room. For details, see TRTCStreamMixingConfig. |
params | The encoding settings. This parameter is required if you transcode and publish the stream to a CDN or to a TRTC room. If you relay to a CDN without transcoding, to improve the relaying stability and playback compatibility, we also recommend you set this parameter. For details, see TRTCStreamEncoderParam. |
target | The publishing destination. You can relay the stream to a CDN (after transcoding or without transcoding) or transcode and publish the stream to a TRTC room. For details, see TRTCPublishTarget. |
You can relay to multiple CDNs at the same time by adding multiple URLs to target. You will be charged only once for transcoding even if you relay to multiple CDNs.
void updatePublishMediaStream | (final String taskId |
| TRTCPublishTarget target |
| TRTCStreamEncoderParam params |
| TRTCStreamMixingConfig config) |
Param | DESC |
config | The On-Cloud MixTranscoding settings. This parameter is invalid in the relay-to-CDN mode. It is required if you transcode and publish the stream to a CDN or to a TRTC room. For details, see TRTCStreamMixingConfig. |
params | The encoding settings. This parameter is required if you transcode and publish the stream to a CDN or to a TRTC room. If you relay to a CDN without transcoding, to improve the relaying stability and playback compatibility, we recommend you set this parameter. For details, see TRTCStreamEncoderParam. |
target | The publishing destination. You can relay the stream to a CDN (after transcoding or without transcoding) or transcode and publish the stream to a TRTC room. For details, see TRTCPublishTarget. |
taskId |
void stopPublishMediaStream | (final String taskId) |
Param | DESC |
taskId |
If taskId is left empty, the TRTC backend will end all tasks you started through startPublishMediaStream. You can leave it empty if you have started only one task or want to stop all publishing tasks started by you.
void startLocalPreview | (boolean frontCamera |
| TXCloudVideoView view) |
If this API is called before enterRoom, the SDK will only enable the camera and wait until enterRoom is called before starting push. If it is called after enterRoom, the SDK will enable the camera and automatically start pushing the video stream. When the camera is ready, you will receive the onCameraDidReady callback in TRTCCloudListener.
Param | DESC |
frontCamera | true: front camera; false: rear camera |
view | Control that carries the video image |
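A minimal preview sketch (videoView is a TXCloudVideoView placed in your layout; trtcCloud is the TRTCCloud instance):
// Start preview with the front camera and render it into videoView
trtcCloud.startLocalPreview(true, videoView);
// ... later, stop camera preview
trtcCloud.stopLocalPreview();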
If you want to preview the camera image and adjust the beauty filter parameters through BeautyManager before going live, you can either: call startLocalPreview before calling enterRoom, or call startLocalPreview and muteLocalVideo(true) after calling enterRoom.
void updateLocalView | (TXCloudVideoView view) |
void muteLocalVideo | (int streamType |
| boolean mute) |
This API can achieve the same effect as startLocalPreview/stopLocalPreview when TRTCVideoStreamTypeBig is specified, but has higher performance and response speed. The startLocalPreview/stopLocalPreview APIs need to enable/disable the camera, which are hardware device-related operations, so they are very time-consuming. In contrast, muteLocalVideo only needs to pause or allow the data stream at the software level, so it is more efficient and more suitable for scenarios where frequent enabling/disabling are needed. After local video publishing is paused, other members in the same room will receive the onUserVideoAvailable(userId, false) callback notification; after it is resumed, they will receive the onUserVideoAvailable(userId, true) callback notification.
Param | DESC |
mute | true: pause; false: resume |
streamType | Specify for which video stream to pause (or resume). Only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported |
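For example, to temporarily hide the camera image without releasing the camera (sketch):
// Pause publishing the big (camera) video stream
trtcCloud.muteLocalVideo(TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, true);
// Resume it later
trtcCloud.muteLocalVideo(TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, false);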
void setVideoMuteImage | (Bitmap image |
| int fps) |
After you call muteLocalVideo(true) to pause the local video image, you can set a placeholder image by calling this API. Then, other users in the room will see this image instead of a black screen.
Param | DESC |
fps | Frame rate of the placeholder image. Minimum value: 5. Maximum value: 10. Default value: 5 |
image | Placeholder image. A null value means that no more video stream data will be sent after muteLocalVideo. The default value is null. |
void startRemoteView | (String userId |
| int streamType |
| TXCloudVideoView view) |
This API subscribes to the video stream of the remote user specified by userId and renders it to the rendering control specified by the view parameter. You can set the display mode of the video image through setRemoteRenderParams. If you already know the userId of a user who has a video stream in the room, you can directly call startRemoteView to subscribe to the user's video image; if you do not, wait for the notifications delivered by the SDK after enterRoom.
Param | DESC |
streamType | Video stream type of the userId specified for watching: HD big image: TRTCVideoStreamTypeBig Smooth small image: TRTCVideoStreamTypeSmall (the remote user should enable dual-channel encoding through enableEncSmallVideoStream for this parameter to take effect) Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub |
userId | ID of the specified remote user |
view | Rendering control that carries the video image |
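A minimal subscription sketch (remoteView is a TXCloudVideoView for the remote image; remoteUserId is the ID of a user currently publishing video):
// Subscribe to the HD big image of the remote user and render it into remoteView
trtcCloud.startRemoteView(remoteUserId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, remoteView);
// Stop watching that user later and release the rendering control
trtcCloud.stopRemoteView(remoteUserId, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG);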
The SDK supports watching the big image and substream image of the same userId at the same time, but does not support watching the big image and small image at the same time. Only when the remote user specified by userId enables dual-channel encoding through enableEncSmallVideoStream can the user's small image be viewed. If the requested small image of the specified userId does not exist, the SDK will switch to the big image of the user by default.
void updateRemoteView | (String userId |
| int streamType |
| TXCloudVideoView view) |
Param | DESC |
streamType | Type of the stream for which to set the preview window (only TRTCVideoStreamTypeBig and TRTCVideoStreamTypeSub are supported) |
userId | ID of the specified remote user |
view | Control that carries the video image |
void stopRemoteView | (String userId |
| int streamType) |
Param | DESC |
streamType | Video stream type of the userId specified for watching: HD big image: TRTCVideoStreamTypeBig Smooth small image: TRTCVideoStreamTypeSmall Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub |
userId | ID of the specified remote user |
void muteRemoteVideoStream | (String userId |
| int streamType |
| boolean mute) |
Param | DESC |
mute | Whether to pause receiving |
streamType | Specify for which video stream to pause (or resume): HD big image: TRTCVideoStreamTypeBig Smooth small image: TRTCVideoStreamTypeSmall Substream image (usually used for screen sharing): TRTCVideoStreamTypeSub |
userId | ID of the specified remote user |
void muteAllRemoteVideoStreams | (boolean mute) |
Param | DESC |
mute | Whether to pause receiving |
void setVideoEncoderParam |
Param | DESC |
param | It is used to set relevant parameters for the video encoder. For more information, please see TRTCVideoEncParam. |
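A typical configuration sketch; the TRTCVideoEncParam field names and resolution constants below (videoResolution, videoResolutionMode, videoFps, videoBitrate, TRTC_VIDEO_RESOLUTION_960_540, TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT) are the commonly used ones and should be checked against your SDK version:
TRTCCloudDef.TRTCVideoEncParam encParam = new TRTCCloudDef.TRTCVideoEncParam();
encParam.videoResolution = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_960_540;
encParam.videoResolutionMode = TRTCCloudDef.TRTC_VIDEO_RESOLUTION_MODE_PORTRAIT;
encParam.videoFps = 15;          // frame rate in fps
encParam.videoBitrate = 1200;    // bitrate in Kbps
trtcCloud.setVideoEncoderParam(encParam);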
void setNetworkQosParam |
Param | DESC |
param | It is used to set relevant parameters for network quality control. For details, please refer to TRTCNetworkQosParam. |
void setLocalRenderParams |
Param | DESC |
params |
void setRemoteRenderParams | (String userId |
| int streamType |
|
Param | DESC |
params | |
streamType | It can be set to the primary stream image (TRTCVideoStreamTypeBig) or substream image (TRTCVideoStreamTypeSub). |
userId | ID of the specified remote user |
int enableEncSmallVideoStream | (boolean enable |
|
Param | DESC |
enable | Whether to enable small image encoding. Default value: false |
smallVideoEncParam | Video parameters of small image stream |
int setRemoteVideoStreamType | (String userId |
| int streamType) |
Param | DESC |
streamType | Video stream type, i.e., big image or small image. Default value: big image |
userId | ID of the specified remote user |
void snapshotVideo | (String userId |
| int streamType |
| int sourceType |
|
Param | DESC |
sourceType | Video image source, which can be the video stream image (TRTCSnapshotSourceTypeStream, generally in higher definition), the video rendering image (TRTCSnapshotSourceTypeView), or the capture picture (TRTCSnapshotSourceTypeCapture, which produces a clearer screenshot). |
streamType | Video stream type, which can be the primary stream image (TRTCVideoStreamTypeBig, generally for camera) or substream image (TRTCVideoStreamTypeSub, generally for screen sharing) |
userId | User ID. A null value indicates to screencapture the local video. |
void setPerspectiveCorrectionPoints | (String userId |
| PointF[] srcPoints |
| PointF[] dstPoints) |
Param | DESC |
dstPoints | The coordinates of the four vertices of the target corrected area, passed in the order of top-left, bottom-left, top-right, bottom-right. All coordinates need to be normalized to the [0,1] range based on the render view width and height. Pass null to stop perspective correction of the corresponding stream. |
srcPoints | The coordinates of the four vertices of the original stream image area, passed in the order of top-left, bottom-left, top-right, bottom-right. All coordinates need to be normalized to the [0,1] range based on the render view width and height. Pass null to stop perspective correction of the corresponding stream. |
userId | The userId corresponding to the target stream. If a null value is specified, the correction is applied to the local stream. |
void setGravitySensorAdaptiveMode | (int mode) |
Param | DESC |
mode | Gravity sensing mode. For details, see TRTC_GRAVITY_SENSOR_ADAPTIVE_MODE_DISABLE, TRTC_GRAVITY_SENSOR_ADAPTIVE_MODE_FILL_BY_CENTER_CROP, and TRTC_GRAVITY_SENSOR_ADAPTIVE_MODE_FIT_WITH_BLACK_BORDER. Default value: TRTC_GRAVITY_SENSOR_ADAPTIVE_MODE_DISABLE. |
void startLocalAudio | (int quality) |
Param | DESC |
quality | Sound quality TRTC_AUDIO_QUALITY_SPEECH - Smooth: sample rate: 16 kHz; mono channel; audio bitrate: 16 Kbps. This is suitable for audio call scenarios, such as online meeting and audio call. TRTC_AUDIO_QUALITY_DEFAULT - Default: sample rate: 48 kHz; mono channel; audio bitrate: 50 Kbps. This is the default sound quality of the SDK and recommended if there are no special requirements. TRTC_AUDIO_QUALITY_MUSIC - HD: sample rate: 48 kHz; dual channel + full band; audio bitrate: 128 Kbps. This is suitable for scenarios where Hi-Fi music transfer is required, such as online karaoke and music live streaming. |
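For example (sketch):
// Start capturing and publishing the mic with the default sound quality
trtcCloud.startLocalAudio(TRTCCloudDef.TRTC_AUDIO_QUALITY_DEFAULT);
// ... later, stop local audio capturing and publishing
trtcCloud.stopLocalAudio();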
void muteLocalAudio | (boolean mute) |
muteLocalAudio(true) does not release the mic permission; instead, it continues to send mute packets with an extremely low bitrate. Therefore, using muteLocalAudio instead of stopLocalAudio is recommended in scenarios where the requirement for recording file quality is high.
Param | DESC |
mute | true: mute; false: unmute |
void muteRemoteAudio | (String userId |
| boolean mute) |
Param | DESC |
mute | true: mute; false: unmute |
userId | ID of the specified remote user |
The mute status will be reset to false after room exit (exitRoom).
void muteAllRemoteAudio | (boolean mute) |
Param | DESC |
mute | true: mute; false: unmute |
The mute status will be reset to false after room exit (exitRoom).
void setAudioRoute | (int route) |
Param | DESC |
route | Audio route, i.e., whether the audio is output by speaker or receiver. Default value: TRTC_AUDIO_ROUTE_SPEAKER |
void setRemoteAudioVolume | (String userId |
| int volume) |
To mute a remote user, call setRemoteAudioVolume(userId, 0).
Param | DESC |
userId | ID of the specified remote user |
volume | Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
void setAudioCaptureVolume | (int volume) |
Param | DESC |
volume | Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
void setAudioPlayoutVolume | (int volume) |
Param | DESC |
volume | Volume. 100 is the original volume. Value range: [0,150]. Default value: 100 |
void enableAudioVolumeEvaluation | (boolean enable |
|
Param | DESC |
enable | Whether to enable the volume prompt. It’s disabled by default. |
params |
Please call this API before startLocalAudio.
int startAudioRecording |
If you do not call stopAudioRecording before room exit, it will be automatically stopped after room exit.
Param | DESC |
param |
void startLocalRecording |
Param | DESC |
params |
void setRemoteAudioParallelParams | (TRTCCloudDef.TRTCAudioParallelParams params) |
Param | DESC |
params | Audio parallel parameter. For more information, please see TRTCAudioParallelParams |
void enable3DSpatialAudioEffect | (boolean enabled) |
Param | DESC |
enabled | Whether to enable 3D spatial effect. It’s disabled by default. |
void updateSelf3DSpatialPosition | (int[] position |
| float[] axisForward |
| float[] axisRight |
| float[] axisUp) |
Param | DESC |
axisForward | The unit vector of the forward axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn. |
axisRight | The unit vector of the right axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn. |
axisUp | The unit vector of the up axis of user coordinate system. The three values represent the forward, right and up coordinate values in turn. |
position | The coordinate of self in the world coordinate system. The three values represent the forward, right and up coordinate values in turn. |
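A sketch of a listener standing at the origin of the world coordinate system and facing forward; the axis arrays are unit vectors expressed in the (forward, right, up) order described above:
int[] position = {0, 0, 0};               // forward, right, up coordinates in the world coordinate system
float[] axisForward = {1.0f, 0.0f, 0.0f}; // facing straight ahead
float[] axisRight   = {0.0f, 1.0f, 0.0f};
float[] axisUp      = {0.0f, 0.0f, 1.0f};
trtcCloud.updateSelf3DSpatialPosition(position, axisForward, axisRight, axisUp);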
void updateRemote3DSpatialPosition | (String userId |
| int[] position) |
Param | DESC |
position | The coordinate of self in the world coordinate system. The three values represent the forward, right and up coordinate values in turn. |
userId | ID of the specified remote user. |
void set3DSpatialReceivingRange | (String userId |
| int range) |
Param | DESC |
range | Maximum attenuation range of the audio stream. |
userId | ID of the specified user. |
void setWatermark | (Bitmap image |
| int streamType |
| float x |
| float y |
| float width) |
The watermark position is determined by the rect parameter, which is a quadruple in the format of (x, y, width, height). For example, if the rect parameter is set to (0.1, 0.1, 0.2, 0.0), the top-left corner of the watermark will sit at 10% of the encoded width and height, the watermark width will be 20% of the encoded width, and the height will be calculated automatically based on the aspect ratio of the watermark image.
Param | DESC |
image | Watermark image, which must be a PNG image with transparent background |
rect | Unified coordinates of the watermark relative to the encoded resolution. Value range of x, y, width, and height: 0–1. |
streamType | Specify for which image to set the watermark. For more information, please see TRTCVideoStreamType. |
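For example, matching the (0.1, 0.1, 0.2, 0.0) layout described above (sketch; watermarkBitmap is a transparent-background PNG loaded by your app):
// x = 0.1, y = 0.1, width = 0.2 of the encoded image; the height is derived from the image's aspect ratio
trtcCloud.setWatermark(watermarkBitmap, TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_BIG, 0.1f, 0.1f, 0.2f);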
If you want to set watermarks for both the primary stream image and the substream image, you need to call this API twice with streamType set to different values.
TXAudioEffectManager is a sound effect management API, through which you can implement features such as background music playback and short sound effects (for short sound effect files, set the isShortFile parameter to true).
void startScreenCapture | (int streamType |
| TRTCCloudDef.TRTCVideoEncParam encParams |
| TRTCCloudDef.TRTCScreenShareParams shareParams) |
Param | DESC |
encParams | Encoding parameters. For more information, please see TRTCCloudDef#TRTCVideoEncParam. If encParams is set to null , the SDK will automatically use the previously set encoding parameter. |
shareParams | For more information, please see TRTCCloudDef#TRTCScreenShareParams. You can use the floatingView parameter to pop up a floating window (you can also use Android's WindowManager parameter to configure automatic pop-up). |
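A minimal sketch; passing null for encParams reuses the previously set encoding parameters, as noted above, and the substream constant and stopScreenCapture call should be checked against your SDK version:
TRTCCloudDef.TRTCScreenShareParams shareParams = new TRTCCloudDef.TRTCScreenShareParams();
// shareParams.floatingView = myFloatingView;   // optional floating window, see TRTCScreenShareParams
trtcCloud.startScreenCapture(TRTCCloudDef.TRTC_VIDEO_STREAM_TYPE_SUB, null, shareParams);
// ... later, stop screen sharing
trtcCloud.stopScreenCapture();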
void setSubStreamEncoderParam |
Param | DESC |
param |
void enableCustomVideoCapture | (int streamType |
| boolean enable) |
Param | DESC |
enable | Whether to enable. Default value: false |
streamType | Specify video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image). |
void sendCustomVideoData | (int streamType |
|
Param | DESC |
frame | Video data. If the memory-based delivery scheme is used, please set the data field; if the video memory-based delivery scheme is used, please set the TRTCTexture field. For more information, please see TRTCCloudDef.TRTCVideoFrame. |
streamType | Specify video stream type (TRTCVideoStreamTypeBig: HD big image; TRTCVideoStreamTypeSub: substream image). |
We recommend you call generateCustomPTS to get the timestamp value of a video frame immediately after capturing it, so as to achieve the best audio/video sync effect.
void enableCustomAudioCapture | (boolean enable) |
Param | DESC |
enable | Whether to enable. Default value: false |
void sendCustomAudioData |
The audio frames delivered to the SDK must be in the TRTCAudioFrameFormatPCM format.
Param | DESC |
frame | Audio data |
void enableMixExternalAudioFrame | (boolean enablePublish |
| boolean enablePlayout) |
Param | DESC |
enablePlayout | Whether the mixed audio track should be played back locally. Default value: false |
enablePublish | Whether the mixed audio track should be published to remote users. Default value: false |
If you set both enablePublish and enablePlayout as false, the custom audio track will be completely closed.
int mixExternalAudioFrame |
For example, if 50 is returned, it indicates that the buffer pool has 50 ms of audio data. As long as you call this API again within 50 ms, the SDK can make sure that continuous audio data is mixed. If the value returned is 100 or greater, you can wait until an audio frame is played before calling the API again. If the value returned is smaller than 100, then there isn't enough data in the buffer pool, and you should feed more audio data into the SDK until the data in the buffer pool is above the safety level.
Set the fields of the audio frame as follows:
data: audio frame buffer. Audio frames must be in PCM format. Each frame can be 5-100 ms (20 ms is recommended) in duration. Assume that the sample rate is 48000 and the audio is mono-channel; then the frame size would be 48000 x 0.02s x 1 x 16 bit = 15360 bit = 1920 bytes.
sampleRate: sample rate. Valid values: 16000, 24000, 32000, 44100, 48000
channel: number of sound channels (if dual-channel is used, data is interleaved). Valid values: 1 (mono-channel); 2 (dual channel)
timestamp: timestamp (ms). Set it to the timestamp when audio frames are captured, which you can obtain by calling generateCustomPTS after getting an audio frame.
Param | DESC |
frame | Audio data |
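A sketch of feeding one 20 ms mono PCM frame at 48 kHz, using the field layout listed above (pcmBuffer is a 1920-byte PCM buffer your app has filled):
TRTCCloudDef.TRTCAudioFrame audioFrame = new TRTCCloudDef.TRTCAudioFrame();
audioFrame.data = pcmBuffer;                          // 1920 bytes: 48000 x 0.02s x 1 channel x 16 bit
audioFrame.sampleRate = 48000;
audioFrame.channel = 1;
audioFrame.timestamp = trtcCloud.generateCustomPTS(); // capture-time timestamp in ms
int cachedMs = trtcCloud.mixExternalAudioFrame(audioFrame);
// cachedMs reports how many milliseconds of audio are buffered; feed more data before it runs out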
If the value returned is 0 or greater, it represents the current size of the buffer pool; if the value returned is smaller than 0, it means that an error occurred. -1 indicates that you didn't call enableMixExternalAudioFrame to enable custom audio tracks.
void setMixExternalAudioVolume | (int publishVolume |
| int playoutVolume) |
Param | DESC |
playoutVolume | Set the playback volume, from 0 to 100; -1 means no change |
publishVolume | Set the publish volume, from 0 to 100; -1 means no change |
Pass the obtained timestamp to the timestamp field in TRTCVideoFrame or TRTCAudioFrame.
int setLocalVideoProcessListener | (int pixelFormat |
| int bufferType |
|
After you set this callback, the SDK will deliver the captured video frames through the listener you set so that a third-party beauty filter component can further process them. Then, the SDK will encode and send the processed video frames.
Param | DESC |
bufferType | Specify the format of the data called back. Currently, it supports: TRTC_VIDEO_BUFFER_TYPE_TEXTURE: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_Texture_2D. TRTC_VIDEO_BUFFER_TYPE_BYTE_BUFFER: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420. TRTC_VIDEO_BUFFER_TYPE_BYTE_ARRAY: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420. |
listener | |
pixelFormat | Specify the format of the pixel called back. Currently, it supports: TRTC_VIDEO_PIXEL_FORMAT_Texture_2D: video memory-based texture scheme. TRTC_VIDEO_PIXEL_FORMAT_I420: memory-based data scheme. |
int setLocalVideoRenderListener | (int pixelFormat |
| int bufferType |
|
pixelFormat specifies the format of the data called back. Currently, Texture2D, I420, and RGBA formats are supported. bufferType specifies the buffer type: BYTE_BUFFER is suitable for the JNI layer, while BYTE_ARRAY can be used in direct operations at the Java layer.
Param | DESC |
bufferType | Specify the data structure of the video frame: TRTC_VIDEO_BUFFER_TYPE_TEXTURE: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_Texture_2D. TRTC_VIDEO_BUFFER_TYPE_BYTE_BUFFER: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420 or TRTC_VIDEO_PIXEL_FORMAT_RGBA. TRTC_VIDEO_BUFFER_TYPE_BYTE_ARRAY: suitable when pixelFormat is set to TRTC_VIDEO_PIXEL_FORMAT_I420 or TRTC_VIDEO_PIXEL_FORMAT_RGBA. |
listener | Callback of custom video rendering. The callback is returned once for each video frame |
pixelFormat | Specify the format of the video frame, such as: TRTC_VIDEO_PIXEL_FORMAT_Texture_2D: OpenGL texture format, which is suitable for GPU processing and has a high processing efficiency. TRTC_VIDEO_PIXEL_FORMAT_I420: standard I420 format, which is suitable for CPU processing and has a poor processing efficiency. TRTC_VIDEO_PIXEL_FORMAT_RGBA: RGBA format, which is suitable for CPU processing and has a poor processing efficiency. |
int setRemoteVideoRenderListener | (String userId |
| int pixelFormat |
| int bufferType |
|
pixelFormat specifies the format of the called back data, such as NV12, I420, and 32BGRA. bufferType specifies the buffer type: PixelBuffer has the highest efficiency, while NSData makes the SDK perform a memory conversion internally, which will result in extra performance loss.
Param | DESC |
bufferType | Specify video data structure type. |
listener | listen for custom rendering |
pixelFormat | Specify the format of the pixel called back |
userId | ID of the specified remote user |
startRemoteView(nil) needs to be called to get the video stream of the remote user (view can be set to nil for this end); otherwise, there will be no data called back.
void setAudioFrameListener |
int setCapturedAudioFrameCallbackFormat |
Param | DESC |
format | Audio data callback format |
int setLocalProcessedAudioFrameCallbackFormat |
Param | DESC |
format | Audio data callback format |
int setMixedPlayAudioFrameCallbackFormat |
Param | DESC |
format | Audio data callback format |
void enableCustomAudioRendering | (boolean enable) |
Param | DESC |
enable | Whether to enable custom audio playback. It’s disabled by default. |
void getCustomAudioRenderingFrame |
sampleRate: sample rate (required). Valid values: 16000, 24000, 32000, 44100, 48000
channel: number of sound channels (required). 1: mono-channel; 2: dual-channel; if dual-channel is used, data is interleaved.
data: the buffer used to get audio data. You need to allocate memory for the buffer based on the duration of an audio frame.
Param | DESC |
audioFrame | Audio frames |
Before calling this API, you need to set sampleRate and channel in audioFrame, and allocate memory for one frame of audio in advance. The SDK then fills the buffer with audio data matching the configured sampleRate and channel.
boolean sendCustomCmdMsg | (int cmdID |
| byte[] data |
| boolean reliable |
| boolean ordered) |
Other users in the room will receive the message through the onRecvCustomCmdMsg callback in TRTCCloudListener.
Param | DESC |
cmdID | Message ID. Value range: 1–10 |
data | Message to be sent. The maximum length of one single message is 1 KB. |
ordered | Whether orderly sending is enabled, i.e., whether the data packets should be received in the same order in which they are sent; if so, a certain delay will be caused. |
reliable | Whether reliable sending is enabled. Reliable sending can achieve a higher success rate but with a longer reception delay than unreliable sending. |
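For example (sketch):
// Broadcast a small reliable, ordered text payload on command channel 1
byte[] payload = "gift:rose".getBytes();
trtcCloud.sendCustomCmdMsg(1, payload, true, true);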
reliable and ordered must be set to the same value (true or false); they cannot be set to different values currently. We recommend using different cmdID values for messages of different types, which can reduce message delay when orderly sending is required.
boolean sendSEIMsg | (byte[] data |
| int repeatCount) |
Other users in the room will receive the message through the onRecvSEIMsg callback in TRTCCloudListener.
Param | DESC |
data | Data to be sent, which can be up to 1 KB (1,000 bytes) |
repeatCount | Data sending count |
The sending frequency, the size of a single message, and the total amount of data sent per second are all limited (these limits are shared with sendCustomCmdMsg). If a large amount of data is sent, the video bitrate will increase, which may reduce the video quality or even cause lagging. If multiple sends are required (repeatCount > 1), the data will be inserted into subsequent repeatCount video frames in a row for sending, which will increase the video bitrate. If repeatCount is greater than 1, the data will be sent multiple times, and the same message may be received multiple times in the onRecvSEIMsg callback; therefore, deduplication is required.
int startSpeedTest |
Param | DESC |
params | speed test options |
void setLogLevel | (int level) |
Param | DESC |
level |
void setConsoleEnabled | (boolean enabled) |
Param | DESC |
enabled | Specify whether to enable it, which is disabled by default |
void setLogCompressEnabled | (boolean enabled) |
Param | DESC |
enabled | Specify whether to enable it, which is enabled by default |
void setLogDirPath | (String path) |
By default, logs are stored at %appdata%/liteav/log on Windows, in sandbox Documents/log on iOS/macOS, and in /app directory/files/log/liteav/ on Android.
Param | DESC |
path | Log storage path |
void setLogListener |
void showDebugView | (int showType) |
Param | DESC |
showType | 0: does not display; 1: displays lite edition (only with audio/video information); 2: displays full edition (with audio/video information and event information). |
public TRTCViewMargin | (float leftMargin |
| float rightMargin |
| float topMargin |
| float bottomMargin) |
This API must be called before showDebugView for it to take effect.
Param | DESC |
margin | Inner margin of the dashboard. It should be noted that this is based on the percentage of parentView. Value range: 0–1 |
userId | User ID |
String callExperimentalAPI | (String jsonStr) |
int enablePayloadPrivateEncryption | (boolean enabled |
|
Param | DESC |
config | Configure the algorithm and key for private encryption of media streams, please see TRTCPayloadPrivateEncryptionConfig. |
enabled | Whether to enable media stream private encryption. |