The TXLivePusher SDK is mainly used to publish streams for LEB (ultra-low latency streaming). It can publish audio and video the browser captures from the camera, screen, or a local media file to live streaming servers via WebRTC.
Note:
With WebRTC, each domain name can be used to publish up to 100 concurrent streams by default. If you want to publish more streams, please submit a ticket.
Basics
Below are some basics you need to know before integrating the SDK.
Splicing publishing URLs
To use Tencent Cloud live streaming services, you need to splice publishing URLs in the format required by Tencent Cloud, which consists of four parts.
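As an illustration, the four parts can be assembled with a small helper. The helper and all values below (domain, application name, stream name, authentication query) are placeholders for your own, not part of the SDK:

```javascript
// Sketch: assemble an LEB publishing URL from its four parts:
// protocol prefix, publishing domain, AppName, and StreamName.
// The txSecret/txTime query is only needed when publishing
// authentication is enabled for the domain.
function buildPushUrl(domain, appName, streamName, authQuery) {
  // WebRTC publishing URLs use the webrtc:// prefix.
  var url = 'webrtc://' + domain + '/' + appName + '/' + streamName;
  return authQuery ? url + '?' + authQuery : url;
}

console.log(buildPushUrl('example.com', 'live', 'stream001'));
// webrtc://example.com/live/stream001
```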
An authentication key is not required. You can enable publishing authentication if you need hotlink protection. For details, please see Splicing CSS URLs.
Browser support
Publishing for LEB relies on WebRTC and therefore can only be used on OS and browsers that support WebRTC.
The audio/video capturing feature is poorly supported on mobile browsers. For example, mobile browsers do not support screen recording, and only iOS 14.3 and above allow requesting camera access. Therefore, the publishing SDK is mainly used on desktop browsers. The latest version of Chrome, Firefox, and Safari all support publishing for LEB.
To publish streams on mobile browsers, use the MLVB SDK.
SDK Integration
Step 1. Prepare the page
Add an initialization script to the (desktop) page from which streams are to be published.
<script src="https://video.sdk.qcloudecdn.com/web/TXLivePusher-2.1.0.min.js" charset="utf-8"></script>
Note:
The script needs to be imported into the body part of the HTML code. If it is imported into the head part, an error will be reported.
Step 2. Add a container to the HTML page
Add a player container to the section of the page where local video is to be played. This is achieved by adding a div and giving it a name, for example, id_local_video. Local video will be rendered in the container. To adjust the size of the container, style the div using CSS.
<div id="id_local_video" style="width:100%;height:500px;display:flex;align-items:center;justify-content:center;"></div>
Step 3. Publish streams
1. Generate an instance of the publishing SDK:
Generate an instance of the global object TXLivePusher. All subsequent operations will be performed via the instance.
var livePusher = new TXLivePusher();
2. Specify the local video player container:
Specify the div for the local video player container, which is where audio and video captured by the browser will be rendered.
livePusher.setRenderView('id_local_video');
Note:
The video element generated via setRenderView is unmuted by default. To mute video, obtain the video element using the code below.
document.getElementById('id_local_video').getElementsByTagName('video')[0].muted = true;
3. Set audio/video quality:
Audio/video quality should be set before capturing. You can specify quality parameters if the default settings do not meet your requirements.
livePusher.setVideoQuality('720p');
livePusher.setAudioQuality('standard');
livePusher.setProperty('setVideoFPS', 25);
4. Capture streams:
You can capture streams from the camera, mic, screen and local media files. If capturing is successful, the player container will start playing the audio/video captured.
livePusher.startCamera();
livePusher.startMicrophone();
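To capture the screen instead of the camera, the SDK provides startScreenCapture (desktop browsers only); verify the exact method name against the TXLivePusher API reference for your SDK version. A small dispatcher like the one below (an illustrative helper, not part of the SDK) can select a capture source:

```javascript
// Sketch: start capture from a named source. The method names
// (startCamera, startMicrophone, startScreenCapture) follow the
// TXLivePusher API; confirm them against the SDK reference.
function startCapture(pusher, source) {
  switch (source) {
    case 'camera':
      return pusher.startCamera();
    case 'microphone':
      return pusher.startMicrophone();
    case 'screen':
      return pusher.startScreenCapture(); // desktop browsers only
    default:
      throw new Error('unknown capture source: ' + source);
  }
}
```

For example, `startCapture(livePusher, 'screen')` would start screen sharing on a supported desktop browser.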
5. Publish streams:
Pass in the LEB publishing URL to start publishing streams. For the format of publishing URLs, please see Splicing CSS URLs. You need to replace the prefix rtmp:// with webrtc://.
livePusher.startPush('webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx');
Note:
Before publishing, make sure that audio/video streams are captured successfully; otherwise, the call to the publishing API will fail. You can use the code below to start publishing automatically after audio/video is captured, that is, after the callback for capturing the first audio or video frame is received. If both audio and video are captured, publishing starts only after both first-frame callbacks are received.
var hasVideo = false;
var hasAudio = false;
var isPush = false;
livePusher.setObserver({
  onCaptureFirstAudioFrame: function() {
    hasAudio = true;
    if (hasVideo && !isPush) {
      isPush = true;
      livePusher.startPush('webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx');
    }
  },
  onCaptureFirstVideoFrame: function() {
    hasVideo = true;
    if (hasAudio && !isPush) {
      isPush = true;
      livePusher.startPush('webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx');
    }
  }
});
6. Stop publishing:
livePusher.stopPush();
7. Stop capturing audio and video:
livePusher.stopCamera();
livePusher.stopMicrophone();
Advanced Features
Compatibility
The SDK provides a static method to check whether a browser supports WebRTC.
TXLivePusher.checkSupport().then(function(data) {
  // Whether the browser supports WebRTC
  if (data.isWebRTCSupported) {
    console.log('WebRTC is supported');
  } else {
    console.log('WebRTC is not supported');
  }
  // Whether the browser supports H.264 encoding
  if (data.isH264EncodeSupported) {
    console.log('H.264 encoding is supported');
  } else {
    console.log('H.264 encoding is not supported');
  }
});
Event callbacks
The SDK supports callback event notifications. You can set an observer to receive callbacks of the SDK's status and WebRTC-related statistics. For details, see TXLivePusherObserver.
livePusher.setObserver({
  // Warning code and message
  onWarning: function(code, msg) {
    console.log(code, msg);
  },
  // Publishing status update
  onPushStatusUpdate: function(status, msg) {
    console.log(status, msg);
  },
  // Statistics update
  onStatisticsUpdate: function(data) {
    console.log('video fps is ' + data.video.framesPerSecond);
  }
});
Device management
You can use a device management instance to get the device list, switch devices, and perform other device-related operations.
var deviceManager = livePusher.getDeviceManager();
deviceManager.getDevicesList().then(function(data) {
  data.forEach(function(device) {
    console.log(device.deviceId, device.deviceName);
  });
});
deviceManager.switchCamera('camera_device_id');
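When multiple devices are present, you may want to pick one by name before switching. The helper below is purely illustrative; it only assumes each entry has the deviceId and deviceName fields shown in the list above:

```javascript
// Sketch: find the first device whose name contains a keyword,
// falling back to the first device in the list (or null if empty).
function pickDevice(devices, keyword) {
  for (var i = 0; i < devices.length; i++) {
    if (devices[i].deviceName.indexOf(keyword) !== -1) {
      return devices[i];
    }
  }
  return devices.length > 0 ? devices[0] : null;
}

// Usage with the device manager (the 'HD' keyword is a placeholder):
// deviceManager.getDevicesList().then(function(data) {
//   var cam = pickDevice(data, 'HD');
//   if (cam) deviceManager.switchCamera(cam.deviceId);
// });
```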