You can use this integration mode if you want to apply effects to your own streams or want greater flexibility and control.
Step 1. Import the SDK
Use the npm package:
npm install tencentcloud-webar
import { ArSdk } from 'tencentcloud-webar';
Alternatively, you can import the SDK with a script tag:
<script charset="utf-8" src="https://webar-static.tencent-cloud.com/ar-sdk/resources/latest/webar-sdk.umd.js"></script>
<script>
// Get the ArSdk class from window.AR
const { ArSdk } = window.AR;
// ...
</script>
Step 2. Initialize an instance
1. Initialize an SDK instance.
// Authentication parameters (see the license and signature docs)
const authData = {
  licenseKey: 'xxxxxxxxx',
  appId: 'xxx',
  authFunc: authFunc // see the sketch below
};
// Capture a camera stream to use as the SDK input
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: { width: w, height: h } // w and h are your target resolution
});
const config = {
  module: {
    beautify: true,     // load the beautification module
    segmentation: true  // load the segmentation module
  },
  auth: authData,
  input: stream,
  // Initial beautification settings
  beautify: {
    whiten: 0.1,
    dermabrasion: 0.3,
    eye: 0.2,
    chin: 0,
    lift: 0.1,
    shave: 0.2
  }
};
const sdk = new ArSdk(config);
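The authFunc passed above is a function you provide for authentication. Below is a minimal sketch, assuming it fetches a signature from a service you deploy and resolves to a { signature, timestamp } object; the endpoint URL and the return shape are assumptions, so follow the SDK's signature documentation:
async function authFunc() {
  // Placeholder endpoint: a signature service you deploy yourself, so your
  // token is never exposed in the browser. The { signature, timestamp }
  // return shape is an assumption; check the SDK's signature docs.
  const res = await fetch('https://your-server.example.com/web-ar-sign');
  const { signature, timestamp } = await res.json();
  return { signature, timestamp };
}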
Note:
Loading the effect and segmentation modules takes time and consumes resources, so enable only the modules you need during initialization. A module that is not enabled will not be loaded or initialized.
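For example, a minimal sketch of a configuration that loads only the segmentation module:
// Only segmentation is loaded; the beautification module is skipped entirely.
const sdk = new ArSdk({
  module: {
    beautify: false,
    segmentation: true
  },
  auth: authData,
  input: stream
});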
2. For input, you can also pass in a string or HTMLImageElement to process images.
const config = {
  auth: authData,
  input: 'https://xxx.png'
};
const sdk = new ArSdk(config);
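With image input, getOutput resolves to a data URL rather than a MediaStream (see the note in Step 4). A minimal sketch of displaying the processed image; waiting for the ready callback here is an assumption based on Step 4's description that the output is available only after initialization:
sdk.on('ready', async () => {
  // For image input, getOutput returns a string-type data URL
  const dataUrl = await sdk.getOutput();
  const img = document.createElement('img');
  img.src = dataUrl;
  document.body.appendChild(img);
});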
// Lists to hold the effects and filters fetched in the 'created' callback
let effectList = [];
let filterList = [];
sdk.on('created', () => {
  // Fetch the preset makeup effects and the built-in filters
  sdk.getEffectList({
    Type: 'Preset',
    Label: 'Makeup',
  }).then(res => {
    effectList = res;
  });
  sdk.getCommonFilter().then(res => {
    filterList = res;
  });
});
sdk.on('ready', () => {
  // Apply beautification, a preset effect, and a filter once the SDK is ready
  sdk.setBeautify({
    whiten: 0.2
  });
  sdk.setEffect({
    id: effectList[0].EffectId,
    intensity: 0.7
  });
  sdk.setFilter(filterList[0].EffectId, 0.5);
});
Step 3. Play the stream
Call ArSdk.prototype.getOutput to get the output stream.
The output streams you get in different callbacks vary slightly. Choose the one that fits your needs.
If you want to display a video image as quickly as possible, get and play the stream in the cameraReady callback. Because the SDK has not yet loaded its resources or completed initialization at this point, the original, unprocessed video will be played.
sdk.on('cameraReady', async () => {
  // Play the raw camera feed immediately; effects are applied once ready
  const output = await sdk.getOutput();
  const video = document.createElement('video');
  video.setAttribute('playsinline', '');
  video.setAttribute('autoplay', '');
  video.srcObject = output;
  document.body.appendChild(video);
  video.play();
});
If you want to play the video after the SDK is initialized and effects are applied, get and play the output stream in the ready callback.
sdk.on('ready', async () => {
  // The SDK is initialized; the output now has effects applied
  const output = await sdk.getOutput();
  const video = document.createElement('video');
  video.setAttribute('playsinline', '');
  video.setAttribute('autoplay', '');
  video.srcObject = output;
  document.body.appendChild(video);
  video.play();
});
Step 4. Get the output
After getting the MediaStream, you can use a live streaming SDK (for example, the TRTC web SDK or LEB web SDK) to publish the stream.
const output = await sdk.getOutput()
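For example, here is a minimal sketch of publishing the output with the TRTC web SDK. It assumes the v4-style TRTC.createClient / TRTC.createStream API; the sdkAppId, userId, userSig, and roomId values are placeholders you must supply:
import TRTC from 'trtc-js-sdk';
async function publishOutput(sdk) {
  const output = await sdk.getOutput();
  const client = TRTC.createClient({
    mode: 'rtc',
    sdkAppId: 0,      // placeholder: your TRTC application ID
    userId: 'user_1', // placeholder
    userSig: 'xxx'    // placeholder: generated on your server
  });
  await client.join({ roomId: 8888 }); // placeholder room ID
  // Wrap the processed tracks in a custom-capture TRTC stream
  const audioTrack = output.getAudioTracks()[0];
  const localStream = TRTC.createStream({
    videoSource: output.getVideoTracks()[0],
    ...(audioTrack ? { audioSource: audioTrack } : {})
  });
  await localStream.initialize();
  await client.publish(localStream);
}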
Note:
If the input passed in is an image, a string-type data URL will be returned. Otherwise, a MediaStream will be returned.
The video track of the output stream is processed in real time by the Tencent Effect SDK. The audio track (if any) is kept as-is.
getOutput is an async API. The output is returned only after the SDK is initialized and has generated a stream.
You can pass an FPS parameter to getOutput to specify the output frame rate (for example, 15). If you do not pass this parameter, the original frame rate is kept.
You can call getOutput multiple times to generate streams with different frame rates for different scenarios (for example, a high frame rate for preview and a low frame rate for publishing).
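For example, a minimal sketch that requests two outputs at different frame rates, assuming getOutput takes the frame rate as its argument as described above (confirm the exact signature against your SDK version):
// Assumed signature: getOutput(fps), per the note above
const previewOutput = await sdk.getOutput(30); // high frame rate for local preview
const publishOutput = await sdk.getOutput(15); // low frame rate for publishing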
Step 5. Configure more features
Updating the Input Stream
If you want to feed a new input stream to the SDK after switching devices or turning the camera on or off, you do not need to initialize the SDK again; just call sdk.updateInputStream to update the input stream.
The following code shows how to use updateInputStream to switch from the computer's default camera to an external camera.
// List the available camera devices
async function getVideoDeviceList() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const videoDevices = [];
  devices.forEach((device) => {
    if (device.kind === 'videoinput') {
      videoDevices.push({
        label: device.label,
        id: device.deviceId
      });
    }
  });
  return videoDevices;
}
// Render one button per camera
async function initDom() {
  const videoDeviceList = await getVideoDeviceList();
  let dom = '';
  videoDeviceList.forEach(device => {
    dom = `${dom}
      <button id="${device.id}" onclick='toggleVideo("${device.id}")'>${device.label}</button>
    `;
  });
  const div = document.createElement('div');
  div.id = 'container';
  div.innerHTML = dom;
  document.body.appendChild(div);
}
// Capture the selected camera and feed it to the SDK
async function toggleVideo(deviceId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      deviceId,
      width: 1280,
      height: 720,
    }
  });
  sdk.updateInputStream(stream);
}
initDom();
Pausing and Resuming Detection
You can call disable and enable to manually pause and resume detection. Pausing detection can reduce CPU usage.
<button id="disable">Disable detection</button>
<button id="enable">Enable detection</button>
const disableButton = document.getElementById('disable');
const enableButton = document.getElementById('enable');
// Use the DOM onclick property (note the lowercase name)
disableButton.onclick = () => {
  sdk.disable();
};
enableButton.onclick = () => {
  sdk.enable();
};
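For example, a small sketch that uses the standard Page Visibility API to pause detection automatically while the page is hidden:
// Pause detection while the tab is hidden to save CPU; resume on return
document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    sdk.disable();
  } else {
    sdk.enable();
  }
});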