```shell
npm install tencentcloud-webar
```

```javascript
import { ArSdk } from 'tencentcloud-webar';
```
```html
<script charset="utf-8" src="https://webar-static.tencent-cloud.com/ar-sdk/resources/latest/webar-sdk.umd.js"></script>
<script>
  // Receive the ArSdk class from window.AR
  const { ArSdk } = window.AR
  // ...
</script>
```
```javascript
// Get the authentication information
const authData = {
  licenseKey: 'xxxxxxxxx',
  appId: 'xxx',
  authFunc: authFunc // For details, see "Configuring and Using a License - Signature"
}
const config = {
  module: {
    beautify: true, // Whether to enable the effect module, which offers beautification and makeup effects as well as stickers
    segmentation: true // Whether to enable the keying module, which allows you to change the background
  },
  auth: authData, // The authentication information
  camera: { // Pass in the camera parameters
    width: 1280,
    height: 720
  },
  beautify: { // The effect parameters for initialization (optional)
    whiten: 0.1,
    dermabrasion: 0.3,
    eye: 0.2,
    chin: 0,
    lift: 0.1,
    shave: 0.2
    // ...
  }
}
// Pass in a config object to initialize the SDK
const sdk = new ArSdk(config)

let effectList = []
let filterList = []
sdk.on('created', () => {
  // You can display the effect and filter lists in the `created` callback. For details, see "SDK Integration - Parameters and APIs".
  sdk.getEffectList({
    Type: 'Preset',
    Label: 'Makeup'
  }).then(res => {
    effectList = res
  })
  sdk.getCommonFilter().then(res => {
    filterList = res
  })
})
// Call `setBeautify`, `setEffect`, or `setFilter` in the `ready` callback.
// For details, see "SDK Integration - Configuring Effects".
sdk.on('ready', () => {
  // Configure beautification effects
  sdk.setBeautify({
    whiten: 0.2
  })
  // Configure special effects
  sdk.setEffect({
    id: effectList[0].EffectId,
    intensity: 0.7
  })
  // Configure filters
  sdk.setFilter(filterList[0].EffectId, 0.5)
})
```
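The `authFunc` referenced above must return signature information generated on your server (see "Configuring and Using a License - Signature"). Below is a minimal sketch of the shape such a function might take; the server request is stubbed out, and the field names `signature` and `timestamp` are assumptions for illustration, not the SDK's documented contract:

```javascript
// Hypothetical sketch: the real signature must be computed on your backend,
// where your license token stays secret. The stub below stands in for that request.
async function fetchSignatureFromYourServer() {
  // In production, replace this stub with a real request to your backend
  return { signature: 'server-computed-signature', timestamp: Date.now() }
}

const authFunc = async () => {
  const { signature, timestamp } = await fetchSignatureFromYourServer()
  return { signature, timestamp }
}
```

Keeping the token off the client and returning only a short-lived signature is the usual design choice for license-gated browser SDKs.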
If you specify the `camera` parameter of `config`, the video data the SDK captures from the device's camera will be used as the input. We also offer some basic device management APIs. For details, see Step 6. Control Devices.

You can initialize a player in the `cameraReady` callback. Because the SDK hasn't loaded its resources or completed initialization at this point, the player can only play the original video.

```javascript
sdk.on('cameraReady', async () => {
  // Initialize a player of the SDK. `my-dom-id` is the ID of the player's container.
  const player = await sdk.initLocalPlayer('my-dom-id')
  // Play the video
  await player.play()
})
```
Alternatively, initialize the player in the `ready` callback. By then the SDK has finished initializing, so effects are applied from the start of playback.

```javascript
sdk.on('ready', async () => {
  // Initialize a player of the SDK. `my-dom-id` is the ID of the player's container.
  const player = await sdk.initLocalPlayer('my-dom-id')
  // Play the video
  await player.play()
})
```
The player created by `initLocalPlayer` is muted by default. If you unmute it, there may be echoes. To publish audio, use the stream returned by the `getOutput` API.

The player returned by `initLocalPlayer` is integrated with the following APIs:

| API | Description | Request Parameter | Return Value |
| --- | --- | --- | --- |
| play | Plays the video. | - | Promise |
| pause | Pauses the video. This does not stop the stream; you can resume playback. | - | - |
| stop | Stops the video. This stops the stream. | - | - |
| mute | Mutes the video. | - | - |
| unmute | Unmutes the video. | - | - |
| setMirror | Sets whether to mirror the video. | true \| false | - |
| getVideoElement | Gets the built-in video object. | - | HTMLVideoElement |
| destroy | Terminates the player. | - | - |
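The table above can be read as the following object shape. The stub here is hypothetical, written only to make the API surface concrete; the real player object comes from `sdk.initLocalPlayer`:

```javascript
// Hypothetical stub mirroring the local player API surface listed above
// (the real player is returned by sdk.initLocalPlayer('my-dom-id')).
function createStubPlayer() {
  const state = { playing: false, muted: true, mirrored: false }
  return {
    state,
    async play() { state.playing = true },    // starts playback
    pause() { state.playing = false },        // pauses; the stream keeps running
    stop() { state.playing = false },         // stops the stream
    mute() { state.muted = true },
    unmute() { state.muted = false },         // note: unmuting may cause echoes
    setMirror(mirrored) { state.mirrored = mirrored },
    destroy() { state.playing = false }
  }
}

// Typical selfie-view setup: start playback, mirror the preview, stay muted.
const player = createStubPlayer()
player.play()
player.setMirror(true)
```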
The playback status of the `LocalPlayer` is bound to the camera:

- If you call `camera.muteVideo` to disable video, playback will not start even if you call `play`.
- If you call `camera.unmuteVideo` to enable video, the player will play the video automatically.

Therefore, if you specify `camera`, you don't need to control the local player manually.
Call `getOutput` to get the output stream. After getting the `MediaStream`, you can use a live streaming SDK (for example, the TRTC web SDK or the LEB web SDK) to publish the stream.

```javascript
const output = await sdk.getOutput()
```
The return value of `getOutput` is a `MediaStream`. Note that `getOutput` is an async API: the output is returned only after the SDK has been initialized and has generated a stream. You can pass an `FPS` parameter to `getOutput` to specify the output frame rate (for example, 15). If you do not pass this parameter, the original frame rate is kept.

You can use the `sdk.camera` instance to enable and disable the camera or perform other camera operations.

```javascript
const output = await sdk.getOutput()
// Your business logic
// ...
// `sdk.camera` will have been initialized after `getOutput` resolves. You can use it directly.
const cameraApi = sdk.camera
// Get the device list
const devices = await cameraApi.getDevices()
console.log(devices)
// Disable the video track
// cameraApi.muteVideo()
// Enable the video track
// cameraApi.unmuteVideo()
// Switch to a different camera by specifying its device ID (if there are multiple cameras)
// await cameraApi.switchDevice('video', 'your-device-id')
```
If you need the `sdk.camera` instance as soon as possible, you can get it in the `cameraReady` callback.

```javascript
// Initialization parameters
// ...
const sdk = new ArSdk(config)
let cameraApi
sdk.on('cameraReady', async () => {
  cameraApi = sdk.camera
  // Get the device list
  const devices = await cameraApi.getDevices()
  console.log(devices)
  // Disable the video track
  // cameraApi.muteVideo()
  // Enable the video track
  // cameraApi.unmuteVideo()
  // Switch to a different camera by specifying its device ID (if there are multiple cameras)
  // await cameraApi.switchDevice('video', 'your-device-id')
})
```
Use `camera` to control the built-in camera:

| API | Description | Request Parameter | Return Value |
| --- | --- | --- | --- |
| getDevices | Gets all devices. | - | Promise<Array<MediaDeviceInfo>> |
| switchDevice | Switches the device. | type: string, deviceId: string | Promise |
| muteAudio | Mutes the current stream. | - | - |
| unmuteAudio | Unmutes the current stream. | - | - |
| muteVideo | Disables the video track of the camera stream. This does not stop the stream. | - | - |
| unmuteVideo | Enables the video track of the camera stream. | - | - |
| stopVideo | Disables the camera. This stops the video stream, but the audio stream is not affected. | - | - |
| restartVideo | Enables the camera. This API can only be called after stopVideo. | - | Promise |
| stop | Disables the current camera and audio device. | - | - |
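The device list returned by `getDevices` uses the browser's standard `MediaDeviceInfo` shape, so switching cameras comes down to picking a `deviceId` of kind `videoinput` and passing it to `switchDevice`. The helper below is hypothetical (not part of the SDK) and is shown with a stubbed device list:

```javascript
// Hypothetical helper (not an SDK API): choose the next video input
// from the MediaDeviceInfo list returned by cameraApi.getDevices().
function pickNextCameraId(devices, currentId) {
  const cams = devices.filter(d => d.kind === 'videoinput')
  if (cams.length === 0) return null
  const idx = cams.findIndex(d => d.deviceId === currentId)
  // Wrap around to the first camera after the last one
  return cams[(idx + 1) % cams.length].deviceId
}

// Example with a stubbed device list:
const devices = [
  { kind: 'audioinput', deviceId: 'mic-1' },
  { kind: 'videoinput', deviceId: 'cam-front' },
  { kind: 'videoinput', deviceId: 'cam-back' }
]
console.log(pickNextCameraId(devices, 'cam-front')) // → 'cam-back'
// With the real SDK:
// await cameraApi.switchDevice('video', pickNextCameraId(await cameraApi.getDevices(), currentId))
```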