Integration Method 1: Integrating Through API
Note:
If you need to enable the smart erase feature for a custom intelligent analysis template, you can contact us and provide the template ID, and Tencent Cloud MPS developers will configure and enable the smart erase feature for you.
1. Specifying an Erasure Scenario
The ExtendedParameter parameter selects the erasure scenario capability; each scenario takes different parameter values, as listed below.
Note:
Currently, each request can select only one scenario capability; multiple selections are not supported. If you need to combine multiple capabilities, contact us for evaluation and support.
Scenario 1: Watermark Removal
If ExtendedParameter is not specified, the watermark removal scenario is used by default.
Currently, we support the recognition and removal of over 10 types of watermarks. For watermarks outside our supported range, we also offer customized training services; however, this will incur additional model training fees.
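For illustration, here is a minimal sketch of initiating a smart erase task with the tencentcloud-sdk-python SDK's ProcessMedia call; the credentials, region, and input URL are placeholders, and template ID 24 is the preset smart erase template described in the console section below:

# pip install tencentcloud-sdk-python
import json
from tencentcloud.common import credential
from tencentcloud.mps.v20190612 import mps_client, models

cred = credential.Credential("SECRET_ID", "SECRET_KEY")   # placeholder API keys
client = mps_client.MpsClient(cred, "ap-guangzhou")       # example region

req = models.ProcessMediaRequest()
req.from_json_string(json.dumps({
    "InputInfo": {                       # source video: COS object or URL
        "Type": "URL",
        "UrlInputInfo": {"Url": "https://example.com/input.mp4"}
    },
    "AiAnalysisTask": {
        "Definition": 24                 # preset smart erase template ID
        # "ExtendedParameter" omitted -> default watermark removal scenario
    }
}))
resp = client.ProcessMedia(req)
print(resp.TaskId)                       # keep the TaskId to query results later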
Scenario 2: Subtitle Removal + Voice Subtitle & Translation + Text To Speech (TTS) & Replacement
Description
In this scenario, original subtitle removal, subtitle and voice translation, and multilingual subtitle and voice replacement are completed in one pass, facilitating video distribution overseas. It is widely applicable to short drama platforms, short video platforms, cross-border e-commerce, and independent media studios. With this capability, the input video is processed through the following steps in order:
1. Recognize and remove the subtitles in the original video image;
2. Recognize the audio in the original video, generate subtitle files, translate the subtitles, and then:
2.1 Render the translated subtitles to the original video image;
2.2 Use TTS to replace the original video's audio track.
3. Generate a new video.
Note:
This scenario is a combination of multiple features. If you only need to use a specific feature, refer to other scenarios below.
Parameter
{\\"delogo\\":{\\"cluster_id\\":\\"gpu_pp\\",\\"CustomerAppId\\":\\"audio_subtitle_replace\\"}}
{\\"delogo\\":{\\"cluster_id\\":\\"gpu_pp\\",\\"CustomerAppId\\":\\"audio_subtitle_replace\\",\\"subtitle_param\\":{\\"translate_dst_language\\":\\"ja\\"}}}
Effect Example
Scenario 3: Specified Area Erasure
Description
The area specified by parameters is erased and a new video is generated.
Parameter
{\\"delogo\\":{\\"custom_objs\\":{\\"type\\":0,\\"time_objs\\":[{\\"objs\\":[{\\"type\\":1,\\"value\\":\\"customobjs\\",\\"score\\":99,\\"rect\\":{\\"lt_x\\":120,\\"lt_y\\":75,\\"rb_x\\":395,\\"rb_y\\":158}}]}]}}}
Effect Example
Scenario 4: Subtitle Extraction and Translation
Description
The subtitles in the video image are recognized, analyzed, and proofread, then translated into the specified language; both the original and translated subtitle files are generated.
Parameter
{\\"delogo\\":{\\"CustomerAppId\\":\\"ocr_zimu\\",\\"subtitle_param\\":{\\"translate_dst_language\\":\\"\\"}}}
Effect Example
Scenario 5: Subtitle Removal
Description
The text at the lower middle part of the video image is automatically recognized and removed using AI models to generate a new video. This feature offers two editions:
Standard edition: Generally recommended, as its subtitle removal effect is better.
STTN edition: Provides the same feature as the standard edition and runs slightly faster, but its removal effect is not as good.
Standard Edition Parameter
{\\"delogo\\":{\\"cluster_id\\":\\"gpu_pp\\",\\"CustomerAppId\\":\\"zimu\\"}}
STTN Edition Parameter
{\\"delogo\\":{\\"cluster_id\\":\\"gpu_pp\\",\\"CustomerAppId\\":\\"zimu-sttn\\"}}
Effect Example
Scenario 6: Face Mosaic
Description
After faces in the video image are recognized, mosaic processing is applied to the faces and a new video is generated.
Parameter
{\\"delogo\\":{\\"CustomerAppId\\":\\"rennian_msk\\"}}
Effect Example
Scenario 7: Face Blur
Description
After faces in the video image are recognized, blur processing is applied to the faces and a new video is generated.
Parameter
{\\"delogo\\":{\\"CustomerAppId\\":\\"rennian\\"}}
Effect Example
Scenario 8: Face and License Plate Mosaic
Description
After faces and license plates in the video image are recognized, mosaic processing is applied to both the faces and license plates and a new video is generated.
Parameter
{\\"delogo\\":{\\"CustomerAppId\\":\\"rennian_chepai_msk\\"}}
{\\"delogo\\":{\\"CustomerAppId\\":\\"rennian_chepai_msk_v2\\"}}
Effect Example
2. Other API Parameters
Specifying the Output File Name
You can specify the output file name through the output_patten field in the extended parameter. Placeholders are written as {}; task_type and session_id are supported. The default output name is {task_type}-{session_id}.
Sample:
{
"delogo": {
"CustomerAppId": "ocr_zimu",
"subtitle_param": {
"translate_dst_language": ""
},
"output_patten": "custom-{task_type}-{session_id}"
}
}
3. API Explorer Quick Verification
You can perform quick verification through API Explorer. After filling in the relevant parameters on the page, you can initiate an online API call. Note: API Explorer converts the format automatically, so enter ExtendedParameter as JSON directly, without converting it to a string.
4. Querying Task Results
Task callbacks: When initiating an MPS task via ProcessMedia, you can set callback information through the TaskNotifyConfig parameter. After the task is completed, the task results are sent through the configured callback, and you can parse the event notification with ParseNotification.
Query via API: Use the TaskId returned by ProcessMedia to call the DescribeTaskDetail API and query the task processing results. Parse the AiAnalysisResultSet field under WorkflowTask.
Query via console: Log in to the console and go to VOD Processing Tasks; newly initiated tasks appear in the task list. When the subtask status is "Successful", go to COS Bucket > Output Bucket, open your output directory, and locate the files starting with delogo-; these are the output files processed by smart erase.
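As a sketch of the API query path, assuming the same SDK as in the earlier sketch: poll DescribeTaskDetail with the TaskId until the task finishes, then read AiAnalysisResultSet under WorkflowTask:

import time
from tencentcloud.common import credential
from tencentcloud.mps.v20190612 import mps_client, models

cred = credential.Credential("SECRET_ID", "SECRET_KEY")
client = mps_client.MpsClient(cred, "ap-guangzhou")

req = models.DescribeTaskDetailRequest()
req.TaskId = "your-task-id"              # TaskId returned by ProcessMedia

while True:
    resp = client.DescribeTaskDetail(req)
    if resp.Status == "FINISH":          # WAITING / PROCESSING / FINISH
        for result in (resp.WorkflowTask.AiAnalysisResultSet or []):
            print(result.Type)           # inspect each analysis result
        break
    time.sleep(10)                       # task is still queued or running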
Integration Method 2: Initiating a Task from Console (Zero Code)
Note:
When an erasure task is initiated from the console, the default scenario is watermark removal. For other erasure scenarios, use the API to specify the scenario through parameters; for details, see Integration Method 1: Integrating Through API above.
1. Creating a Task
1.1 Log in to the MPS console and click Create Task > Create VOD Processing Task.
1.2 Specify an input video file. Currently, the smart erase feature supports two input sources: Tencent Cloud Object Storage (COS) and URL download addresses. AWS S3 is not supported at this time.
1.3 In the "Process Input File" step, add the Intelligent Analysis node.
In the intelligent analysis settings drawer that pops up, select the preset smart erase template (template ID: 24). If you need to enable the smart erase feature for a custom intelligent analysis template, you can contact us and provide the template ID, and Tencent Cloud MPS developers will configure and enable the smart erase feature for you.
Note:
The preset template (template ID: 24) defaults to the watermark removal scenario. For other erasure scenarios, refer to Integration Method 1: Integrating Through API above.
1.4 After specifying the save path for the output video after erasure, click Create to initiate the task.
2. Querying Task Results
Refer to Querying Task Results in Integration Method 1 above.
3. Automatically Triggering a Task (Optional Capability)
If you want smart erase to run automatically with preset parameters after a video file is uploaded to the COS bucket, you can:
3.1 When creating a task, click Save The Orchestration, and configure parameters such as Trigger Bucket and Trigger Directory in the pop-up window.
3.2 Go to the VOD Orchestration list, find the new orchestration, and turn on the switch at Enable. Subsequently, any new video files added to the trigger directory will automatically initiate tasks according to the preset process and parameters of the orchestration, and the processed video files will be saved to the output path configured in the orchestration.
Note:
It takes 3-5 minutes for the orchestration to take effect after being enabled.
FAQs
What types of watermark removal are supported?
This service uses AI technology to identify and remove watermarks. Currently, recognition and erasure of over a dozen types of watermarks are supported. For watermarks not included in our coverage, we offer customized training services at an additional model training cost.
Are videos without watermarks still billed?
Yes. Even if a video contains no watermark, the normal computational analysis is still performed, consuming computing resources.
Is live streaming supported?
Currently, the external interface supports only VOD files. For live streaming processing needs, please contact us.