Video Splitting (Long Videos to Short Videos) Tutorial

Last updated: 2024-11-29 15:40:21
    Video splitting segments a complete long video into shorter clips. For example, it can split a complete news broadcast into multiple videos, one per news story. It delivers particularly strong splitting quality for news and sports videos, facilitates secondary creation, and saves labor and hardware costs. Video splitting supports processing both offline videos and live streams; for details, see Processing Offline Videos and Processing Live Streams below.

    Processing Offline Videos

    Integration Method 1: Initiating a Task via API

    Call the Media Processing Service (MPS) ProcessMedia API, include an AiAnalysisTask, set its Definition to 27 (the preset video splitting template), and pass extended parameters through ExtendedParameter to enable specific capabilities. For details, see Specifying a Splitting Scenario below.
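    A minimal sketch of initiating this task with the Tencent Cloud Python SDK (tencentcloud-sdk-python) follows. The credentials, region, bucket, and object key are placeholders; substitute your own values. Note that ExtendedParameter is passed as a JSON string.

    import json

    from tencentcloud.common import credential
    from tencentcloud.mps.v20190612 import mps_client, models

    # Placeholder credentials and region; substitute your own values.
    cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
    client = mps_client.MpsClient(cred, "ap-guangzhou")

    req = models.ProcessMediaRequest()
    req.from_json_string(json.dumps({
        "InputInfo": {
            "Type": "COS",
            "CosInputInfo": {
                "Bucket": "examplebucket-1250000000",  # placeholder bucket
                "Region": "ap-guangzhou",
                "Object": "/input/long-video.mp4"      # placeholder object key
            }
        },
        "AiAnalysisTask": {
            "Definition": 27,  # preset video splitting template
            # ExtendedParameter is a JSON string; this example selects NLP splitting.
            "ExtendedParameter": json.dumps({"strip": {"type": "content"}})
        }
    }))

    resp = client.ProcessMedia(req)
    print(resp.TaskId)  # keep the TaskId to query results later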
    
    
    

    1. Specifying a Splitting Scenario

    Note:
    Several preset ExtendedParameter values are provided below.
    To ensure good splitting results, it is recommended that you contact us; we can confirm the specific parameters based on your video scenarios and provide ongoing optimization support.

    Scenario 1: News Splitting

    Description
    News videos are split by recognizing visual cues such as the anchor desk and "breaking news" banners. The output includes the split video segments, the cover image of each segment, and the start and end times of each segment.
    Parameter
    If ExtendedParameter is not specified, the news splitting scenario is used by default.
    Effect Example
    The original news (the 30-minute video on the left) has been split into multiple short videos of a few minutes each (the videos on the right).
    
    
    

    Scenario 2: Natural Language Processing (NLP) Splitting

    Description
    The video's speech and on-screen text are recognized and converted to text, and the video is intelligently split into segments based on that text. The output includes the split video segments, the cover image of each segment, and the start and end times, title, and summary of each segment.
    Parameter
    Enter the following parameters in ExtendedParameter. For the specific parameter values, it is recommended to confirm with us offline.
    {"strip":{"type":"content"}}
    
    /*
    If you need to customize the summary-related parameters, refer to the following format for inputting parameters:
    {"strip":{"type":"content"},"des":{"need_ocr":true,"only_segment":0,"text_requirement":"Title within 20 characters, summary within 40 characters","dstlang": "en"}}
    */
    Refer to the list below for the optional parameters in the "des" field:

    need_ocr (optional, bool): Whether to use Optical Character Recognition (OCR) to assist segmentation; true means enabled. The default value is false. If disabled, the system only recognizes the video's speech content to assist segmentation; if enabled, it also recognizes the text on the video image.

    only_segment (optional, int): Whether to only segment without generating a title and summary. The default value is 0.
    1: Only segment without generating a title and summary.
    0: Segment and generate a title and summary.

    text_requirement (optional, string): Requirements for generating the title and summary, for example, "the title is within 20 characters and the summary is within 40 characters".

    dstlang (optional, string): Language of the title and summary. The default value is "zh".
    "zh": Chinese.
    "en": English.
    Effect Example
    The original speech video (the video on the left) has been split into multiple short videos, each with a cover image, title, and summary.
    
    
    

    Scenario 3: Target Splitting

    Description
    Frames in which specified targets, such as objects and people, appear in the video are recognized, and the corresponding segments are extracted. For example, for surveillance videos, only the segments in which people appear are extracted. The output includes the split video segments, the cover image of each segment, and the start and end times of each segment.
    Parameter
    Enter the following parameters in ExtendedParameter. For the specific targets to be detected, it is recommended to confirm with us offline.
    {"strip":{"type":"object","objects":["person"], "object_set":[91020415]}}
    Effect Example
    Customer case: Segments with people appearing are extracted from surveillance videos to reduce storage costs.
    
    
    

    2. API Explorer Quick Verification

    You can perform quick verification through API Explorer. After filling in relevant parameter information on the page, you can initiate an online API call.
    Note: API Explorer will automatically convert the format. You only need to enter the corresponding ExtendedParameter in JSON format without converting it to a string.
    
    
    

    3. Querying Task Results

    Task callbacks: When initiating an MPS task using ProcessMedia, you can set callback information through the TaskNotifyConfig parameter. After the task is completed, the task results will be called back through the configured callback information. You can parse the event notification results through ParseNotification.
    Query via the DescribeTaskDetail API (see the sketch at the end of this section):
    For tasks started with the API and a template, as described in Integration Method 1 above, use the TaskId returned by ProcessMedia (for example, 24000022-WorkflowTask-b20a8exxxxxxx1tt110253) and parse AiAnalysisResultSet in WorkflowTask.
    For tasks started via ProcessMedia without a template but with a ScheduleId (the section on automatically triggering a task below explains how to create a schedule), the returned TaskId will include "ScheduleTask" (for example, 24000022-ScheduleTask-774f101xxxxxxx1tt110253). In this case, use the TaskId to parse ActivityResultSet in ScheduleTask.
    For tasks initiated from the console, as described in Integration Method 2 below, go to Tasks -> VOD for the task ID and results. For task results that cannot currently be previewed in the console, such as the titles and summaries of segmented outputs, you can parse the ActivityResultSet in ScheduleTask returned by the DescribeTaskDetail API.
    Query via console: Log in to the console and go to Tasks -> VOD, where the newly initiated tasks are displayed in the task list.
    
    
    
    When the subtask status is "Successful", you can go to COS Bucket > Output Bucket, find your output directory, and locate the files starting with strip- in the directory, which are the output files of video splitting (segmented videos and cover images).
    Note:
    Text content such as titles and summaries is not output to the bucket; obtain it through event callbacks or API queries.
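
    A minimal sketch of querying results with the DescribeTaskDetail API follows, assuming the Python SDK and a placeholder TaskId; the two branches correspond to the WorkflowTask and ScheduleTask cases described above.

    import json

    from tencentcloud.common import credential
    from tencentcloud.mps.v20190612 import mps_client, models

    cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
    client = mps_client.MpsClient(cred, "ap-guangzhou")

    req = models.DescribeTaskDetailRequest()
    req.TaskId = "24000022-WorkflowTask-b20a8exxxxxxx1tt110253"  # placeholder

    resp = client.DescribeTaskDetail(req)
    detail = json.loads(resp.to_json_string())

    # Template-based tasks return results in WorkflowTask.AiAnalysisResultSet;
    # schedule-based tasks return them in ScheduleTask.ActivityResultSet.
    if detail.get("TaskType") == "WorkflowTask":
        results = detail["WorkflowTask"].get("AiAnalysisResultSet", [])
    else:
        results = detail["ScheduleTask"].get("ActivityResultSet", [])

    print(json.dumps(results, indent=2, ensure_ascii=False))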
    
    
    

    Integration Method 2: Initiating a Task from Console (Zero Code)

    Note:
    When a video splitting task is initiated from the console, the default scenario is news splitting. For other splitting scenarios, use the API to specify a splitting scenario through parameters. For details, refer to the above Integration Method 1: Initiating a Task via API.

    1. Creating a Task

    1.1 Log in to the MPS console and click Create Task > Create VOD Processing Task.
    
    
    
    1.2 Specify an input video file. Currently, the video splitting feature supports two input sources: Tencent Cloud Object Storage (COS) and URL download addresses. AWS S3 is currently not supported.
    1.3 In the "Process Input File" step, add the Intelligent Analysis node.
    
    
    
    In the intelligent analysis settings drawer that pops up, select the preset video splitting template (template ID: 27). If you need to enable the video splitting feature for a custom intelligent analysis template, you can contact us and provide the template ID, and Tencent Cloud MPS developers will configure and enable the video splitting feature for you.
    Note:
    The preset video splitting template (template ID: 27) defaults to the news splitting scenario. For other splitting scenarios, use the API to specify a splitting scenario through parameters. For details, refer to the above Integration Method 1: Initiating a Task via API.
    
    1.4 After specifying the save path for the output video, click Create to initiate the task.
    
    
    

    2. Querying Task Results

    Refer to the above Querying Task Results.

    3. Automatically Triggering a Task (Optional Capability)

    If you want video splitting to be performed automatically with the preset parameters whenever a new video file is uploaded to the COS bucket, you can:
    3.1 When creating a task, click Save The Orchestration, and configure parameters such as Trigger Bucket and Trigger Directory in the pop-up window.
    
    
    
    3.2 Go to the VOD Orchestration list, find the new orchestration, and turn on its Enable switch. Subsequently, any new video file added to the trigger directory will automatically initiate a task according to the orchestration's preset process and parameters, and the processed files will be saved to the output path configured in the orchestration.
    Note:
    It takes 3-5 minutes for the orchestration to take effect after being enabled.
    
    
    

    Processing Live Streams

    Integration Method: Initiating a Task via API

    Call the ProcessLiveStream API, include an AiAnalysisTask, and set AiAnalysisTaskInput - Definition to 27 (the preset video splitting template). Pass extended parameters through ExtendedParameter to enable specific capabilities.
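
    A minimal sketch with the Python SDK follows; the stream URL, callback URL, bucket, and region are placeholders.

    import json

    from tencentcloud.common import credential
    from tencentcloud.mps.v20190612 import mps_client, models

    cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
    client = mps_client.MpsClient(cred, "ap-guangzhou")

    req = models.ProcessLiveStreamRequest()
    req.from_json_string(json.dumps({
        "Url": "rtmp://example.com/live/stream",  # placeholder live stream URL
        "TaskNotifyConfig": {
            "NotifyType": "URL",
            "NotifyUrl": "https://example.com/mps-callback"  # placeholder callback
        },
        "OutputStorage": {
            "Type": "COS",
            "CosOutputStorage": {
                "Bucket": "examplebucket-1250000000",  # placeholder bucket
                "Region": "ap-guangzhou"
            }
        },
        "OutputDir": "/output/",
        "AiAnalysisTask": {
            "Definition": 27,  # preset video splitting template
            "ExtendedParameter": json.dumps({"strip": {"type": "content"}})
        }
    }))

    resp = client.ProcessLiveStream(req)
    print(resp.TaskId)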

    1. Specifying a Splitting Scenario

    Live streams currently support news splitting and NLP splitting scenarios, and do not support the target splitting scenario. For details, see the above Specifying a Splitting Scenario.

    2. Querying Task Results

    Receive task callbacks: When initiating an MPS task using ProcessLiveStream, set callback information through the TaskNotifyConfig parameter. During live stream processing, the task results will be called back in real time through the configured callback information. You can refer to ParseLiveStreamProcessNotification to parse the AiAnalysisResultInfo field to obtain the task results.
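
    A minimal sketch of parsing a callback body with ParseLiveStreamProcessNotification follows, assuming raw_body is a hypothetical variable holding the JSON string your callback endpoint received.

    import json

    from tencentcloud.common import credential
    from tencentcloud.mps.v20190612 import mps_client, models

    cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
    client = mps_client.MpsClient(cred, "ap-guangzhou")

    # raw_body: the raw JSON string received by your callback endpoint (placeholder).
    raw_body = '{"...": "raw callback JSON"}'

    req = models.ParseLiveStreamProcessNotificationRequest()
    req.Content = raw_body

    resp = client.ParseLiveStreamProcessNotification(req)
    parsed = json.loads(resp.to_json_string())

    # For splitting tasks, segment results are carried in AiAnalysisResultInfo.
    print(json.dumps(parsed.get("AiAnalysisResultInfo"), indent=2, ensure_ascii=False))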
    
    
    