Last updated: 2024-11-01 16:26:14
    Note:
    You need to bind a DLC engine. Currently, three engines are supported: Spark SQL, Spark Job, and Presto. For engine kernels, see DLC Engine Kernel Version.
    1. The current user must have the corresponding DLC computing resources and database/table permissions.
    2. The corresponding database and tables must already have been created in DLC.

    Feature Overview

    Submit DLC SQL tasks for execution on the WeData workflow scheduling platform. When the DLC data source type is selected, Advanced Settings are provided for configuring Presto and SparkSQL parameters.
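    For reference, Advanced Settings parameters are engine session properties. The statements below are a minimal sketch: spark.sql.shuffle.partitions is a standard SparkSQL property and query_max_execution_time is a standard Presto session property, but which parameters your DLC kernel version accepts should be verified against DLC Engine Kernel Version.

    -- SparkSQL: tune the number of shuffle partitions (illustrative value)
    SET spark.sql.shuffle.partitions = 64;

    -- Presto: cap query execution time (illustrative value)
    SET SESSION query_max_execution_time = '30m';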
    When using the Spark Job engine, you can configure the job's resource specifications and parameters. The resource configuration must not exceed the limits of the computing resource itself.
    
    Configuration instructions:
    Resource Configuration Method
    Two methods are available: Cluster Default Configuration and Custom Configuration.
    1. Cluster Default Configuration: use the compute resource cluster configuration of the current task.
    2. Custom: configure the Executor and Driver resources yourself.
    Executor Resources
    Fill in the required resource size; 1 CU is roughly equivalent to 1 CPU core and 4 GB of memory (see the sizing example after this list).
    1. Small: one compute unit (1 CU)
    2. Medium: two compute units (2 CU)
    3. Large: four compute units (4 CU)
    4. Xlarge: eight compute units (8 CU)
    Number of Executors
    An Executor is a compute node (instance) responsible for executing tasks and handling compute work; each Executor uses the resource size configured above.
    Driver Resources
    Fill in the required Driver resource size; 1 CU is roughly equivalent to 1 CPU core and 4 GB of memory. The same Small (1 CU), Medium (2 CU), Large (4 CU), and Xlarge (8 CU) specifications apply.
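    As a sizing sketch based on the 1 CU ≈ 1 CPU core + 4 GB approximation above: a task configured with four Medium (2 CU) Executors and one Small (1 CU) Driver consumes roughly 4 × 2 CU + 1 CU = 9 CU, that is, about 9 CPU cores and 36 GB of memory in total. Keep this total within the limits of the bound computing resource.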

    Sample Code

    -- Create a User Information Table
    create table if not exists wedata_demo_db.user_info (
    user_id string COMMENT 'User ID',
    user_name string COMMENT 'Username',
    user_age int COMMENT 'Age',
    city string COMMENT 'City'
    ) COMMENT 'User Information Table';
    
    -- Insert data into the User Information Table
    insert into wedata_demo_db.user_info values ('001', 'Zhang San', 28, 'Beijing');
    insert into wedata_demo_db.user_info values ('002', 'Li Si', 35, 'Shanghai');
    insert into wedata_demo_db.user_info values ('003', 'Wang Wu', 22, 'Shenzhen');
    insert into wedata_demo_db.user_info values ('004', 'Zhao Liu', 45, 'Guangzhou');
    insert into wedata_demo_db.user_info values ('005', 'Xiao Ming', 20, 'Beijing');
    insert into wedata_demo_db.user_info values ('006', 'Xiao Hong', 30, 'Shanghai');
    insert into wedata_demo_db.user_info values ('007', 'Xiao Gang', 25, 'Shenzhen');
    insert into wedata_demo_db.user_info values ('008', 'Xiao Li', 40, 'Guangzhou');
    insert into wedata_demo_db.user_info values ('009', 'Xiao Zhang', 23, 'Beijing');
    insert into wedata_demo_db.user_info values ('010', 'Xiao Wang', 50, 'Shanghai');
    
    select * from wedata_demo_db.user_info;
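    Building on the table above, a scheduled task would typically run an aggregation rather than a full scan. The query below is a minimal illustrative example over the same wedata_demo_db.user_info table; it runs on both the SparkSQL and Presto engines.

    -- Count users and average age per city (illustrative)
    select city, count(*) as user_cnt, avg(user_age) as avg_age
    from wedata_demo_db.user_info
    group by city
    order by user_cnt desc;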
    Note:
    When using Iceberg External Tables, the SQL syntax differs from that of Iceberg Native Tables. For details, see Differences Between DLC Iceberg External Tables and Native Tables SQL Syntax.

    Presto Engine Sample Code

    Applicable Table Types: Native Iceberg Table, External Iceberg Table.
    CREATE TABLE `cpt_demo`.`dempts` (
    id bigint COMMENT 'id number',
    num int,
    eno float,
    dno double,
    cno decimal(9,3),
    flag boolean,
    data string,
    ts_year timestamp,
    date_month date,
    bno binary,
    point struct<x: double, y: double>,
    points array<struct<x: double, y: double>>,
    pointmaps map<struct<x: int>, struct<a: int>>
    )
    COMMENT 'table documentation'
    PARTITIONED BY (bucket(16, id), years(ts_year), months(date_month), identity(bno), bucket(3, num), truncate(10, data));
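    To read the nested columns defined above, Presto addresses row fields with dot notation. The query below is a minimal sketch; the column names come from the DDL above, and the filter value is illustrative.

    -- Access a field of the nested point column (illustrative)
    SELECT id, point.x AS point_x
    FROM `cpt_demo`.`dempts`
    WHERE flag = true;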

    SparkSQL Engine Sample Code

    Applicable Table Types: Native Iceberg Table, External Iceberg Table.
    CREATE TABLE `cpt_demo`.`dempts` (
    id bigint COMMENT 'id number',
    num int,
    eno float,
    dno double,
    cno decimal(9,3),
    flag boolean,
    data string,
    ts_year timestamp,
    date_month date,
    bno binary,
    point struct<x: double, y: double>,
    points array<struct<x: double, y: double>>,
    pointmaps map<struct<x: int>, struct<a: int>>
    )
    COMMENT 'table documentation'
    PARTITIONED BY (bucket(16, id), years(ts_year), months(date_month), identity(bno), bucket(3, num), truncate(10, data));
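    Writing rows into the nested columns above requires SparkSQL's complex-type constructors (named_struct, array, and map are standard Spark SQL functions). The insert below is a minimal sketch with illustrative values only.

    -- Insert one row with illustrative values for every column (SparkSQL)
    INSERT INTO `cpt_demo`.`dempts` VALUES (
    1L,                                             -- id (bigint)
    10,                                             -- num
    CAST(1.5 AS float),                             -- eno
    2.5D,                                           -- dno
    CAST(123.456 AS decimal(9,3)),                  -- cno
    true,                                           -- flag
    'sample',                                       -- data
    timestamp '2024-01-01 00:00:00',                -- ts_year
    date '2024-01-01',                              -- date_month
    CAST('bin' AS binary),                          -- bno
    named_struct('x', 1.0D, 'y', 2.0D),             -- point
    array(named_struct('x', 1.0D, 'y', 2.0D)),      -- points
    map(named_struct('x', 1), named_struct('a', 2)) -- pointmaps
    );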

    SparkSQL Job Engine Sample Code

    Applicable Table Types: Native Iceberg Table, External Iceberg Table.
    CREATE TABLE `cpt_demo`.`dempts` (
    id bigint COMMENT 'id number',
    num int,
    eno float,
    dno double,
    cno decimal(9,3),
    flag boolean,
    data string,
    ts_year timestamp,
    date_month date,
    bno binary,
    point struct<x: double, y: double>,
    points array<struct<x: double, y: double>>,
    pointmaps map<struct<x: int>, struct<a: int>>
    )
    COMMENT 'table documentation'
    PARTITIONED BY (id, ts_year, date_month);
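    Because the Spark Job table above is partitioned by the identity columns (id, ts_year, date_month), filtering on a partition column lets the engine prune partitions at scan time. The query below is an illustrative sketch.

    -- Filter on a partition column so only matching partitions are scanned (illustrative)
    SELECT id, num, data
    FROM `cpt_demo`.`dempts`
    WHERE date_month = date '2024-01-01';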
    Note:
    For more DLC syntax, please refer to DLC SQL Syntax Overview.
    