CreateSparkApp

Last updated: 2024-08-08 15:32:09

    1. API Description

    Domain name for API request: dlc.tencentcloudapi.com.

    This API is used to create a Spark job.

    A maximum of 20 requests can be initiated per second for this API.

    We recommend you use API Explorer to try this API. API Explorer provides a range of capabilities, including online calls, signature authentication, SDK code generation, and quick API search, and it lets you view the request, response, and auto-generated examples.

    2. Input Parameters

    The following request parameter list only provides API request parameters and some common parameters. For the complete common parameter list, see Common Request Parameters.

    Parameter Name Required Type Description
    Action Yes String Common Params. The value used for this API: CreateSparkApp.
    Version Yes String Common Params. The value used for this API: 2021-01-25.
    Region Yes String Common Params. For more information, please see the list of regions supported by the product.
    AppName Yes String The Spark job name.
    AppType Yes Integer The Spark job type. Valid values: 1 for Spark JAR job and 2 for Spark streaming job.
    DataEngine Yes String The data engine executing the Spark job.
    AppFile Yes String The path of the Spark job package.
    RoleArn Yes Integer The CAM role arn that carries the data access policy. In the console, it can be found under Data Job -> Job Configuration; via SDK, it can be obtained through the DescribeUserRoles API.
    AppDriverSize Yes String The driver size. Valid values: small (default, 1 CU), medium (2 CUs), large (4 CUs), and xlarge (8 CUs).
    AppExecutorSize Yes String The executor size. Valid values: small (default, 1 CU), medium (2 CUs), large (4 CUs), and xlarge (8 CUs).
    AppExecutorNums Yes Integer The number of Spark job executors.
    Eni No String This field has been deprecated. Use the DataSource field instead.
    IsLocal No String The source of the Spark job package. Valid values: cos for COS and lakefs for the local system (available in the console only; not supported for direct API calls).
    MainClass No String The main class of the Spark job.
    AppConf No String Spark configurations, separated by line breaks.
    IsLocalJars No String The source of the dependency JAR packages of the Spark job. Valid values: cos for COS and lakefs for the local system (available in the console only; not supported for direct API calls).
    AppJars No String The dependency JAR packages of the Spark JAR job, separated by commas.
    IsLocalFiles No String The source of the dependency files of the Spark job. Valid values: cos for COS and lakefs for the local system (available in the console only; not supported for direct API calls).
    AppFiles No String The dependency files of the Spark job (files other than JAR and ZIP packages), separated by commas.
    CmdArgs No String The input parameters of the Spark job, separated by commas.
    MaxRetries No Integer The maximum number of retries, valid for Spark streaming tasks only.
    DataSource No String The data source name.
    IsLocalPythonFiles No String The source of the PySpark dependencies. Valid values: cos for COS and lakefs for the local system (available in the console only; not supported for direct API calls).
    AppPythonFiles No String The PySpark dependencies (Python files), separated by commas, with .py, .zip, and .egg formats supported.
    IsLocalArchives No String The source of the dependency archives of the Spark job. Valid values: cos for COS and lakefs for the local system (available in the console only; not supported for direct API calls).
    AppArchives No String The dependency archives of the Spark job, separated by commas, with .tar.gz, .tgz, and .tar formats supported.
    SparkImage No String The Spark image version.
    SparkImageVersion No String The Spark image version name.
    AppExecutorMaxNumbers No Integer The specified executor count (max), which defaults to 1. This parameter applies if the "Dynamic" mode is selected. If the "Dynamic" mode is not selected, the executor count is equal to AppExecutorNums.
    SessionId No String The ID of the associated Data Lake Compute query script.
    IsInherit No Integer Whether to inherit the task resource configuration from the cluster template. Valid values: 0 (default) for no and 1 for yes.
    IsSessionStarted No Boolean Whether to run the task with the session SQLs. Valid values: false for no and true for yes.
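
    As an illustration of the list formats above (not part of the official parameter reference): AppConf takes newline-separated Spark configuration entries, while AppJars, AppFiles, CmdArgs, and the other list parameters take comma-separated values. The bucket and settings below are hypothetical placeholders; per the error codes in section 6, file paths are expected to use the cosn:// or lakefs:// scheme.

    AppConf:
    spark.sql.shuffle.partitions=200
    spark.driver.maxResultSize=2g

    AppJars: cosn://examplebucket/deps/dep1.jar,cosn://examplebucket/deps/dep2.jar
    CmdArgs: --input,cosn://examplebucket/data/in,--output,cosn://examplebucket/data/out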

    3. Output Parameters

    Parameter Name Type Description
    SparkAppId String The unique ID of the application.
    Note: This field may return null, indicating that no valid values can be obtained.
    RequestId String The unique request ID generated by the server and returned for every request (a request that fails to reach the server will not obtain a RequestId). RequestId is required for locating a problem.

    4. Example

    Example 1: Creating a Spark job

    This example shows you how to create a Spark job.

    Input Example

    POST / HTTP/1.1
    Host: dlc.tencentcloudapi.com
    Content-Type: application/json
    X-TC-Action: CreateSparkApp
    <Common request parameters>
    
    {
        "AppName": "spark-test",
        "AppType": 1,
        "DataEngine": "spark-engine",
        "Eni": "kafka-eni",
        "IsLocal": "cos",
        "AppFile": "test.jar",
        "RoleArn": 12,
        "MainClass": "com.test.WordCount",
        "AppConf": "spark-default.properties",
        "IsLocalJars": "cos",
        "AppJars": "com.test2.jar",
        "IsLocalFiles": "cos",
        "AppFiles": "spark-default.properties",
        "AppDriverSize": "small",
        "AppExecutorSize": "small",
        "AppExecutorNums": 1,
        "AppExecutorMaxNumbers": 1
    }
    

    Output Example

    {
        "Response": {
            "SparkAppId": "2aedsa7a-9f72-44aa-9fd4-65cb739d6301",
            "RequestId": "2ae4707a-9f72-44aa-9fd4-65cb739d6301"
        }
    }
    

    5. Developer Resources

    SDK

    TencentCloud API 3.0 integrates SDKs that support various programming languages to make it easier for you to call APIs.
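
    For example, with the Python SDK (tencentcloud-sdk-python), a call to this API can be sketched as below. This is a minimal sketch that assumes the auto-generated DLC module (tencentcloud.dlc.v20210125) follows the standard SDK pattern; the credentials, region, and job values are placeholders.

    # Minimal sketch using tencentcloud-sdk-python; assumes the auto-generated
    # DLC module follows the standard TencentCloud SDK pattern.
    import json

    from tencentcloud.common import credential
    from tencentcloud.common.profile.client_profile import ClientProfile
    from tencentcloud.common.profile.http_profile import HttpProfile
    from tencentcloud.dlc.v20210125 import dlc_client, models

    # Placeholder credentials and region -- replace with your own.
    cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
    http_profile = HttpProfile()
    http_profile.endpoint = "dlc.tencentcloudapi.com"
    client_profile = ClientProfile()
    client_profile.httpProfile = http_profile
    client = dlc_client.DlcClient(cred, "ap-guangzhou", client_profile)

    # Build the request from the same JSON shape as the input example above.
    req = models.CreateSparkAppRequest()
    req.from_json_string(json.dumps({
        "AppName": "spark-test",
        "AppType": 1,
        "DataEngine": "spark-engine",
        "AppFile": "test.jar",
        "RoleArn": 12,
        "MainClass": "com.test.WordCount",
        "AppDriverSize": "small",
        "AppExecutorSize": "small",
        "AppExecutorNums": 1
    }))

    resp = client.CreateSparkApp(req)
    print(resp.SparkAppId)  # the unique ID of the created application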

    Command Line Interface

    6. Error Code

    The following only lists the error codes related to the API business logic. For other error codes, see Common Error Codes.

    Error Code Description
    FailedOperation The operation failed.
    InternalError.InternalSystemException The business system is abnormal. Please try again or submit a ticket to contact us.
    InvalidParameter.InvalidAppFileFormat The specified Spark task package file format does not match. Currently, only .jar or .py is supported.
    InvalidParameter.InvalidDriverSize The current DriverSize specification only supports small/medium/large/xlarge/m.small/m.medium/m.large/m.xlarge.
    InvalidParameter.InvalidExecutorSize The current ExecutorSize specification only supports small/medium/large/xlarge/m.small/m.medium/m.large/m.xlarge.
    InvalidParameter.InvalidFilePathFormat The specified file path format is not compliant. Currently, only cosn:// or lakefs:// is supported.
    InvalidParameter.InvalidRoleArn The CAM role arn is invalid.
    InvalidParameter.SparkJobNotUnique The specified Spark task already exists.
    InvalidParameter.SparkJobOnlySupportSparkBatchEngine Spark tasks can only be run using the Spark job engine.
    ResourceNotFound.DataEngineNotFound The specified engine does not exist.
    ResourceNotFound.SessionInsufficientResources There are currently no resources to create a session. Please try again later or use an annual or monthly subscription cluster.
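
    With the SDK sketch in the previous section, these error codes surface as a TencentCloudSDKException whose code matches the table above. The branching below is illustrative, not an official handling policy.

    # Illustrative error handling for the CreateSparkApp sketch above.
    from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException

    try:
        resp = client.CreateSparkApp(req)
    except TencentCloudSDKException as err:
        if err.get_code() == "InvalidParameter.SparkJobNotUnique":
            # A job with this AppName already exists; choose a new name.
            print("duplicate job name:", err.get_message())
        elif err.get_code() == "ResourceNotFound.DataEngineNotFound":
            # The DataEngine value does not match an existing engine.
            print("engine not found:", err.get_message())
        else:
            raise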