| Parameter | Description |
| --- | --- |
| Task Name | Required. The name of the task. |
| Task Type | Select Offline Synchronization. |
| Development Mode | Form Mode: provides only read and write nodes; suitable for fixed-field synchronization from a single table to a single table, such as ODS-layer data synchronization that does not require data cleaning. Canvas Mode: provides read, write, and conversion nodes; suitable for data links that involve cleaning and many-to-many data links. Script Mode: initializes a script configuration page on which users first select a data source and destination; editing is disabled until both are selected, after which the corresponding script template is displayed. In the script, users can manually write parameters such as the data source and connection information, and can write SQL statements, placing the query SQL inside the connection. |
| Workflow Directory | Select an existing workflow. |
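As a rough illustration of Script Mode, the sketch below builds a script-like structure in which the connection information and the query SQL sit inside the source definition, as the table above describes. All field names (`jdbcUrl`, `querySql`, the `${bizdate}` placeholder, and so on) are hypothetical; the real template is generated by the platform after you select a source and destination.

```python
import json

# Hypothetical script-mode template; key names are illustrative only,
# not the platform's actual schema.
script = {
    "source": {
        "type": "MySQL",
        "connection": {
            "jdbcUrl": "jdbc:mysql://host:3306/demo_db",
            "username": "demo_user",
            # The query SQL is placed inside the connection, per the docs above.
            "querySql": "SELECT id, name, age FROM src_table WHERE ds = '${bizdate}'",
        },
    },
    "target": {
        "type": "Hive",
        "table": "ods_demo_table",
    },
}

print(json.dumps(script, indent=2))
```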
| Parameter | Sub-parameter | Description |
| --- | --- | --- |
| Data Type | | The data source type of the current node. |
| Adding Method | | Additional Fields: append the newly parsed fields after the original fields of the table. Overwrite Existing Fields: the newly parsed fields overwrite the original field information of the current source table. |
| Field Acquisition | | Text Parsing: parse fields from text content. JSON Parsing: input JSON content and quickly parse fields from its key/value pairs, for example {"age":10,"name":"demo"}. Fetch from Homogeneous Table: specify a table object from a data source and parse its fields. |
| Text to Be Parsed | Separator | Separates field names and types; supports tab, \|, and space, e.g. "age\|int". |
| | Quick Filling of Field Type | Common field types; supports constants, functions, variables, string, boolean, date, datetime, timestamp, time, double, float, tinyint, smallint, tinyint unsigned, int, mediumint, smallint unsigned, bigint, int unsigned, bigint unsigned, double precision, tinyint(1), char, varchar, text, varbinary, blob. |
| | Parse Data | Parses the input content. |
| Preview | Batch Delete | Select entries in the preview list, then delete the parsing results in batches. |
| | Field Name | The field name. |
| | Type | The field type. |
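The two parsing modes in the table above can be sketched as follows. This is a minimal illustration of the idea, not the platform's implementation: text parsing splits each "name|type" line on the separator, and JSON parsing infers field names and rough types from a flat JSON object such as {"age":10,"name":"demo"}. The type mapping used here is an assumption.

```python
import json

def parse_text(text, sep="|"):
    """Parse 'name<sep>type' lines, e.g. 'age|int', into (field, type) pairs."""
    fields = []
    for line in text.strip().splitlines():
        name, ftype = line.split(sep, 1)
        fields.append((name.strip(), ftype.strip()))
    return fields

def parse_json(content):
    """Infer field names and rough types from a flat JSON object."""
    # Assumed Python-to-column-type mapping, for illustration only.
    type_map = {int: "int", float: "double", bool: "boolean", str: "varchar"}
    obj = json.loads(content)
    return [(k, type_map.get(type(v), "varchar")) for k, v in obj.items()]

print(parse_text("age|int\nname|varchar"))     # [('age', 'int'), ('name', 'varchar')]
print(parse_json('{"age":10,"name":"demo"}'))  # [('age', 'int'), ('name', 'varchar')]
```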
| Parameter | Sub-parameter | Description |
| --- | --- | --- |
| Same-Name Mapping | | Establishes mapping relationships between source table fields and target table fields that have the same field name. |
| Peer Mapping | | Establishes mapping relationships between source table fields and target table fields in the same row. |
| Clear Mappings | | Clears the established mapping relationships between source table fields and target table fields. |
| Pin Mapped Fields | | Pins established mapping relationships to the top of the list. This only optimizes the frontend display and does not affect the actual storage order of the table's fields. |
| Manual Connection Mapping | | Supports manually establishing mappings between source table fields and target table fields by drawing connections. |
| Source Table | Source Table Field Name | The name of the source table field. |
| | Type | The type of the source table field. |
| | Mapping | Quickly creates a mapping. |
| Target Table | Target Table Field Name | The name of the target table field. |
| | Type | The type of the target table field. |
| Category | Parameter | Description |
| --- | --- | --- |
| Basic Information | Task Name/Type | Displays the name and type of the current task. |
| | Task Owner | One or more space members responsible for this task; defaults to the task creator. |
| | Description (Optional) | Displays the remarks of the current task. |
| | Scheduling Parameters (Optional) | Scheduling parameters are used during task scheduling. They are automatically replaced according to the business time of the task schedule and the parameters' value formats, enabling dynamic values within the task scheduling time. |
| Channel Settings | Dirty Data Threshold | Dirty data refers to data that fails to be written during synchronization. The dirty data threshold is the maximum number of dirty data entries or bytes that can be tolerated during synchronization; if it is exceeded, the task ends automatically. The default threshold is 0, meaning no dirty data is tolerated. |
| | Concurrent Number | The maximum number of concurrent operations expected during actual execution. Due to resources, data source types, and task optimization results, the actual concurrency may be less than or equal to this value. The larger the value, the more execution machine resources are pre-allocated. Note: when concurrency is greater than 1 and the source data supports a splitting key, the splitting key is mandatory; otherwise, the configured concurrency does not take effect. |
| | Sync Rate Limit | Limits the synchronization rate by traffic or number of records to protect the read and write pressure on the data source or destination. This value is the maximum rate; the default of -1 means no limit. |
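The dirty data threshold behavior described above can be modeled in one function. This is illustrative logic, not the platform's implementation: with the default threshold of 0, any dirty record ends the task.

```python
def dirty_threshold_exceeded(dirty_count, threshold=0):
    """Return True if the task should end because the tolerated amount of
    dirty data (default 0: none tolerated) has been exceeded."""
    return dirty_count > threshold

print(dirty_threshold_exceeded(1))       # True: task ends (default threshold 0)
print(dirty_threshold_exceeded(0))       # False: task continues
print(dirty_threshold_exceeded(5, 10))   # False: within tolerance
```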
| Feature | Description |
| --- | --- |
| Scheduling Cycle | The execution cycle unit for task scheduling; supports minute, hour, day, week, month, year, and one-time. |
| Effective Time | The valid time period for the scheduling configuration. The system automatically schedules within this range according to the time configuration and stops scheduling after the validity period ends. |
| Execution Time | Users can set the interval between executions and the specific start time of each execution. For example, if the interval is 10 minutes, the scheduled task runs once every 10 minutes from 00:00 to 23:59 every day between March 27, 2022 and April 27, 2022. |
| Scheduling Plan | Generated automatically based on the periodic time settings. |
| Self-Dependency | Uniformly configures the self-dependency attribute for computation tasks in the current workflow. |
| Workflow Self-Dependency | When enabled, the computation tasks in the current workflow depend on all computation tasks from the previous period of the current workflow. Workflow self-dependency takes effect only when the tasks in the current workflow share the same scheduling cycle and that cycle is daily. |
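The execution time example in the table above (a 10-minute interval from 00:00 to 23:59) can be checked with a short sketch that enumerates one day's run times. This is a minimal model of the schedule, not the scheduler itself.

```python
from datetime import datetime, timedelta

def run_times(day, start="00:00", end="23:59", interval_minutes=10):
    """List one day's scheduled run times, e.g. every 10 minutes
    from 00:00 to 23:59 as in the example above."""
    t = datetime.strptime(f"{day} {start}", "%Y-%m-%d %H:%M")
    stop = datetime.strptime(f"{day} {end}", "%Y-%m-%d %H:%M")
    times = []
    while t <= stop:
        times.append(t)
        t += timedelta(minutes=interval_minutes)
    return times

runs = run_times("2022-03-27")
print(len(runs))                         # 144 runs per day
print(runs[-1].strftime("%H:%M"))        # 23:50, the last run before 23:59
```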
| Serial Number | Parameter | Description |
| --- | --- | --- |
| 1 | Save | Click the icon to save the current task node. |
| 2 | Submit | Click the icon to submit the task node (basic node content and scheduling configuration attributes) to the scheduling system and generate a new version record. Feature limitation: a task can be submitted only after its data sources and scheduling conditions are fully configured. |
| 3 | Lock/Unlock | Click the icon to lock or unlock editing of the current file. A task locked by someone else cannot be edited. |
| 4 | Running | Click the icon to debug and run the current task node. |
| 5 | Advanced Running | Click the icon to run the current task node with variables. The system automatically pops up the time parameters and custom parameters used in the code. |
| 6 | Stop Running | Click the icon to stop debugging and running the current task node. |
| 7 | Refresh | Click the icon to refresh the content of the current task node. |
| 8 | Project Parameter | Click the icon to display the project's parameter settings. To modify them, go to Project Management. |
| 9 | Task Ops | Click the icon to go to the Task Ops page. |
| 10 | Instance Ops | Click the icon to go to the Instance Ops page. |
| 11 | Form Conversion | Click the icon to convert the task configuration to Form Mode. |
| 12 | Canvas Conversion | Click the icon to convert the task configuration to Canvas Mode. |
| 13 | Script Conversion | Click the icon to convert the task configuration to Script Mode. Once converted, it cannot be reverted to Form or Canvas Mode. |