```sql
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT, REPLICATION SLAVE, SELECT ON *.* TO 'migration account'@'%' IDENTIFIED BY 'migration password';
FLUSH PRIVILEGES;
```
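After running the statement, you can confirm the account holds every privilege DTS needs by inspecting `SHOW GRANTS` output. A minimal sketch; the `missing_privileges` helper is illustrative, not part of DTS or MySQL:

```python
# Privileges DTS requires on the source instance, per the GRANT statement above.
REQUIRED = {"RELOAD", "LOCK TABLES", "REPLICATION CLIENT", "REPLICATION SLAVE", "SELECT"}

def missing_privileges(show_grants_line: str) -> set:
    """Return the required privileges absent from one SHOW GRANTS output line."""
    upper = show_grants_line.upper()
    return {p for p in REQUIRED if p not in upper}

# Example SHOW GRANTS output for the migration account:
line = ("GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT, REPLICATION SLAVE, "
        "SELECT ON *.* TO 'migration account'@'%'")
print(missing_privileges(line))  # set() -> nothing missing
```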
| Parameter | Description |
|---|---|
| Billing Mode | Monthly subscription and pay-as-you-go billing modes are supported. |
| Source Instance Type | Select TDSQL for MySQL, which cannot be changed after purchase. |
| Source Instance Region | Select the source instance region, which cannot be changed after purchase. |
| Target Instance Type | Select Kafka, which cannot be changed after purchase. |
| Target Instance Region | Select the target instance region, which cannot be changed after purchase. |
| Specification | Select a specification based on your business needs. The higher the specification, the higher the performance. For more information, see Billing Overview. |
| Setting Items | Parameter | Description |
|---|---|---|
| Task Configuration | Task Name | DTS automatically generates a task name, which is customizable. |
| | Running Mode | Immediate execution and scheduled execution are supported. |
| Source Instance Settings | Source Instance Type | The source instance type selected during purchase, which cannot be changed. |
| | Source Instance Region | The source instance region selected during purchase, which cannot be changed. |
| | Access Type | Select a type based on your scenario. In this scenario, you can only select Database. |
| | Account/Password | Enter the source database account and password. |
| Target Instance Settings | Target Instance Type | The target instance type selected during purchase, which cannot be changed. |
| | Target Instance Region | The target instance region selected during purchase, which cannot be changed. |
| | Access Type | Select a type based on your scenario. In this scenario, select CKafka instance. |
| | Instance ID | Select the instance ID of the target instance. |
| Setting Items | Parameter | Description |
|---|---|---|
| Data Initialization Option | Initialization Type | Structure initialization: table structures in the source instance are initialized into the target instance before the sync task runs.<br>Full data initialization: data in the source instance is initialized into the target instance before the sync task runs. If you select Full data initialization only, you need to create the table structures in the target database in advance.<br>Both options are selected by default; deselect them as needed. |
| | Format of Data Delivered to Kafka | Avro adopts a binary format with higher consumption efficiency, while JSON adopts an easier-to-use lightweight text format. |
| Policy for Syncing Data to Kafka | Topic Sync Policy | Deliver to custom topic: customize the topic name for delivery, and the target Kafka automatically creates a topic with that name. The synced data is randomly delivered to different partitions under the topic. If the target Kafka fails to create the topic, the task reports an error.<br>Deliver to a single topic: select an existing topic on the target side and deliver data based on one of several partitioning policies. Data can be delivered to a single partition of the specified topic, or delivered to different partitions by table name or by table name + primary key. |
| | Rules for delivering to custom topic | If you add multiple rules, the database and table rules are matched one by one from top to bottom. If no rule is matched, data is delivered to the topic of the last rule. If multiple rules are matched, data is delivered to the topics of all matched rules.<br>Example 1: database instance X has a database named "Users" containing tables named "Student" and "Teacher". To deliver all data in the "Users" database to a topic named "Topic_A", configure the rules as follows:<br>Rule 1: enter Topic_A for Topic Name, ^Users$ for Database Name Match, and .* for Table Name Match.<br>Rule 2: enter Topic_default for Topic Name, Databases that don't match the above rules for Database Name Match, and Tables that don't match the above rules for Table Name Match.<br>Example 2: with the same database, to deliver the data in the "Student" and "Teacher" tables to topics named "Topic_A" and "Topic_default" respectively, configure the rules as follows:<br>Rule 1: enter Topic_A for Topic Name, ^Users$ for Database Name Match, and ^Student$ for Table Name Match.<br>Rule 2: enter Topic_default for Topic Name, Databases that don't match the above rules for Database Name Match, and Tables that don't match the above rules for Table Name Match. |
| | Rules for delivering to a single topic | After you select a topic, the system partitions data based on the specified policy:<br>Deliver all data to partition 0: deliver all synced data from the source database to the first partition.<br>By table name: partition the synced data by table name, so data with the same table name is written to the same partition.<br>By table name + primary key: partition the synced data by table name and primary key. This policy suits frequently accessed data: it distributes such data from each table across different partitions by table name and primary key, improving concurrent consumption efficiency. |
| | Topic for DDL Storage | (Optional) To deliver DDL operations of the source database to a separate topic, select it here. If set, DDL operations are delivered to partition 0 of the selected topic by default; if not set, they are delivered according to the topic sync policy selected above. |
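The topic-routing and partitioning policies above can be sketched in a few lines. This is a minimal illustration only: DTS's actual rule matching and hash function are internal, and `route_topics`/`choose_partition` are hypothetical helpers, not a DTS API:

```python
import hashlib
import re

def route_topics(rules, database, table):
    """Return the topics a change record is delivered to.

    rules: list of (topic, db_pattern, table_pattern) tuples; the last rule
    acts as the catch-all for data no earlier rule matched. Rules are matched
    top to bottom; if several match, the record goes to all matched topics.
    """
    matched = [topic for topic, db_pat, tb_pat in rules[:-1]
               if re.fullmatch(db_pat, database) and re.fullmatch(tb_pat, table)]
    return matched if matched else [rules[-1][0]]

def choose_partition(table, primary_key, num_partitions):
    """Sketch of "by table name + primary key" partitioning: hashing the pair
    spreads rows of one table across partitions by key."""
    digest = hashlib.md5(f"{table}:{primary_key}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Example 2 from the table above: Student -> Topic_A, Teacher -> Topic_default.
rules = [("Topic_A", r"^Users$", r"^Student$"),
         ("Topic_default", None, None)]  # catch-all for unmatched data
print(route_topics(rules, "Users", "Student"))   # ['Topic_A']
print(route_topics(rules, "Users", "Teacher"))   # ['Topic_default']
```

Note that the catch-all rule's patterns are never evaluated here, mirroring the console's "Databases/Tables that don't match the above rules" option.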
| Setting Items | Parameter | Description |
|---|---|---|
| Data Sync Option | SQL Type | The following operations are supported: INSERT, DELETE, UPDATE, and DDL. |
| Sync Object Option | Database and Table Objects of Source Instance | Only database and table objects can be synced. |
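On the consumer side, a subscriber to the delivered topic can filter records by SQL type. A minimal sketch for the JSON delivery format, assuming a hypothetical `eventType` field (the actual field names depend on the message schema DTS delivers):

```python
import json

# SQL types DTS syncs, per the Data Sync Option table above.
SYNCED_TYPES = {"INSERT", "DELETE", "UPDATE", "DDL"}

def keep_event(raw_message: str) -> bool:
    """Return True if a delivered JSON record is one of the synced SQL types.
    The "eventType" field name is an assumption for illustration."""
    event = json.loads(raw_message)
    return event.get("eventType") in SYNCED_TYPES

print(keep_event('{"eventType": "INSERT", "table": "Student"}'))  # True
```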