GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, PROCESS, SELECT ON *.* TO 'account'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
The ALTER VIEW statement is not supported and will be skipped during sync. Do not write data in STATEMENT format into the source database.

Operation Type | SQL Statement |
DML | INSERT, UPDATE, DELETE |
DDL | CREATE DATABASE, DROP DATABASE, ALTER DATABASE, CREATE TABLE, ALTER TABLE, DROP TABLE, TRUNCATE TABLE, CREATE VIEW, DROP VIEW, CREATE INDEX, DROP INDEX |
The CREATE TABLE <table name> AS SELECT statement is not supported.

Type | Environment Requirements |
Requirements for the source database | The source and target databases can be connected. Instance parameter requirements: The server_id parameter in the source database must be set manually and cannot be 0. row_format for the source databases/tables cannot be set to FIXED. The connect_timeout variable in the source database must be greater than or equal to 10. Binlog parameter requirements: The log_bin variable in the source database must be set to ON. The binlog_format variable in the source database must be set to ROW. The binlog_row_image variable in the source database must be set to FULL. On MySQL 5.6 or later, if the gtid_mode variable is not ON, an alarm will be triggered. We recommend that you enable gtid_mode. The do_db and ignore_db parameters cannot be set. If the source instance is a replica database, the log_slave_updates variable must be set to ON. We recommend that you retain the binlog of the source database for at least three days; otherwise, the task cannot be resumed from the checkpoint and will fail. Foreign key dependency: Foreign key dependency can be set to only one of the following two types: NO ACTION and RESTRICT. Other types may affect data consistency. During partial table sync, tables with foreign key dependencies must be synced together. The innodb_stats_on_metadata environment variable must be set to OFF. |
Requirements for the target CKafka | The target CKafka can be connected. The upper limit of the message size in the target CKafka must be greater than the maximum size of a single row of data in the source database table. |
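The instance and binlog requirements above can be pre-checked programmatically. The sketch below is illustrative only: in practice the values would come from running SHOW GLOBAL VARIABLES against the source instance, and the helper name is hypothetical, not part of DTS.

```python
# Sketch: validate source-instance variables against the DTS requirements above.
# The dict stands in for the output of `SHOW GLOBAL VARIABLES` on the source
# MySQL instance; the function name is illustrative, not a DTS API.

def check_source_requirements(variables):
    """Return a list of requirement violations (empty list means OK)."""
    problems = []
    if variables.get("server_id", "0") == "0":
        problems.append("server_id must be set manually and cannot be 0")
    if int(variables.get("connect_timeout", 0)) < 10:
        problems.append("connect_timeout must be >= 10")
    if variables.get("log_bin", "OFF") != "ON":
        problems.append("log_bin must be ON")
    if variables.get("binlog_format") != "ROW":
        problems.append("binlog_format must be ROW")
    if variables.get("binlog_row_image") != "FULL":
        problems.append("binlog_row_image must be FULL")
    if variables.get("innodb_stats_on_metadata", "OFF") != "OFF":
        problems.append("innodb_stats_on_metadata must be OFF")
    return problems

ok_vars = {
    "server_id": "2103",
    "connect_timeout": "20",
    "log_bin": "ON",
    "binlog_format": "ROW",
    "binlog_row_image": "FULL",
    "innodb_stats_on_metadata": "OFF",
}
print(check_source_requirements(ok_vars))  # []
```

An empty result means the checked variables satisfy the requirements; each violation string names the parameter to fix on the source instance.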
Parameter | Description |
Billing Modes | Monthly subscription and pay-as-you-go billing modes are supported. |
Source Instance Type | Select MySQL, which cannot be changed after purchase. |
Source Instance Region | Select the source instance region, which cannot be changed after purchase. If the source database is a self-built one, select a region nearest to it. |
Target Instance Type | Select Kafka, which cannot be changed after purchase. |
Target Instance Region | Select the target instance region, which cannot be changed after purchase. |
Specifications | Select a specification based on your business needs. The higher the specification, the higher the performance. For more information, see Billing Overview. |
Setting Items | Parameter | Description |
Task Configuration | Task Name | DTS will automatically generate a task name, which is customizable. |
| Running Mode | Immediate execution and scheduled execution are supported. |
Source Instance Settings | Source Instance Type | The source database type selected during purchase, which cannot be changed. |
| Source Instance Region | The source instance region selected during purchase, which cannot be changed. |
| Access Type | Select a type based on your scenario. For the preparations for different access types, see Preparations Overview. Public Network: The source database can be accessed through a public IP. Self-Build on CVM: The source database is deployed in a CVM instance. Direct Connect: The source database can be interconnected with VPCs through Direct Connect. VPN Access: The source database can be interconnected with VPCs through VPN Connection. Database: The source database is a TencentDB instance. CCN: The source database can be interconnected with VPCs through CCN. |
| Public Network | Host Address: IP address or domain name of the source database. Port: Port used by the source database. |
| Self-Build on CVM | CVM Instance: The ID of the CVM instance. Port: Port used by the source database. |
| Direct Connect | VPC-based Direct Connect Gateway: Only VPC-based direct connect gateway is supported. Confirm the network type associated with the gateway. VPC: Select a VPC and subnet associated with the VPC-based Direct Connect Gateway or VPN Gateway. Host Address: IP address of the source database. Port: Port used by the source database. |
| VPN Access | VPN Gateway: Select a VPN Gateway instance. VPC: Select a VPC and subnet associated with the VPC-based Direct Connect Gateway or VPN Gateway. Host Address: IP address of the source database. Port: Port used by the source database. |
| Database | Instance Name: The ID of the source database instance. |
| CCN | Host Address: IP address of the source database server. Port: Port used by the source database. VPC-based CCN Instance: The name of the CCN instance. Accessed VPC: The accessed VPC refers to the VPC in CCN over which the sync link is connected. You need to select a VPC other than the VPC to which the source database belongs. For example, if the database in Guangzhou is used as the source database, select a VPC in another region, such as Chengdu-VPC or Shanghai-VPC, as the accessed VPC. Subnet: Name of the subnet of the selected VPC. Region of Accessed VPC: The region of the source database selected during task purchase must be the same as the region of the accessed VPC; otherwise, DTS will change the former to the latter. |
| Account/Password | Account/Password: Enter the database account and password. |
Target Instance Settings | Target Instance Type | The target instance type selected during purchase, which cannot be changed. |
| Target Instance Region | The target instance region selected during purchase, which cannot be changed. |
| Access Type | Select a type based on your scenario. In this scenario, select CKafka instance. |
| Instance ID | Select the instance ID of the target instance. |
Setting Items | Parameter | Description |
Data Initialization Option | Initialization Type | Structure initialization: Table structures in the source instance will be initialized into the target instance before the sync task runs. Full data initialization: Data in the source instance will be initialized into the target instance before the sync task runs. If you select Full data initialization only, you need to create the table structures in the target database in advance. Both options are selected by default, and you can deselect them as needed. |
| Format of Data Delivered to Kafka | Avro adopts a binary format with higher consumption efficiency, while JSON adopts an easier-to-use lightweight text format. |
Policy for Syncing Data to Kafka | Topic Sync Policy | Deliver to custom topic: Set the topic name for delivery by yourself. After setting, the target Kafka will automatically create the topic. The synced data is randomly delivered to different partitions under the topic. If the target Kafka fails to create the topic, the task will report an error. Deliver to a single topic: Select an existing topic on the target side, and then deliver based on multiple partitioning policies. Data can be delivered to a single partition of the specified topic, or delivered to different partitions by table name or by table name + primary key. |
| Rules for delivering to custom topic | If you add multiple rules, the database and table rules are matched one by one from top to bottom. If no rules are matched, data will be delivered to the topic corresponding to the last rule. If multiple rules are matched, data will be delivered to the topics corresponding to all the matched rules. Example 1: There are tables named "Student" and "Teacher" in a database named "Users" on database instance X. To deliver the data in the "Users" database to a topic named "Topic_A", configure the rules as follows: Enter Topic_A for Topic Name, ^Users$ for Database Name Match, and .* for Table Name Match. Enter Topic_default for Topic Name, Databases that don't match the above rules for Database Name Match, and Tables that don't match the above rules for Table Name Match. Example 2: There are tables named "Student" and "Teacher" in a database named "Users" on database instance X. To deliver the data in the "Student" and "Teacher" tables to topics named "Topic_A" and "Topic_default" respectively, configure the rules as follows: Enter Topic_A for Topic Name, ^Users$ for Database Name Match, and ^Student$ for Table Name Match. Enter Topic_default for Topic Name, Databases that don't match the above rules for Database Name Match, and Tables that don't match the above rules for Table Name Match. |
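The top-to-bottom matching behavior described above can be sketched as follows. The rule list and names are illustrative, not the DTS implementation; the last rule plays the role of the "Databases/Tables that don't match the above rules" fallback.

```python
import re

# Sketch of the custom-topic matching rules described above.
# Rules are (topic, db_pattern, table_pattern); the last rule acts as the
# fallback when no other rule matches. Names here are illustrative only.

RULES = [
    ("Topic_A", r"^Users$", r"^Student$"),
    ("Topic_default", r".*", r".*"),  # fallback rule
]

def topics_for(db, table, rules=RULES):
    """Return every topic whose rule matches; fall back to the last rule."""
    matched = [topic for topic, db_pat, tbl_pat in rules[:-1]
               if re.match(db_pat, db) and re.match(tbl_pat, table)]
    # Multiple matches deliver to all matched topics; no match uses the last rule.
    return matched if matched else [rules[-1][0]]

print(topics_for("Users", "Student"))   # ['Topic_A']
print(topics_for("Users", "Teacher"))   # ['Topic_default']
```

This mirrors Example 2: "Student" rows match the first rule and go to Topic_A, while "Teacher" rows match nothing and fall through to Topic_default.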
| Rules for delivering to a single topic | After selecting a specified topic, the system will perform partitioning based on the specified policy as follows. Deliver all data to partition 0: Deliver all the synced data of the source database to the first partition. By table name: Partition the synced data from the source database by table name. After setting, the data with the same table name will be written into the same partition. By table name + primary key: Partition the synced data from the source database by table name and primary key. This policy is suitable for frequently accessed data: after setting, frequently accessed data is distributed from tables to different partitions by table name and primary key, improving concurrent consumption efficiency. |
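The three partitioning policies can be sketched as a keyed hash, a minimal illustration under assumed details: the hash choice (MD5 of the key string) and the function name are hypothetical, since the document does not specify DTS's actual hashing.

```python
import hashlib

# Sketch of the three single-topic partitioning policies described above.
# The md5-based hashing is an illustrative assumption, not DTS's actual scheme.

def partition_for(policy, table, primary_key=None, num_partitions=8):
    if policy == "all_to_partition_0":
        return 0  # everything goes to the first partition
    if policy == "by_table_name":
        key = table  # same table -> same partition
    elif policy == "by_table_name_and_pk":
        key = f"{table}:{primary_key}"  # spreads a hot table across partitions
    else:
        raise ValueError(f"unknown policy: {policy}")
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Rows from the same table always land in the same partition under "by_table_name"...
assert partition_for("by_table_name", "Student") == partition_for("by_table_name", "Student")
# ...while "by_table_name_and_pk" distributes rows of one table by primary key.
print(partition_for("by_table_name_and_pk", "Student", primary_key="42"))
```

The design trade-off mirrors the text: per-table partitioning preserves per-table ordering in one partition, while table + primary key partitioning sacrifices that for higher concurrent consumption of hot tables.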
| Topic for DDL Storage | (Optional) If you need to deliver the DDL operations of the source database to a specified topic separately, you can configure that here. After setting, DDL operations will be delivered to partition 0 of the selected topic by default; if not set, they will be delivered based on the topic rules selected above. |
Setting Items | Parameter | Description |
Data Sync Option | SQL Type | The following operations are supported: INSERT, DELETE, UPDATE, and DDL. If Custom DDL is selected, you can choose different DDL sync policies as needed. For more information, see Setting SQL Filter Policy. |
Sync Object Option | Database and Table Objects of Source Instance | Select objects to be synced, which can be basic tables, views, procedures, and functions. The sync of advanced objects is a one-time operation: only advanced objects already in the source database before the task start can be synced, while those added to the source database after the task start will not be synced to the target database. For more information, see Syncing Advanced Object. |
| Selected Object | After selecting sync objects on the left, click the arrow to display them in the Selected Object box on the right. |
| Sync Online DDL Temp Table | If you perform an online DDL operation on tables in the source database with the gh-ost or pt-osc tool, DTS can migrate the temp tables generated by online DDL changes to the target database. If you select gh-ost, DTS will migrate the temp tables (_table name_ghc, _table name_gho, and _table name_del) generated by the gh-ost tool to the target database. If you select pt-osc, DTS will migrate the temp tables (_table name_new and _table name_old) generated by the pt-osc tool to the target database. |
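The temp-table naming listed above can be recognized with simple patterns. The patterns and helper below are illustrative only, derived from the names in the table, not an official DTS matching rule.

```python
import re

# Sketch: recognize the online-DDL temp tables named in the table above
# (_<table>_ghc/_gho/_del for gh-ost, _<table>_new/_old for pt-osc).
# Illustrative patterns, not an official DTS matching rule.

GHOST_TEMP = re.compile(r"^_(?P<base>.+)_(ghc|gho|del)$")
PTOSC_TEMP = re.compile(r"^_(?P<base>.+)_(new|old)$")

def classify_temp_table(name):
    """Return ('gh-ost'|'pt-osc', base table name), or None for ordinary tables."""
    m = GHOST_TEMP.match(name)
    if m:
        return ("gh-ost", m.group("base"))
    m = PTOSC_TEMP.match(name)
    if m:
        return ("pt-osc", m.group("base"))
    return None

print(classify_temp_table("_orders_gho"))   # ('gh-ost', 'orders')
print(classify_temp_table("_orders_new"))   # ('pt-osc', 'orders')
print(classify_temp_table("orders"))        # None
```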