General
How does data migration/sync with DTS affect the source database?
The data in the source database is not affected and can still be read and written normally. Data migration with DTS essentially replicates the data from the source database to the target database without deleting or modifying the source data.
The performance (especially the CPU) of the source database will be slightly affected, because DTS reads the full data of the source database during full data migration/sync, which may increase the pressure on the source database.
If the source database is MySQL (8-core CPU and 16 GB memory), the DTS task uses eight concurrent threads by default, and there is no network bottleneck, then the task affects the source database performance as follows:
Full export stage: DTS occupies about 18%–45% of the source database CPU, adds about 40–60 MB/s of query traffic to the source database, and occupies about 8 active session connections.
Incremental export stage: There is almost no pressure on the source database, and only one connection is used to listen on the source database binlogs in real time.
In the DTS full export stage, the connection details of the source database are as follows:
1. In the initial stage of a task, one thread queries the system table to obtain the export information with the following SQL statements:
SELECT TABLE_NAME, TABLE_TYPE, ENGINE, TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='db'
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE TABLE_SCHEMA='db' and TABLE_NAME='table' and CONSTRAINT_TYPE='PRIMARY KEY';
2. In the structural export stage, eight export threads (by default), each with its own connection, execute the following SQL statement:
SELECT COLUMN_NAME, DATA_TYPE, COLUMN_TYPE, NUMERIC_PRECISION, NUMERIC_SCALE, CHARACTER_SET_NAME, COLUMN_DEFAULT, IS_NULLABLE FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='db' AND TABLE_NAME='table';
3. In the data export stage, eight export threads (by default) execute SQL statements like the following:
SELECT /*!40001 SQL_NO_CACHE */ column FROM db.table WHERE id >= 'id' AND id < 'id';
4. For data export without locks, a table lock SQL statement like the following is used to obtain a consistent offset for tables without a primary key. Once the offset is obtained, the tables are unlocked.
flush table xxx with read lock
5. For data export with locks, a global lock is added with a SQL statement like the following:
flush tables with read lock
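If you want to observe these DTS sessions on the source database yourself, you can query the process list as in the sketch below. It assumes the migration task connects with a dedicated account named dts_user, which is a hypothetical name; filter by whichever account you actually configured for the task.
-- Minimal sketch: list the sessions opened by the (hypothetical) DTS account 'dts_user'.
-- During the incremental export stage, the binlog listener session shows COMMAND = 'Binlog Dump'.
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE
FROM information_schema.PROCESSLIST
WHERE USER = 'dts_user'
ORDER BY TIME DESC;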
How does data migration with DTS affect the target database?
When DTS writes full data to the target database, it mainly impacts the CPU and IOPS of the target database.
If the target database is MySQL (8-core CPU and 16 GB memory), the DTS task uses eight concurrent threads by default, and there is no network bottleneck, then the task affects the target database performance in the full import stage as follows: DTS occupies about 20%–49% of the target database CPU and 1,200–3,100 IOPS and uses fewer than 8 active session connections.
In the full import stage, the connection details of the target database are as follows:
Fewer than eight connections are creating structures in batches.
Fewer than eight connections are writing data in batches with SQL statements like the following:
insert into xxx (id,name,msg) values (xxx);
In the incremental import stage, DTS parses the incremental data in the source database binlogs into SQL statements and then executes them in the target database. The total number of connections/sessions is up to 32.
DDL statements are executed serially, and no DML statements are executed in parallel with them.
DML statements (INSERT, UPDATE, DELETE, and REPLACE) are executed over up to 32 non-persistent connections, which time out after 30 seconds.
Does the target database need to be empty when I use DTS to migrate data?
It depends on whether you choose to migrate the entire instance or specified objects.
Migrating the entire instance: The target database needs to be empty; otherwise, the system verification will fail, and the task cannot be initiated.
Migrating specified objects: You can specify objects such as databases and tables for migration. The system will verify whether the source and target databases have tables with the same name, and if so, the verification will fail, and you will be prompted to make changes first.
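If you plan to migrate an entire instance, you can confirm in advance that the target instance contains no user databases. The query below is a minimal sketch that simply excludes the MySQL system schemas; an empty result means the target instance is empty for migration purposes.
-- List non-system schemas on the target instance.
SELECT SCHEMA_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');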
Does DTS allow a read-only source instance in data migration?
Yes. In scenarios where the source database is a self-built one, you can enter the IP address of the read-only instance when configuring the connection to the source database. In scenarios where the source database is a TencentDB instance, read-only instances are supported only for data subscription, for which you need to submit a ticket to apply.
Does DTS allow a replica or secondary source database in data migration?
In scenarios where the source database is a self-built one, you can enter the IP address of the replica or secondary database when configuring the connection to the source database. In scenarios where the source database is a TencentDB instance, you can only select the instance ID but cannot connect to the replica or secondary database when configuring the source database connection.
Does the source database support data writes during migration?
Yes. Data can be normally written to the source database during the migration process, but the source database doesn't support DDL operations in the structural and full migration stages; otherwise, the migration task may fail. DDL and DML operations can be normally performed in the incremental stage.
Does the target database support data writes during migration?
Writes to the target database are allowed, but we advise against them: writing to the target database during migration may cause data inconsistency between the source and target databases. You do not need to configure the target database as read-only.
Does DTS support migration from one local database to another?
No. There must be at least one TencentDB database, either the source or target database.
Does DTS support data migration between TencentDB instances under two different Tencent Cloud accounts?
Yes. For migration between TencentDB instances under two different Tencent Cloud accounts, you need to log in to DTS with the Tencent Cloud account of the target instance. For detailed directions, see Cross-Account TencentDB Instance Migration.
Can DTS migrate different tables in the same database instance?
No. DTS only supports data migration between different source and target databases.
Can I configure multiple DTS tasks for migration from the same source database to different TencentDB instances?
Yes. You can migrate data from the same source database to multiple target databases (multiple tasks can run concurrently) and vice versa (a new task can only be initiated after the previous task enters the incremental migration stage). Note that multiple concurrent tasks may increase the access pressure on the source or target databases and thus slow down the migration. If you need to create multiple migration tasks for the same source database, then after creating the first task, you can quickly create similar tasks by clicking More > Create similar task in the Operation column.
Does DTS support scheduled automatic migration?
Yes. When modifying the configuration for a created data migration task, you can select scheduled migration and specify the start time.
Can I monitor the task progress during migration?
Yes. You can log in to the DTS console and view the migration task progress on the Data Migration page.
Why is there a 15-day limit on incremental migration?
Currently, incremental migration is performed through the nearest proxy server via Tencent Cloud Direct Connect, which avoids network jitter and ensures the quality of data transfer. The 15-day limit reduces the connection pressure on the proxy server and is only intended to make reasonable use of migration resources; connections are not forcibly closed after 15 days.
How is data accuracy ensured during data migration?
DTS uses Tencent Cloud's proprietary data migration architecture to verify data accuracy in real time and quickly detect and correct errors. This guarantees the reliability of the transferred data.
Why does data verification require that the source database instance not be read-only?
This is because data verification requires creating a new database __tencentdb__ in the source instance and writing the checksum table to it. If the instance is read-only, data verification will be skipped.
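To check in advance whether verification will be skipped, you can confirm that the source instance is not read-only; after a verification task has run, you can also see whether the __tencentdb__ database was created. A minimal sketch:
-- Data verification is skipped if the source instance is read-only.
SHOW VARIABLES LIKE 'read_only';
-- After a verification task has run, the checksum database should be visible.
SHOW DATABASES LIKE '__tencentdb__';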
Can I specify tables for migration with DTS?
Yes. You can select an entire instance or specify databases/tables as the migration object.
When does data migration stop?
If you select incremental migration, the task will not stop automatically, and you need to stop it yourself.
If you select Structural migration or Full migration as the Migration Type, the task will automatically stop upon completion.
If you select Full + incremental migration as the Migration Type, after full migration is completed, the migration task will automatically enter the incremental data sync stage, which will not stop automatically. You need to click Complete to manually stop the incremental data sync.
Manually complete incremental data sync and perform business switchover at an appropriate time:
Check whether the migration task is in the incremental sync stage without any lag. If so, stop writes to the source database for a few minutes.
Manually complete incremental sync when the data gap between the target and the source databases is 0 MB and the time lag between them is 0 seconds.
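A simple way to confirm that writes to the source database have actually stopped before you complete the sync is to check that the binlog position is no longer advancing, for example:
-- Run this twice, a few seconds apart; if File and Position are unchanged,
-- no new writes are reaching the source database binlog.
SHOW MASTER STATUS;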
Why does the data size change before and after full migration?
This is because the source and target databases differ in fragmented space, and the source database may contain data holes. As a result, after full migration is completed, the table storage space in the target database may be smaller than that in the source database. We recommend that you perform a data consistency check as instructed in Creating Data Consistency Check Task after the migration is completed to verify that the contents of the source and target databases are consistent.
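If you want to compare the storage footprint yourself, you can run a query like the following on both the source and target databases. It is a rough sketch, since the size statistics in information_schema are approximate; replace 'db' with your database name.
-- Approximate per-table data/index size and free (fragmented) space in MB for schema 'db'.
SELECT TABLE_NAME,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 2) AS size_mb,
       ROUND(DATA_FREE / 1024 / 1024, 2) AS free_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'db'
ORDER BY size_mb DESC;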
Does DTS support cross-country/region database migration?
Yes. You can implement cross-country/region data transfer over the public network.
Can I resume a DTS migration task that is interrupted abnormally?
Yes. You can configure an automatic retry policy for the migration task. When the task is interrupted by exceptions (such as the source/target database downtime or network issues), DTS will automatically retry the task in the specified time range.
MySQL
Will tables be locked during MySQL migration?
Locking in the MySQL migration process refers to adding a global lock (FTWRL, i.e., FLUSH TABLES WITH READ LOCK) to the source database, which is involved only in full data migration.
Currently, lock-free migration is implemented for migration links between MySQL, MariaDB, Percona, TDSQL-C for MySQL, TDSQL for MySQL, and TDSQL for TDStore. In this scenario, no global lock (FTWRL) is added, and only tables without a primary key are locked.
Can I migrate tables without a primary key?
Yes. However, tables with a primary key are recommended. You can initiate a task to migrate tables without a primary key, but there may be some problems:
Migrating or syncing tables without a primary key may cause data duplication.
Performing DML operations on such tables may cause a data sync delay.
Tables without a primary key will be skipped during data consistency checks.
If the source database is Alibaba Cloud ApsaraDB for RDS or PolarDB, it adds a hidden primary key column to the binlog for tables that have no primary key or non-null unique key. Because this column is invisible in the table structure, DTS may fail to identify it, and the migrated data may be abnormal.
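You can identify tables without a primary key ahead of time with a query like the one below. It is a minimal sketch that checks for a missing PRIMARY KEY constraint in a given schema; replace 'db' with your database name.
-- List tables in schema 'db' that have no PRIMARY KEY constraint.
SELECT t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
 AND c.TABLE_NAME = t.TABLE_NAME
 AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_SCHEMA = 'db'
  AND t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL;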
Can I migrate TencentDB for MySQL single-node (formerly the Basic Edition) instances?
A TencentDB for MySQL single-node instance can be used as the source database for migration over the public network rather than the private network.
It cannot be used as the target database for migration.
After setting binlog_format to row, you need to reset all business connections to the current database. If the source database is a replica, you also need to reset the master-replica sync SQL thread to prevent current business connections from writing data in the old format.
Before the above operation is completed, do not create or start a migration task; otherwise, data inconsistency may occur.
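The statements below are a minimal sketch of checking and changing this setting. Note that changing the global value only affects sessions created afterwards, which is why existing business connections (and, on a replica, the sync SQL thread) need to be reset.
-- Check the current binlog format.
SHOW VARIABLES LIKE 'binlog_format';
-- Switch to row-based logging; only new sessions pick up the global value.
SET GLOBAL binlog_format = 'ROW';
-- On a replica, restart the replication SQL thread so it picks up the new setting.
STOP SLAVE SQL_THREAD;
START SLAVE SQL_THREAD;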
What should I check if the TokuDB engine is used in the source instance?
In this case, TokuDB will be converted to InnoDB by default during migration. Tables that contain clustered indexes or use TokuDB compression need to be preprocessed before migration and are therefore not supported currently. DDL operations on TokuDB tables are not supported either.
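To see whether the source instance actually contains TokuDB tables, you can run a quick check like the sketch below.
-- List all tables that use the TokuDB storage engine.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'TokuDB';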
Can I migrate user permissions during the migration of MySQL data?
Yes. NewDTS supports migrating user permissions. For detailed directions, see Account Migration.