Error Code | Description | Scenario | Error Message | Analysis and Solution |
1227 | There is a permission issue. | Data migration/sync/subscription | Error 1227: Access denied. | Error analysis The account that executes the task doesn't have permission to connect to the source/target database. Solution Grant the account the permissions required for your task type on the source/target database and run the task again (see the sketch after this table). |
1040 | There are too many database connections. | Data migration/sync | Error 1040: Too many connections. | Error analysis The number of connections to the source database has reached the upper limit. Solution Increase the value of max_connections in the source database to raise the connection limit (see the sketch after this table), or run the task again during off-peak hours. |
1045 | The operation was denied. | Data migration/sync/subscription | Error 1045 (28000): Access denied for user '{{xx}}'@'{{xx}}' (using password: YES) | Error analysis The account permissions or password were modified while the task was running. The DTS service IP addresses haven't been authorized in the source or target database. Solution Check whether you have modified the account or password. If so, undo the modification or change the account or password back. For more information, see Data Migration (NewDTS). Authorize the DTS service IP addresses as instructed in Adding DTS IP Address to Database Allowlist. |
1050 | The table already exists, and the DDL statement is executed repeatedly. | Data migration/sync | Error 1050: Table {{}} already exists, binlog position:<{{}}>, gtid:{{}}, related tables: {{}} | Error analysis When data is migrated or synced from multiple source databases to one target database, all the source databases executed the same DDL operation, causing a duplicate DDL operation in the target database. You can select the DDL operation in only one of the multiple source databases. While the task is running, the same table has already been created in the target database, so the DDL operation is repeated when the data synced from the source database arrives. The network is abnormal or the statement takes too long to execute, causing duplicate DDL operations during the task retry process. Solution Check whether the table already exists in the target database (see the sketch after this table). If it was created outside the task, drop or rename it, or deselect the duplicate DDL operation in all but one source database, and then run the task again. |
1054 | The table contains an unknown column. | Data migration/sync | Error 1054: Unknown column {{}} related tables: {{}} | Error analysis The table structure was not selected as a migration/sync object before the task was started, so the target database doesn't contain this column. The column was deleted from the target database while the task was running. Solution Check whether the target database contains this column. If not, add the column to the target database (see the sketch after this table) and run the task again. |
1062 | An error is reported due to a primary key conflict. | Data sync | Error 1062: Duplicate entry '{{xx}}' for key 'PRIMARY', related tables: '{{xx}}'. | Error analysis If you select Report for Primary Key Conflict Resolution in data sync, DTS reports an error when it encounters a primary key conflict between the source and target databases. While the task was running, data was manually written to the target database, so the target database contains data with the same primary key. Before the task was started, the unique key check was disabled in the source database, so the source database already contains rows with duplicate primary keys. While the task was running, a DELETE operation was not synced from the source database to the target database, so a primary key conflict occurs when a row with the same primary key is inserted into the source database and synced. Solution Check whether there are duplicate primary keys in the source database (see the sketch after this table), and if so, handle them first. Modify or delete the conflicting rows in the corresponding table in the target database and run the task again. |
1071 | The index field is too long. | Data migration/sync | Error 1071 (42000): Specified key was too long; max key length is 767 bytes. | Error analysis By default, a single-field index in the InnoDB engine can contain up to 767 bytes, that is, an index can contain up to 383 two-byte characters or 255 three-byte characters. GBK, UTF-8, and utf8mb4 encode characters with up to two, three, and four bytes respectively. On MySQL 5.6 and later versions, all MyISAM tables are automatically converted to InnoDB tables. Therefore, with the MyISAM storage engine, a self-built database may contain a composite index column of more than 767 bytes in length; however, the same table creation statement that runs normally in the self-built database won't work on MySQL 5.6 or later versions. Solution Modify the length of the index columns in the erroneous rows, or use a prefix index (see the sketch after this table). Example: create table test(test varchar(255) primary key) charset=utf8; -- Succeeds (255 × 3 = 765 bytes) create table test(test varchar(256) primary key) charset=utf8; -- Fails with error 1071 (256 × 3 = 768 bytes) |
1146 | The table doesn't exist. | Data migration/sync | Error 1146: Table '{{xx}}' doesn't exist on query. | Error analysis The table was deleted from the target database while the task was running. DDL statements that change the table structure were executed in the source database during the data export stage. The table structure was not selected as a migration/sync object before the task was started. Solution Execute the show create table xxx statement in the target database to check whether the table exists, and if not, create it in the target database manually. |
1213 | A deadlock is caused by double writes to the source and target databases. | Data migration/sync | Error 1213: Deadlock found when trying to get lock; try restarting transaction, related tables: '{{xx}}'. | Error analysis Your write operations conflict with those performed by DTS in the target database, which causes a deadlock. Solution Stop the deadlocked process (see the sketch after this table) and create the task again. We recommend that you control the lock logic of UPDATE operations in the instance, add indexes to tables, and use row locks as much as possible to reduce lock overhead. |
1236 | There is a binlog issue in the source database. | Data migration/sync/subscription | Error 1236 (HY000): Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup... | Error analysis The binlog retention period configured for the source database is too short, so the binlog may have already been purged when DTS pulls data, or the offset of the pulled binlog is incorrect. Solution Check whether the binlog retention period ( expire_logs_days ) set in the source database meets your business requirements. We recommend that you set it to three days or more (see the sketch after this table) and create the task again. |
1414 | DDL statements for table structure changes are executed in the source database during the data export stage. | Data migration | Error 1414: Table definition has changed, please retry transaction. | Error analysis You cannot execute DDL statements that change table structures in the source database during the data export stage; otherwise, an error may be reported. Solution Create the migration task again. |
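For error 1227, a minimal sketch of inspecting and granting privileges, assuming a hypothetical task account 'dts_user'@'10.%'. The exact privilege list DTS requires depends on the task type, so treat the grants below as illustrative and follow the permission requirements for your scenario:

```sql
-- Inspect what the task account can currently do.
SHOW GRANTS FOR 'dts_user'@'10.%';

-- Illustrative read privileges for pulling data and binlogs from a
-- source database; adjust to your task type. On MySQL 8.0 the
-- account must be created with CREATE USER first.
GRANT SELECT, RELOAD, SHOW DATABASES, SHOW VIEW,
      REPLICATION SLAVE, REPLICATION CLIENT
ON *.* TO 'dts_user'@'10.%';
FLUSH PRIVILEGES;
```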
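For error 1040, a sketch of checking and raising the connection limit. The value 2000 is only an illustrative example, not a recommendation:

```sql
-- Compare the current limit with actual usage.
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

-- Raise the limit at runtime; this reverts after a restart unless it
-- is also persisted in the configuration file or parameter settings.
SET GLOBAL max_connections = 2000;
```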
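For error 1050, a sketch of checking for the duplicate table and moving it aside; db1 and t1 are placeholders for the database and table named in the error message:

```sql
-- Check whether the reported table already exists in the target.
SHOW TABLES IN db1 LIKE 't1';

-- If it was created outside the task, rename it aside (or drop it)
-- before rerunning the task.
RENAME TABLE db1.t1 TO db1.t1_backup;
```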
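For error 1054, a sketch of adding the missing column to the target; db1.t1 and the column definition are placeholders that must match the column's actual definition in the source table:

```sql
-- Compare the column lists on both sides.
SHOW COLUMNS FROM db1.t1;

-- Add the missing column to the target with the same definition
-- it has in the source.
ALTER TABLE db1.t1 ADD COLUMN c1 VARCHAR(64) NULL;
```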
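For error 1062, a sketch of locating duplicate primary key values in the source, which can exist there only if unique key checks were disabled; db1.t1 and id are placeholders for your table and key column:

```sql
-- List primary key values that appear more than once.
SELECT id, COUNT(*) AS cnt
FROM db1.t1
GROUP BY id
HAVING cnt > 1;
```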
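For error 1071, besides shortening the column as in the example above, a prefix index that stays within the 767-byte limit is a common workaround; a sketch assuming utf8mb4, where indexing the first 191 characters uses at most 191 × 4 = 764 bytes:

```sql
-- Index only the first 191 characters so the key stays within
-- 767 bytes even with the 4-byte utf8mb4 character set.
CREATE TABLE test (
  test VARCHAR(256),
  PRIMARY KEY (test(191))
) CHARSET = utf8mb4;
```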
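For error 1213, a sketch of finding and stopping the deadlocked session; 12345 is a placeholder thread ID taken from the process list:

```sql
-- Inspect the most recent deadlock details.
SHOW ENGINE INNODB STATUS;

-- Find the conflicting session and terminate it.
SHOW PROCESSLIST;
KILL 12345;
```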
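For error 1236, a sketch of checking and extending binlog retention to the recommended three days; on a managed instance this is typically changed through the console's parameter settings rather than SET GLOBAL:

```sql
-- Check how long binlogs are kept.
SHOW VARIABLES LIKE 'expire_logs_days';

-- Keep binlogs for at least three days, as recommended above.
SET GLOBAL expire_logs_days = 3;
-- On MySQL 8.0 the equivalent is binlog_expire_logs_seconds:
-- SET GLOBAL binlog_expire_logs_seconds = 259200;
```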
Error Description | Scenario | Error Message | Analysis and Solution |
There is a database connection exception. | Data migration/sync/subscription | {{}}invalid connection{{}}. driver: bad connection,{{*}} dial tcp {{*}}: connect: connection refused. | Error analysis 1. The source/target database is isolated or offline. 2. The source/target database restart has not completed for a long time. 3. The source/target database source-replica switch has not completed for a long time. 4. The source/target database is overloaded. 5. The connections in the source/target database were killed manually or by programs. 6. There is a network connection failure because the source/target database's network security policy denies the access request. Solution Troubleshoot based on the above analysis and fix the problems one by one. For errors that occur on TencentDB instances, you can go to the product console and the Tencent Cloud Observability Platform to troubleshoot and fix problems. After that, execute the task again in the DTS console. If you cannot troubleshoot or fix the problem, submit a ticket for assistance. |
There is a database connection exception. | Data migration/sync/subscription | dial tcp {{*}}: connect: connection refused. | Error analysis 1. The source/target database is isolated or offline. 2. The source/target database restart has not completed for a long time. 3. The source/target database source-replica switch has not completed for a long time. 4. The source/target database is overloaded. 5. There is a network connection failure because the source/target database's network security policy denies the access request. Solution Troubleshoot based on the above analysis and fix the problems one by one. For errors that occur on TencentDB instances, you can go to the product console and the Tencent Cloud Observability Platform to troubleshoot and fix problems. After that, execute the task again in the DTS console. If you cannot troubleshoot or fix the problem, submit a ticket for assistance. |
The table fails to be locked due to a slow SQL statement in the source database. | Data migration/sync | Find Resumable Error, src db has long query sql, fix it and try it later. Find Resumable Error: Task failed due to table lock failure caused by time-consuming SQL query statements in source instance. | Error analysis If the source database has long-running (over 5s) SQL statements, DTS waits for them to complete before locking the table and exporting data, to avoid disrupting the source database business. The locking timeout is 60s by default; if the locking operation exceeds this timeout, it fails and the task reports an error. Solution Handle the slow SQL statements in the source database (see the sketch after this table), or create the task again after they finish executing. |
The binlog parameters are incorrectly set. | Data migration/sync/subscription | Statement binlog format unsupported:{{xx}}. binlog must ROW format, but MIXED now. binlog row before/after image not full, missing column {{xx}}, binlog position:{{xx}}, gtid:{{*}}. | Error analysis To ensure data accuracy and integrity, DTS checks the source database binlog parameters in the task check stage and, if any check item fails, reports an error and doesn't start the task. If you modify a source database binlog parameter after the check is passed and the task is started, the task will also report an error. Therefore, make sure the source database binlog meets the following requirements: binlog_format must be set to ROW. binlog_row_image must be set to FULL. Solution Correct the erroneous parameter as prompted or as instructed in Binlog Parameter Check (see the sketch after this table), and create the task again. Note: The modification takes effect only for new threads, so reconnect existing sessions. After the database is restarted, runtime parameter changes revert to the values in the configuration file, so check whether the parameters are still correct after a restart. |
The built-in Kafka is abnormal. | Data subscription | kafka: error while consuming {{*}}. kafka: Failed to produce message to topic. | Error analysis When the built-in Kafka component of a DTS data subscription task produces or consumes data abnormally, the backend service automatically retries and recovers. If the exception occurs, refresh the page and observe the task status. Solution If the task status doesn't change within 10 minutes after you refresh the page, submit a ticket for assistance. |
The Kafka data expires because the task has been stopped for over seven days. | Data subscription | kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition. | Error analysis The Kafka data cached in the DTS task expires if the task has been stopped or abnormal for over seven days, so the data can no longer be read. Solution Terminate the task and create a new one. For a monthly subscribed task, you can reset the task instead. |
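For the table lock failure above, a sketch of finding statements that have run longer than the 5s threshold; terminating them with KILL is optional, and 12345 is a placeholder ID taken from the query result:

```sql
-- List statements that have been running for more than 5 seconds.
SELECT id, user, db, time, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 5;

-- Optionally terminate a blocking statement.
KILL 12345;
```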
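For the binlog parameter error above, a sketch of verifying and correcting the two checked settings; SET GLOBAL affects only new sessions and is lost on restart unless persisted in the configuration:

```sql
-- Verify the two settings DTS checks.
SHOW VARIABLES LIKE 'binlog_format';     -- must be ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- must be FULL

-- Correct them at runtime; existing sessions keep the old values,
-- so reconnect, and persist the change so it survives a restart.
SET GLOBAL binlog_format = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';
```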