Issue
In data subscription scenarios, if you consume data with your own consumer instead of the DTS demo, you may encounter the following issues:
1. Data cannot be consumed.
2. The consumed data is either lost or duplicated.
3. The consumer delay keeps increasing.
Troubleshooting
1. Data cannot be consumed
If your own consumer fails to consume data, first run a consumption test with the demo provided by DTS.
If the demo can consume data normally, check your own consumer.
If the demo cannot consume data either, troubleshoot further as follows:
Check the network environment of the consumer. The consumer must be in the Tencent Cloud private network and in the same region as the DTS data subscription task.
Check whether the demo startup parameters are correct, especially the consumer group password.
Check whether the demo version is correct. The required demo version varies by source database type and data format.
Check the number of unconsumed messages on the consumer group management page in the console to see if the subscription task has written data to Kafka.
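The unconsumed message count shown in the console corresponds to the standard Kafka notion of consumer lag: per partition, the log end offset (latest message written) minus the committed consumer offset. A minimal sketch of this calculation (function name and figures are illustrative, not a DTS API):

```python
# Sketch: how an "unconsumed messages" figure can be derived.
# Lag per partition = log end offset (latest written) - committed consumer offset.

def consumer_lag(end_offsets, committed_offsets):
    """Return the total number of unconsumed messages across partitions."""
    return sum(
        max(end_offsets[p] - committed_offsets.get(p, 0), 0)
        for p in end_offsets
    )

# Example: partition 0 has 1500 messages written and 1200 committed;
# partition 1 is fully caught up at 800/800.
end = {0: 1500, 1: 800}
committed = {0: 1200, 1: 800}
print(consumer_lag(end, committed))  # 300
```

A nonzero lag that stays flat while the task is running suggests the subscription task has written data to Kafka but the consumer is not keeping up (or not consuming at all).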
2. The consumed data is either lost or duplicated
When a data subscription task is restarted, the producer may write duplicate data, so the consumer may receive duplicates. This scenario is rare. In other scenarios, data duplication or data loss is not expected to occur and is usually caused by an exception in your own consumer. To troubleshoot, first reproduce the problem using either of the following methods:
In the console, change the Kafka consumption offset back to the previous offset and consume data again.
Create a consumer group and use it to consume data again. Consumption in different consumer groups does not affect each other.
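While re-consuming, you can verify loss or duplication mechanically: within each Kafka partition, offsets should advance by one, so a skipped offset indicates loss and a repeated offset indicates duplication. A minimal sketch of such a check (purely illustrative, not part of the DTS demo):

```python
# Sketch: detect gaps (possible loss) and repeats (duplication) in the
# (partition, offset) pairs observed by a consumer, in consumption order.

def check_offsets(records):
    """records: iterable of (partition, offset) pairs in consumption order.
    Returns (gaps, repeats), each a list of (partition, offset)."""
    last_seen = {}
    gaps, repeats = [], []
    for partition, offset in records:
        prev = last_seen.get(partition)
        if prev is not None:
            if offset <= prev:
                repeats.append((partition, offset))   # already consumed
            elif offset > prev + 1:
                gaps.append((partition, prev + 1))    # first missing offset
        last_seen[partition] = max(offset, prev or offset)
    return gaps, repeats

# Partition 0 repeats offset 2 and skips offset 4:
gaps, repeats = check_offsets([(0, 1), (0, 2), (0, 2), (0, 3), (0, 5)])
print(gaps, repeats)  # [(0, 4)] [(0, 2)]
```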
If the problem can be reproduced, submit a ticket to handle it; otherwise, check whether your own consumer is abnormal.
3. The consumer delay keeps increasing
1. The commit logic of the consumer has been modified.
If the consumer only consumes data but does not commit the consumption offset, the offset stored in Kafka will not be updated. By default, the DTS demo commits the consumption offset each time a checkpoint message is consumed, and the subscription service writes a checkpoint message about every 10 seconds. If you have modified this commit rule, the consumer delay may keep increasing. To solve this problem, check the commit rule of your consumer first.
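The default commit rule can be sketched as follows. The message shapes and the `commit` callback are illustrative stand-ins, not the actual DTS demo API:

```python
# Sketch of the commit rule described above: process every message, but
# commit the consumption offset only when a checkpoint message is seen.

def consume(messages, commit):
    """Process messages; call commit(offset) only at checkpoint messages."""
    for msg in messages:
        # ... apply data messages to the downstream system here ...
        if msg["type"] == "checkpoint":
            commit(msg["offset"])  # the offset in Kafka advances only here

committed = []
consume(
    [
        {"type": "data", "offset": 100},
        {"type": "data", "offset": 101},
        {"type": "checkpoint", "offset": 102},  # written ~every 10 seconds
        {"type": "data", "offset": 103},
    ],
    committed.append,
)
print(committed)  # [102]
```

If your consumer drops the checkpoint branch (or never calls commit at all), the committed offset stays fixed while the log end offset grows, so the reported delay keeps increasing even though data is being processed.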
2. The consumption efficiency is too low.
The consumption efficiency may be affected by the network condition, the processing efficiency of the consumer, and whether consumption is concurrent across multiple partitions. To find the cause of the low efficiency, create a new consumer group and compare the consumption efficiency of the DTS demo with that of your own consumer. You can also check the network condition, improve the data processing speed of your consumer, or increase the number of consumers to consume topics with multiple partitions concurrently.
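The concurrency suggestion above amounts to processing each partition in its own worker. A minimal sketch with threads (in a real deployment, adding consumer instances to the same consumer group makes Kafka assign partitions across them automatically; the backlog below is hypothetical):

```python
# Sketch: scale consumption by handling each Kafka partition in its own
# worker thread, so partitions are processed concurrently.

from concurrent.futures import ThreadPoolExecutor

def process_partition(partition, messages):
    # Stand-in for per-message handling; returns the count processed.
    return partition, len(messages)

partitions = {0: ["a", "b"], 1: ["c"], 2: ["d", "e", "f"]}  # hypothetical backlog
with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
    results = dict(pool.map(lambda kv: process_partition(*kv), partitions.items()))
print(results)  # {0: 2, 1: 1, 2: 3}
```

Note that Kafka only guarantees ordering within a partition, so per-partition workers preserve the ordering guarantees the subscription provides.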