The architecture of TDMQ for CKafka is as follows:
A producer can be any source of messages, such as web activity events, service logs, and similar information. It publishes messages to TDMQ for CKafka's broker cluster using a push model.
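As an illustration, here is a minimal producer sketch using the Apache Kafka Java client, which TDMQ for CKafka is compatible with; the broker address and topic name are placeholders, not values from this document:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; replace with your CKafka instance's endpoint.
        props.put("bootstrap.servers", "ckafka-broker:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Push one message to the broker cluster; "demo-topic" is a placeholder.
            producer.send(new ProducerRecord<>("demo-topic", "key", "a web activity event"));
        }
    }
}
```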
The cluster uses ZooKeeper to manage cluster configurations, perform leader elections, and handle fault tolerance.
Consumers are divided into several consumer groups. A consumer consumes messages from the broker using a pull model.
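For illustration, a minimal pull-model consumer sketch with the Apache Kafka Java client; the endpoint, group ID, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "ckafka-broker:9092"); // placeholder endpoint
        props.put("group.id", "demo-group");                  // every consumer must belong to a group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            // Pull model: the consumer fetches batches of records at its own pace.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```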
For the advantages of TDMQ for CKafka over Apache Kafka, see Strengths.
High throughput
In TDMQ for CKafka, large volumes of network data are persisted to disk, and large numbers of disk files are sent over the network. The performance of this process directly determines TDMQ for CKafka's overall throughput. High throughput is achieved through the following methods:
Efficient disk usage: Data is read and written sequentially on the disk to improve disk utilization.
Writing messages: Messages are written to the page cache and then flushed to disk by asynchronous threads.
Reading messages: Messages are sent directly from the page cache to the socket.
When the required data is not found in the page cache, disk I/O occurs: the messages are loaded from the disk into the page cache and then sent from the page cache to the socket.
Broker's zero-copy mechanism: the sendfile system call is used to send data directly from the page cache to the network, as shown in the sketch after this list.
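In the JVM, the sendfile system call is exposed through FileChannel.transferTo. The following standalone sketch (the file path and destination address are placeholders, not from this document) illustrates the zero-copy transfer pattern:

```java
import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class ZeroCopyDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder file and destination host.
        try (FileChannel file = new FileInputStream("segment.log").getChannel();
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("consumer-host", 9000))) {
            long position = 0;
            long count = file.size();
            // transferTo maps to sendfile on Linux: bytes move from the page
            // cache to the socket without being copied into user space.
            while (position < count) {
                position += file.transferTo(position, count - position, socket);
            }
        }
    }
}
```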
Reducing network overhead
Data compression mechanism: Messages are compressed to reduce the network load.
Batch processing mechanism: The producer writes data to the broker in batches, and the consumer pulls data from the broker in batches.
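Both mechanisms are exposed as standard Kafka producer settings. A configuration sketch follows; the endpoint and tuning values are illustrative assumptions, not recommendations from this document:

```java
import java.util.Properties;

public class BatchingConfigDemo {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "ckafka-broker:9092"); // placeholder endpoint
        // Compress each batch before it crosses the network.
        props.put("compression.type", "lz4");
        // Accumulate up to 64 KB per partition, waiting at most 10 ms,
        // so records are written to the broker in batches.
        props.put("batch.size", 65536);
        props.put("linger.ms", 10);
        // On the consumer side, fetch.min.bytes and max.poll.records
        // control how much data each pull returns.
        return props;
    }
}
```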
Data persistence
Data persistence is mainly implemented in TDMQ for CKafka through the following principles:
Partition storage distribution in topics
In TDMQ for CKafka's file storage system, a single topic can have multiple different partitions. Each partition corresponds to a physical folder that stores the messages and index files for that partition. For example, if two topics are created where topic 1 has 5 partitions and topic 2 has 10 partitions, then a total of 5 + 10 = 15 folders will be generated in the cluster.
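Assuming Kafka's standard on-disk layout, where each partition folder is named {topic}-{partitionId} (an assumption about the underlying storage, not stated above), the example would yield folders such as:

```
topic1-0  topic1-1  topic1-2  topic1-3  topic1-4
topic2-0  topic2-1  topic2-2  ...  topic2-9
```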
File storage method in partition
A partition is physically composed of multiple segments of equal size. These segments are read and written sequentially, allowing for the quick deletion of expired segments and improving disk utilization.
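For illustration, assuming Kafka's standard segment naming, where each segment's .log and .index files are named after the offset of the first message they contain (zero-padded to 20 digits), a partition folder might contain:

```
00000000000000000000.index
00000000000000000000.log
00000000000000368769.index
00000000000000368769.log
```

When the retention period expires, whole segments such as the first pair above can be deleted in a single operation, which is what makes expired-data cleanup fast.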
Scale out
One topic can include multiple partitions distributed across one or more brokers.
One consumer can subscribe to one or more partitions.
The producer is responsible for evenly distributing messages to the corresponding partitions (see the partitioning sketch after this list).
Messages in partitions are sequential.
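One common way a producer spreads messages across partitions is by hashing the message key. The sketch below is a simplified stand-in for this idea; Kafka's default partitioner actually uses a murmur2 hash, and the helper name here is hypothetical:

```java
public class PartitionerSketch {
    /**
     * Map a message key to one of numPartitions partitions.
     * Keyless messages could instead be distributed round-robin.
     */
    static int partitionFor(String key, int numPartitions) {
        // Simplified stand-in for Kafka's murmur2-based default partitioner.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Messages with the same key land in the same partition,
        // which preserves per-key ordering within that partition.
        System.out.println(partitionFor("user-42", 5));
        System.out.println(partitionFor("user-42", 5)); // same partition
    }
}
```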
Consumer group
TDMQ for CKafka does not delete consumed messages.
Any consumer must belong to a group.
Multiple consumers within the same consumer group do not consume the same partition simultaneously.
Different groups can consume the same message simultaneously, supporting both queue and publish-subscribe models.
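To make the two models concrete: consumers that share a group.id split the partitions among themselves (queue model), while consumers with different group.id values each receive every message (publish-subscribe model). A sketch with placeholder group names:

```java
import java.util.Properties;

public class GroupModelDemo {
    static Properties consumerProps(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "ckafka-broker:9092"); // placeholder endpoint
        props.put("group.id", groupId);
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // Queue model: two consumers in the SAME group share the partitions,
        // so each message is processed by only one of them.
        Properties workerA = consumerProps("billing");
        Properties workerB = consumerProps("billing");

        // Publish-subscribe model: a DIFFERENT group gets its own copy of
        // every message, independently of the "billing" group.
        Properties auditor = consumerProps("audit");
    }
}
```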
Multiple replicas
The multi-replica design enhances system availability and reliability. Replicas are evenly distributed across the entire cluster using the following algorithm:
1. Sort all brokers (assuming there are n brokers) and the partitions to be allocated.
2. Assign the i-th partition to the (i mod n)-th broker.
3. Assign the j-th replica of the i-th partition to the ((i + j) mod n)-th broker.
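The following sketch transcribes this assignment rule directly into code, assuming zero-based partition, replica, and broker indices (the real Kafka implementation adds a random starting shift, so treat this as an illustration of the rule, not the exact implementation):

```java
import java.util.Arrays;

public class ReplicaAssignmentDemo {
    /**
     * result[i][j] = broker index hosting the j-th replica of partition i,
     * following the (i + j) mod n rule described above.
     */
    static int[][] assignReplicas(int numPartitions, int replicationFactor, int numBrokers) {
        int[][] assignment = new int[numPartitions][replicationFactor];
        for (int i = 0; i < numPartitions; i++) {
            for (int j = 0; j < replicationFactor; j++) {
                // j = 0 gives (i mod n), the broker for the partition itself;
                // j > 0 spreads the remaining replicas over the other brokers.
                assignment[i][j] = (i + j) % numBrokers;
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 4 partitions, 3 replicas each, 3 brokers.
        for (int[] replicas : assignReplicas(4, 3, 3)) {
            System.out.println(Arrays.toString(replicas));
        }
    }
}
```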
Leader election mechanism
TDMQ for CKafka dynamically maintains a set of in-sync replicas (ISR) in ZooKeeper. All replicas in the ISR have caught up with the leader. Only members of the ISR can be elected as leaders.
If there are f + 1 replicas in the ISR, a partition can tolerate the failure of f replicas without losing committed messages. For example, an ISR of 3 replicas can survive the failure of 2 of them.
By comparison, a scheme with a total of 2f + 1 replicas (including the leader and followers) must ensure that at least f + 1 replicas have successfully replicated a message before it is committed, and no more than f replicas may fail for a new leader to be elected correctly.