Common Parameter Configuration Description of CKafka
Last updated: 2025-04-09 10:24:20

Broker Configuration Parameter Description

Current configurations on the CKafka broker side are as follows for reference:
# The maximum size of the message body, in bytes
message.max.bytes=1000012

# Whether to allow automatic creation of topics. The default is false; currently, topics can be created and managed through the console or TencentCloud API
auto.create.topics.enable=false

# Whether to allow calling an API to delete a Topic
delete.topic.enable=true

# The maximum request size allowed by the Broker is 16 MB.
socket.request.max.bytes=16777216

# Each IP can establish up to 5000 connections with the Broker.
max.connections.per.ip=5000

# Offset retention time. The default is 7 days (10080 minutes).
offsets.retention.minutes=10080

# If there is no ACL setting, anyone is allowed to access.
allow.everyone.if.no.acl.found=true

# Log segment size is 1 GB.
log.segment.bytes=1073741824

# Log retention check interval is 5 minutes. If the configured retention time is shorter than 5 minutes, logs may not be cleaned up until the next check runs.
log.retention.check.interval.ms=300000
Description
Broker configurations not listed above follow the default configuration of open-source Kafka.

Configuration Parameter Description of Topic

1. Select a Suitable Number of Partitions

From the producer's perspective, writes to different partitions are fully parallel; from the consumer's perspective, the degree of consumption concurrency depends entirely on the number of partitions (if there are more consumers than partitions, some consumers are bound to sit idle). Therefore, selecting a suitable number of partitions is very important for maximizing the performance of a CKafka instance.
The number of partitions needs to be determined according to the throughput of production and consumption. Ideally, the number of partitions can be determined by the following formula:
Num = max( T/PT , T/CT ) = T / min( PT , CT )
Here, Num is the number of partitions, T is the target throughput, PT is the maximum throughput of a producer writing to a single partition, and CT is the maximum throughput of a consumer consuming from a single partition. The number of partitions should therefore equal the larger of T/PT and T/CT.
In practice, PT is affected by factors such as batch size, compression algorithm, acknowledgement mechanism, and number of replicas; CT depends on business logic and must be measured through actual tests in each scenario.
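As a quick sanity check, the formula above can be applied directly. The sketch below is only illustrative; all throughput figures are hypothetical examples, not measured values:

```python
# Estimate the partition count via Num = T / min(PT, CT).
# All throughput numbers (in MB/s) are hypothetical examples.
import math

def estimate_partitions(target_mbps: float, pt_mbps: float, ct_mbps: float) -> int:
    """Return the partition count suggested by Num = T / min(PT, CT), rounded up."""
    return math.ceil(target_mbps / min(pt_mbps, ct_mbps))

# Example: target 100 MB/s; one partition sustains 20 MB/s on the producer
# side and 25 MB/s on the consumer side, so the bottleneck is the producer.
print(estimate_partitions(100, 20, 25))  # -> 5
```

Rounding up errs on the side of more partitions, which keeps consumers from idling; remember from the guidance below that an excessive count has its own costs.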
It is usually recommended that the number of partitions be equal to or greater than the number of consumers to achieve maximum concurrency. For example, if there are 5 consumers, the number of partitions should be at least 5. At the same time, too many partitions reduce production throughput and lengthen leader elections, so an excessive partition count is not recommended. The following points are provided for reference:
Messages within a single partition are written sequentially.
A partition can be consumed by only one consumer process within the same consumer group.
One consumer process can consume multiple partitions at once; that is, the number of partitions caps the consumer side's concurrency.
The more partitions there are, the longer leader election takes after a failure.
Offsets are tracked at the partition level; the more partitions there are, the more time-consuming offset queries become.
The number of partitions can be increased dynamically but never decreased, and an increase triggers a message rebalance.

2. Select a Suitable Number of Replicas

Currently, the number of replicas must be greater than or equal to 2 to ensure availability. If necessary, it is recommended to have 3 replicas for high reliability.
Note
The number of replicas will impact production/consumption flow. For example, if there are 3 replicas, the actual traffic = production flow × 3.

3. Log Retention Time

A topic's log.retention.ms is set uniformly by the instance's message retention time, which is configured in the console.

4. Other Topic-Level Configurations

# Max Message Size at Topic Level
max.message.bytes=1000012

# Message format version; 0.10.2 corresponds to the V1 format.
message.format.version=0.10.2-IV0

# Whether a replica outside the ISR can be elected leader. This favors availability over reliability and carries a risk of data loss.
unclean.leader.election.enable=true

# Minimum number of replicas for the ISR to submit producer requests. If the number of replicas in sync status is less than this value, the server will no longer accept write requests with request.required.acks being -1 or all.
min.insync.replicas=1
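Taken together, the topic-level settings above form the kind of config map passed to Kafka's AdminClient when creating a topic. The sketch below is only illustrative; the values simply restate the defaults listed in this document, and the final comment names confluent-kafka's NewTopic as one possible way to apply them:

```python
# Hypothetical map of the topic-level parameters described above.
# The dotted keys are standard Kafka topic config names.
topic_config = {
    "max.message.bytes": "1000012",          # topic-level max message size
    "message.format.version": "0.10.2-IV0",  # V1 message format
    "unclean.leader.election.enable": "true",
    "min.insync.replicas": "1",              # pair with acks=-1/all on producers
}

# With e.g. confluent-kafka's AdminClient, this map would be passed as
# NewTopic(topic, num_partitions=..., replication_factor=..., config=topic_config).
for key, value in sorted(topic_config.items()):
    print(f"{key}={value}")
```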

Producer Configuration Guide

Common parameters on the production side are configured as follows. It is recommended to adjust them according to the actual business scenario:
# The producer will attempt to package messages sent to the same Partition into a single batch and send them to the Broker. batch.size sets the upper limit of the batch size. The default is 16KB. Setting batch.size too small will cause throughput to decrease, while setting it too large will cause excessive memory usage.
batch.size=16384

# There are 3 mechanisms for the ack of Kafka producer, as follows:
# -1 or all: The Broker responds to the Producer to continue sending the next (batch of) message(s) only after the leader receives the data and synchronizes it with all followers in ISRs. This configuration provides the highest data reliability. No message will be lost as long as a synchronized replica is alive. Note: This configuration cannot ensure that all replicas read and write the data before returning. It can be used in conjunction with the Topic Level parameter min.insync.replicas.
# 0: The producer continues sending the next (batch of) message(s) without waiting for Broker confirmation. This configuration provides the highest production performance but the lowest data reliability (data may be lost if the server fails; for example, if the leader dies without the producer's knowledge, the Broker never receives the messages).
# 1: The producer sends the next (batch of) message(s) after the leader has successfully received the data and confirmed it. This configuration is a trade-off between production throughput and data reliability (messages may be lost if the leader is dead but not yet replicated).

# The default value is 1 if not explicitly configured. Adjust it according to your business situation.
acks=1

# Control the maximum time for a production request to wait in the Broker for replica synchronization to meet the conditions set by acks
timeout.ms=30000

# Configure the memory used by the producer to cache messages waiting to be sent to the Broker. Users should adjust it according to the total memory size of the producer's process.
buffer.memory=33554432

# When messages are produced faster than the Sender thread can transmit them to the Broker and the memory configured by buffer.memory is exhausted, the producer's send call blocks. This parameter sets the maximum blocking time.
max.block.ms=60000

# Set the time (ms) for delayed message sending, so that more messages can be composed into a batch for sending. The default value is 0, which means send immediately. When the messages to be sent reach the size set by batch.size, the request will be sent immediately regardless of whether the time set by linger.ms has been reached.
# It is recommended to set linger.ms between 100 and 1000 according to the actual use case. A larger value yields higher throughput but correspondingly higher latency.
linger.ms=100

# The upper limit (in bytes) of messages batched per partition. When this value is reached, the producer sends the batched messages to the Broker. The default is 16384. Too small a batch.size increases the request count, which may degrade performance and stability; increase it appropriately for your scenario. Note: this value is an upper limit; if linger.ms elapses first, the producer sends the batch regardless of whether this value has been reached.
batch.size=16384

# The upper limit of the request packet size that the producer can send. The default is 1 MB. Note that when modifying this value, it must not exceed the packet size upper limit of 16 MB configured by the Broker.
max.request.size=1048576

# Compression format. Compression is not supported on versions 0.9 and below; GZip compression is not supported on versions 0.10 and above.
compression.type=[none, snappy, lz4]

# The request timeout for the client to send to the Broker. It cannot be less than the replica.lag.time.max.ms configured by the Broker. Currently, this value is 10000 ms.
request.timeout.ms=30000

# The maximum number of unacknowledged requests the client can send on a single connection. When this value is greater than 1 and retries is greater than 0, messages may be reordered. If strict message ordering is required, it is recommended to set this value to 1.
max.in.flight.requests.per.connection=5


# Number of retries when a request fails. It is recommended to set this value greater than 0 so that failed sends are retried, maximizing message delivery.
retries=0

# Wait time between a failed request and the next retry
retry.backoff.ms=100
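The producer parameters above can be collected into a single config map. This is only a sketch: the bootstrap address is a placeholder, the keys are the Java client's property names, and the retries value follows this section's recommendation rather than the default:

```python
# Producer configuration sketch; "ckafka-host:9092" is a placeholder address.
producer_config = {
    "bootstrap.servers": "ckafka-host:9092",  # placeholder, not a real endpoint
    "acks": "1",                    # trade-off between throughput and reliability
    "batch.size": 16384,            # per-partition batch upper limit, in bytes
    "linger.ms": 100,               # recommended 100-1000 for higher throughput
    "buffer.memory": 33554432,      # 32 MB send buffer
    "max.request.size": 1048576,    # must stay under the Broker's 16 MB cap
    "compression.type": "lz4",      # snappy/lz4; gzip unsupported on 0.10+
    "retries": 3,                   # > 0 so failed sends are retried
    "retry.backoff.ms": 100,
    # For strict ordering, also set max.in.flight.requests.per.connection=1.
}
for key, value in producer_config.items():
    print(f"{key}={value}")
```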

Consumer Configuration Guide

Common parameters on the consumer side are configured as follows. It is recommended to adjust them according to the actual business scenario:
# Whether to automatically commit offsets to the Broker after consuming messages, so that the latest committed offset can be read back from the Broker if the consumer fails
enable.auto.commit=true

# The interval for automatically committing offsets when enable.auto.commit=true. It is recommended to set it to at least 1000 ms.
auto.commit.interval.ms=5000

# How to initialize the offset when none exists on the Broker (e.g., on first consumption, or after the offset expires following 7 days), and how to reset the offset when an OFFSET_OUT_OF_RANGE error is received:
# earliest: automatically reset to the minimum offset of the partition
# latest (default): automatically reset to the maximum offset of the partition
# none: do not reset automatically; throw an OffsetOutOfRangeException instead
auto.offset.reset=latest

# Identifies the consumer group to which the consumer belongs
group.id=""

# Consumer timeout when the Kafka consumer group mechanism is used. If the Broker receives no heartbeat from the consumer within this period, the consumer is considered failed and the Broker initiates a rebalance. Currently, this value must fall between group.min.session.timeout.ms=6000 and group.max.session.timeout.ms=300000 in the Broker configuration.
session.timeout.ms=10000

# Interval at which consumers send heartbeats when the Kafka consumer group mechanism is used. This value must be less than session.timeout.ms, generally no more than one third of it.
heartbeat.interval.ms=3000

# The maximum interval allowed between successive poll calls when the Kafka consumer group mechanism is used. If poll is not called again within this time, the consumer is considered failed, and the Broker initiates a rebalance to reassign its partitions to other consumers.
max.poll.interval.ms=300000

# Minimum amount of data returned by a Fetch request. The default is 1 byte, meaning the request returns as soon as any data is available. Increasing this value raises throughput but also increases latency.
fetch.min.bytes=1

# The maximum size of returned data for a Fetch request. By default, it is set to 50 MB.
fetch.max.bytes=52428800

# The maximum time the Broker waits before responding to a Fetch request when fetch.min.bytes has not yet been satisfied
fetch.max.wait.ms=500

# The maximum size of data returned by each partition for a Fetch request. The default is 1 MB.
max.partition.fetch.bytes=1048576

# Maximum number of records returned by a single poll call
max.poll.records=500

# Client request timeout. If no response is received after this period, the request times out and fails.
request.timeout.ms=305000
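Likewise, the consumer parameters above can be summarized in one config map. This is only a sketch: the bootstrap address and group id are placeholders, the keys are the Java client's property names, and the trailing checks simply echo the constraints described in this section:

```python
# Consumer configuration sketch; the address and group id are placeholders.
consumer_config = {
    "bootstrap.servers": "ckafka-host:9092",  # placeholder, not a real endpoint
    "group.id": "example-group",              # placeholder consumer group
    "enable.auto.commit": True,
    "auto.commit.interval.ms": 5000,          # recommended >= 1000
    "auto.offset.reset": "latest",
    "session.timeout.ms": 10000,              # within the Broker's 6000-300000 range
    "heartbeat.interval.ms": 3000,            # generally <= session.timeout.ms / 3
    "max.poll.interval.ms": 300000,
    "fetch.min.bytes": 1,
    "fetch.max.bytes": 52428800,
    "max.partition.fetch.bytes": 1048576,
    "max.poll.records": 500,
    "request.timeout.ms": 305000,
}

# Sanity checks mirroring the constraints documented above.
assert consumer_config["heartbeat.interval.ms"] * 3 <= consumer_config["session.timeout.ms"]
assert 6000 <= consumer_config["session.timeout.ms"] <= 300000
print("consumer config OK")
```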

