You can write an SCF function to process messages received by a specific CKafka instance. The SCF backend consumes the messages from CKafka as a consumer and passes them to the function.
Characteristics of CKafka triggers:
Note:
- For execution errors (including user code errors and runtime errors), the CKafka trigger retries according to the configured number of retries, which is 10,000 by default.
- For system errors, the CKafka trigger keeps retrying with exponential backoff until it succeeds.
CKafka does not actively push messages; consumers must pull messages and consume them. Therefore, when a CKafka trigger is configured, the SCF backend launches a CKafka consumption module as the consumer and creates a separate consumer group in CKafka for message consumption.
After consuming messages, the SCF backend consumption module encapsulates them into event structures based on the timeout period, the number of accumulated messages, and the maximum number of messages, and then invokes the function synchronously. Applicable limits are as follows:
The SCF backend consumption module repeats this process and preserves the order of message consumption: the next batch of messages is consumed only after the previous batch has been completely processed (sync invocation).
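The pull, batch, and synchronous-invoke loop described above can be sketched conceptually as follows. This is a simplified illustration of the mechanism, not SCF's actual implementation; `pull_messages` and `invoke_function` are hypothetical placeholders injected by the caller.

```python
import time

def consumption_loop(pull_messages, invoke_function,
                     max_messages=100, timeout_s=5.0):
    """Conceptual sketch of the SCF backend consumption loop.

    Messages are pulled from CKafka and accumulated into a batch until
    either the maximum message count or the timeout is reached. The
    function is then invoked synchronously, so the next batch is not
    consumed until the previous invocation returns - this preserves order.
    """
    while True:
        batch = []
        deadline = time.monotonic() + timeout_s
        # Accumulate until the batch is full or the timeout elapses.
        while len(batch) < max_messages and time.monotonic() < deadline:
            msg = pull_messages()
            if msg is None:
                break
            batch.append(msg)
        if not batch:
            return  # nothing left to consume (simplification for this sketch)
        event = {"Records": [{"Ckafka": m} for m in batch]}
        invoke_function(event)  # synchronous invocation
```

Because invocation is synchronous, batch size and function duration together determine consumption throughput.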
Note:
- The number of messages encapsulated in each event structure varies, ranging from 1 to the configured maximum. If the maximum number of messages is set too high, an event structure may never actually reach it.
- After your function receives the event, it should process each message in a loop, and it must not assume that the number of messages passed per invocation is constant.
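The loop-handling requirement above can be sketched in a minimal Python handler. `main_handler(event, context)` is SCF's standard Python entry point; `process_message` is a hypothetical placeholder for your own business logic.

```python
def main_handler(event, context):
    """Process a batch of CKafka messages delivered by the trigger.

    The number of records per event varies (from 1 up to the configured
    maximum), so every record is handled in a loop rather than by index.
    """
    records = event.get("Records", [])
    for record in records:
        msg = record["Ckafka"]
        # msgBody holds the raw message content; process it as needed.
        process_message(msg["msgBody"])
    return "processed %d messages" % len(records)

def process_message(body):
    # Placeholder business logic (hypothetical): just log the message.
    print("received:", body)
```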
When the specified CKafka topic receives messages, the SCF backend consumption module consumes them and encapsulates them into a JSON event like the one below, which triggers the bound function and passes the data content to it as input parameters.
{
    "Records": [
        {
            "Ckafka": {
                "topic": "test-topic",
                "Partition": 1,
                "offset": 36,
                "msgKey": "None",
                "msgBody": "Hello from Ckafka!"
            }
        },
        {
            "Ckafka": {
                "topic": "test-topic",
                "Partition": 1,
                "offset": 37,
                "msgKey": "None",
                "msgBody": "Hello from Ckafka again!"
            }
        }
    ]
}
The data structures are as detailed below:
Structure | Description |
---|---|
Records | List structure. A single event may contain multiple merged messages |
Ckafka | Identifies the event source as CKafka |
topic | Topic from which the message originates |
Partition | Partition ID of the message source |
offset | Consumption offset |
msgKey | Message key |
msgBody | Message content |
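As an illustration, the fields above can be mapped onto a small Python helper that flattens a trigger event into typed message objects. This is a sketch for your own code, not part of the SCF runtime; the `CkafkaMessage` class and `parse_event` function are hypothetical names.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CkafkaMessage:
    topic: str
    partition: int
    offset: int
    msg_key: str
    msg_body: str

def parse_event(event: dict) -> List[CkafkaMessage]:
    """Flatten a CKafka trigger event into a list of messages."""
    messages = []
    for record in event.get("Records", []):
        m = record["Ckafka"]
        messages.append(CkafkaMessage(
            topic=m["topic"],
            partition=m["Partition"],  # note: this key is capitalized in the event JSON
            offset=m["offset"],
            msg_key=m["msgKey"],
            msg_body=m["msgBody"],
        ))
    return messages
```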
When a CKafka trigger is configured, the SCF backend launches a CKafka consumption module as the consumer and creates a separate consumer group in CKafka for message consumption. The number of consumption modules equals the number of partitions in the CKafka topic.
If a large number of CKafka messages pile up, you can increase consumption capacity in the following ways: