TDMQ for CKafka is widely used in big data scenarios, such as webpage tracking, behavior analysis, log aggregation, monitoring, streaming data processing, and online and offline data analysis.
You can simplify data integration in the following ways:
Import messages from TDMQ for CKafka into COS, Stream Compute Service, and other data warehouses.
Connect with other Tencent Cloud products using Serverless Cloud Functions (SCF) triggers.
Web Tracking
TDMQ for CKafka can process website activity data, such as page views (PV), searches, and other user behaviors, in real time and publish it to topics by type. These information flows can then be used for real-time monitoring or offline statistical analysis.
Because each user's page views generate a large amount of activity data, website activity tracking demands very high throughput. TDMQ for CKafka meets these requirements for high throughput and offline processing.
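Below is a minimal sketch of how a web application might publish tracking events to per-type topics, using the open-source kafka-python client. The broker address, topic naming scheme, and event fields are illustrative assumptions, not part of the product.

```python
# A minimal sketch of publishing user-behavior events to CKafka with the
# kafka-python client. Broker address, topic names, and event fields are
# placeholders -- replace them with your own instance and schema.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["ckafka-xxx:9092"],  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def track(event_type: str, payload: dict) -> None:
    """Publish one tracking event to a per-type topic, e.g. 'tracking-pv' or 'tracking-search'."""
    producer.send(f"tracking-{event_type}", payload)

track("pv", {"user_id": "u-1001", "page": "/index", "ts": 1700000000})
track("search", {"user_id": "u-1001", "keyword": "ckafka", "ts": 1700000001})
producer.flush()  # make sure buffered events are sent before exiting
```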
Log Aggregation
The low-latency processing capability of TDMQ for CKafka makes it easy to consume distributed data from multiple data sources. Under the same performance conditions, TDMQ for CKafka provides more durable persistent storage and lower end-to-end latency than a centralized data aggregation system.
These features make TDMQ for CKafka clusters an ideal log collection center. Multiple servers and applications can asynchronously send operation logs in batches to TDMQ for CKafka clusters instead of saving them locally or in a database. TDMQ for CKafka batches and compresses messages, making the performance overhead almost negligible for producers. Users can then analyze the pulled logs with systematic storage and analysis systems such as Hadoop.
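The batching and compression behavior described above is configured on the producer side. The following hedged sketch, again using kafka-python, shows typical settings; the broker address, topic name, and tuning values are assumptions to adapt to your own traffic.

```python
# A hedged sketch of shipping application logs to CKafka in compressed batches
# with kafka-python. Broker address and topic name are placeholders; tune
# batch_size and linger_ms to your own traffic.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["ckafka-xxx:9092"],  # placeholder broker address
    compression_type="gzip",                # compress each batch on the producer side
    batch_size=64 * 1024,                   # accumulate up to 64 KB per partition batch
    linger_ms=100,                          # wait up to 100 ms to fill a batch
    acks=1,                                 # leader acknowledgment is usually enough for logs
)

def ship_log(line: str) -> None:
    # send() is asynchronous: records are buffered and flushed in batches,
    # so the per-message overhead on the application is negligible.
    producer.send("app-logs", line.encode("utf-8"))

for line in ["INFO service started", "WARN cache miss ratio high"]:
    ship_log(line)
producer.flush()
```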
Big Data
In some big data scenarios, large amounts of concurrent data need to be processed and aggregated, which requires clusters with excellent processing performance and high scalability. In addition, the data distribution mechanism of TDMQ for CKafka clusters, covering disk space allocation, message format handling, server selection, and data compression, makes them well suited to handling large volumes of real-time messages and aggregating data from distributed applications, which also facilitates system operations (OPS).
TDMQ for CKafka clusters can thus better aggregate, process, and analyze both offline and streaming data.
User Link Observation
In a typical microservices architecture, the system consists of multiple independent services. Each service generates large amounts of monitoring data (for example, CPU and memory usage), log data (for example, request logs and error logs), and trace data (for example, service invocation links). To achieve comprehensive observability of the system, this data can be collected uniformly and sent to Kafka. Downstream systems consume the data in real time through Flink stream processing for aggregation, analysis, and anomaly detection (see the sketch below), helping the OPS team quickly detect and troubleshoot problems and improving system stability and maintainability.
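A production setup would run the aggregation and anomaly detection as a Flink job. As a simplified stand-in for illustration only, the sketch below consumes an observability topic with a plain kafka-python consumer and flags services with unusually many error logs per minute; the topic name, broker address, message fields, and threshold are all assumptions.

```python
# Simplified stand-in for a Flink anomaly-detection job: consume observability
# events from CKafka and count error logs per service in one-minute windows.
import json
import time
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "observability-logs",                    # placeholder topic
    bootstrap_servers=["ckafka-xxx:9092"],   # placeholder broker address
    group_id="anomaly-detector",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

window_start = time.time()
error_counts = Counter()

for record in consumer:
    event = record.value                     # e.g. {"service": "order", "level": "ERROR", ...}
    if event.get("level") == "ERROR":
        error_counts[event.get("service", "unknown")] += 1

    # Every 60 seconds, report services whose error count looks abnormal.
    if time.time() - window_start >= 60:
        for service, count in error_counts.items():
            if count > 100:                  # illustrative threshold
                print(f"[ALERT] {service}: {count} errors in the last minute")
        error_counts.clear()
        window_start = time.time()
```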
IoT Data Collection and Distribution
In IoT scenarios, devices deliver data to CKafka via the MQTT protocol, and the rule engine distributes the data to different systems for further processing. For example, vehicles collect various kinds of information, such as location, speed, fuel level, and engine status, through on-board sensors and controllers. This information needs to be transmitted to the car manufacturer's server in real time or periodically so that data analysis, fault alerting, remote control, and other operations can be performed.
Terminal devices access the MQTT version of the message queue via the MQTT protocol and connect to the CKafka cluster through the rule engine to forward data.
IoV service platforms, high-precision map services, location services, and other IoV-related applications can consume the data directly by subscribing to CKafka topics (see the consumer sketch below). At the same time, two-way communication of vehicle control (remote control) messages can be implemented through the MQTT version of the message queue.
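As an illustration of the consumption side, the sketch below shows an IoV application reading vehicle telemetry that the rule engine has forwarded into a CKafka topic, using kafka-python. The topic name, broker address, and telemetry schema are assumptions.

```python
# A minimal sketch of an IoV application consuming vehicle telemetry that the
# rule engine has forwarded from MQTT into a CKafka topic.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                     # placeholder topic fed by the rule engine
    bootstrap_servers=["ckafka-xxx:9092"],   # placeholder broker address
    group_id="iov-location-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    telemetry = record.value                 # e.g. {"vin": "...", "lat": ..., "lon": ..., "speed": ..., "fuel": ...}
    if telemetry.get("fuel", 100) < 10:
        print(f"[WARN] vehicle {telemetry.get('vin')} is low on fuel")
    # Location and speed fields could be pushed on to a map or location service here.
```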
Serverless Cloud Functions Triggers
TDMQ for CKafka can be used as an SCF trigger: when a message is received, a function is triggered, and the message is passed to the function as the event content. For example, when CKafka triggers a function, the function can transform the message structure, filter the message content, or deliver the message to Elasticsearch Service (ES).
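The following is a hedged sketch of a Python SCF handler for a CKafka trigger. The event layout (Records / Ckafka / msgBody) follows the commonly documented CKafka trigger format but should be verified against your runtime; the filtering rule and the ES delivery step are placeholders.

```python
# A hedged sketch of an SCF handler triggered by CKafka: transform and filter
# the incoming messages, then hand them to a downstream sink such as ES.
import json

def main_handler(event, context):
    cleaned = []
    for record in event.get("Records", []):
        body = record.get("Ckafka", {}).get("msgBody", "")  # verify field names against your runtime
        try:
            msg = json.loads(body)
        except ValueError:
            continue                      # drop messages that are not valid JSON
        if msg.get("level") == "DEBUG":
            continue                      # filter out low-value messages (illustrative rule)
        msg["source"] = "ckafka"          # transform: add or normalize fields
        cleaned.append(msg)

    # At this point `cleaned` could be bulk-written to Elasticsearch Service,
    # for example with the official `elasticsearch` client's helpers.bulk().
    return {"delivered": len(cleaned)}
```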