Directions
Creating log collection rules
1. Log in to the TKE console and choose Log Management > Log Rules in the left sidebar.
2. At the top of the Log Rules page, select the region and the cluster for which you want to configure log collection rules and click Create.
3. On the Create Log Collecting Policy page, configure the log consumer: set Type to CLS in the Consumer end area.
Rule name: You can customize the log collection rule name.
Log region: CLS supports cross-region log shipping. You can click Modify to select the destination region for log shipping.
Logset: Created logsets are displayed by log region. If the existing logsets are not suitable, you can create a new one in the CLS console. For details, see Creating Logset.
Log topic: Select the corresponding log topic under the logset. Two modes are supported: Auto-create log topic and Select existing log topic.
Advanced settings:
Default metadata: CLS sets the metadata pod_name, namespace, and container_name as indexes for log searches by default.
Custom metadata: You can customize metadata and log indexes.
Note
CLS does not support log shipping between the Chinese mainland and regions outside the Chinese mainland. For regions where CLS is not activated, logs can be shipped only to the nearest supported regions. For example, container logs collected from a Shenzhen cluster can be shipped only to Guangzhou, and container logs collected from a Tianjin cluster can be shipped only to Beijing. You can find more information in the console.
Currently, a log topic supports only one type of collected logs; that is, the log, audit, and event types cannot share the same topic. If they do, logs will be overwritten. Ensure that the selected log topic is not occupied by other collection configurations. A logset can contain up to 500 log topics.
Custom metadata and metadata indexes cannot be modified here once created; to modify the configuration, go to the CLS console.
4. Select a collection type and configure a log source. The supported collection types are Container standard output, Container file path, and Node file path.
Container Standard Output Logs
Log source supports All containers, Specify workload, and Specify Pod Labels.
Container File Path Logs
Log source supports Specify workload and Specify Pod Labels.
You can specify a file path or use wildcards for the collection path. For example, when the container file path is /opt/logs/*.log, you can specify the collection path as /opt/logs and the file name as *.log.
Note
For Container file path, the corresponding path cannot be a soft link or hard link. Otherwise, the actual path of the soft link will not exist in the collector's container, resulting in log collection failure.
Node File Path Logs
You can specify a file path or use wildcards, as shown in the sketch after this paragraph. For example, when the file paths for collection are /opt/logs/service1/*.log and /opt/logs/service2/*.log, you can specify the folder of the collection path as /opt/logs/service* and the file name as *.log.
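As a rough illustration of how the folder pattern and file name pattern combine, the Python sketch below applies glob semantics to the example paths above; glob only approximates LogListener's actual matching rules:

```python
import glob
import os

# Container file path example: folder /opt/logs, file name *.log.
container_matches = glob.glob(os.path.join("/opt/logs", "*.log"))

# Node file path example: folder wildcard /opt/logs/service*, file name *.log;
# this matches /opt/logs/service1/*.log, /opt/logs/service2/*.log, etc.
node_matches = glob.glob(os.path.join("/opt/logs/service*", "*.log"))

for path in container_matches + node_matches:
    print("would collect:", path)
```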
You can attach metadata in the key-value pair format to log records as needed.
Note
For Node file path, the corresponding path cannot be a soft link or hard link. Otherwise, the actual path of the soft link will not exist in the collector, resulting in log collection failure.
Each node log file can be collected to only one log topic.
Note
For Container standard output and Container file path (excluding Node file path and files mounted via hostPath), besides the original log content, the metadata related to the container or Kubernetes (such as the ID of the container that generated the logs) is also reported to CLS. Therefore, when viewing logs, users can trace the log source or search based on container identifiers or characteristics (such as container name and labels).
The metadata related to the container or Kubernetes is shown in the table below:
| Field | Description |
| --- | --- |
| container_id | ID of the container to which the log belongs |
| container_name | Name of the container to which the log belongs |
| image_name | Image name of the container to which the log belongs |
| namespace | Namespace of the Pod to which the log belongs |
| pod_uid | UID of the Pod to which the log belongs |
| pod_name | Name of the Pod to which the log belongs |
| pod_label_{label name} | Labels of the Pod to which the log belongs (for example, if a Pod has two labels, app=nginx and env=prod, the reported log will have two metadata entries attached: pod_label_app:nginx and pod_label_env:prod) |
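For illustration, the following Python sketch builds the metadata entries that would accompany one log record for a hypothetical Pod; all values are invented, and in practice the collector, not user code, attaches these fields:

```python
# Hypothetical container/Pod attributes; in reality the collector reads
# these from Kubernetes, and these sample values are invented.
container_id = "3f4c1a2b5d6e"
container_name = "nginx"
image_name = "nginx:1.21"
namespace = "default"
pod_uid = "b1946ac9-2d3f-4e5a-9c8b-0a1b2c3d4e5f"
pod_name = "nginx-deployment-5c689d88bb-abcde"
pod_labels = {"app": "nginx", "env": "prod"}

# Metadata attached to every reported log entry, per the table above.
metadata = {
    "container_id": container_id,
    "container_name": container_name,
    "image_name": image_name,
    "namespace": namespace,
    "pod_uid": pod_uid,
    "pod_name": pod_name,
}
# Pod labels are flattened into pod_label_{label name} entries.
for label, value in pod_labels.items():
    metadata[f"pod_label_{label}"] = value

print(metadata)  # includes pod_label_app: nginx and pod_label_env: prod
```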
5. Configure the collection policy as Full or Incremental.
Full: collects logs from the beginning of the log file.
Incremental: collects logs starting 1 MB before the end of the log file. For a log file smaller than 1 MB, incremental collection is equivalent to full collection.
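The difference between the two policies can be sketched in Python as follows, assuming a plain local file; the function is a hypothetical illustration, not LogListener's implementation:

```python
import os

INCREMENTAL_WINDOW = 1024 * 1024  # 1 MB

def start_offset(path: str, policy: str) -> int:
    """Return the byte offset where collection would begin (illustrative)."""
    size = os.path.getsize(path)
    if policy == "full":
        return 0  # Full: read from the beginning of the file.
    # Incremental: start 1 MB before the end of the file. For files
    # smaller than 1 MB this yields 0, i.e. equivalent to full collection.
    return max(0, size - INCREMENTAL_WINDOW)

# Example usage: read only the portion an incremental policy would collect.
# with open("/opt/logs/app.log", "rb") as f:
#     f.seek(start_offset("/opt/logs/app.log", "incremental"))
#     data = f.read()
```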
6. Click Next and select a log parsing method.
Encoding Mode: Supports UTF-8 and GBK.
Extraction Mode: Supports multiple extraction modes, as described below:
| Extraction Mode | Description |
| --- | --- |
| Full text in a single line | A log contains only one line of content, and the line break \n marks the end of a log. Each log is parsed into a complete string with CONTENT as the key. When log indexing is enabled, you can search for log content via full-text search. The time attribute of a log is determined by the collection time. |
| Full text in multi lines | A log spans multiple lines, and a first-line regular expression is used for matching. When a line matches the preset regular expression, it is considered the beginning of a log, and the log ends at the line before the next match. A default key, CONTENT, is set as well. The time attribute of a log is determined by the collection time. The regular expression can be generated automatically. |
| Single line - full regex | A log parsing mode in which multiple key-value pairs are extracted from a complete log. When configuring this mode, you first enter a sample log and then customize your regular expression. After the configuration is completed, the system extracts the corresponding key-value pairs according to the capture groups in the regular expression. The regular expression can be generated automatically. |
| Multiple lines - full regex | A log parsing mode in which multiple key-value pairs are extracted, based on a regular expression, from a complete log that spans multiple lines in a log text file (such as Java program logs). When configuring this mode, you first enter a sample log and then customize your regular expression. After the configuration is completed, the system extracts the corresponding key-value pairs according to the capture groups in the regular expression. The regular expression can be generated automatically. |
| JSON | A JSON log automatically extracts the key at the first layer as the field name and the value at the first layer as the field value to implement structured processing of the entire log. Each complete log ends with a line break \n. |
| Separator | Logs in this format are structured with the specified separator, and each complete log ends with a line break \n. You need to define a unique key for each separated field. Leave a field blank if you don't want to collect it. At least one field is required. |
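To make the modes above concrete, here is a minimal Python sketch that imitates four of them on invented sample logs; the sample lines and regular expressions are illustrative only, not CLS's actual parser:

```python
import json
import re

# --- Single line - full regex: capture groups become key-value pairs.
# The sample line and regex are invented for illustration.
sample = "10.0.0.1 - GET /index.html 200"
pattern = re.compile(r"(?P<ip>\S+) - (?P<method>\S+) (?P<path>\S+) (?P<status>\d+)")
m = pattern.match(sample)
if m:
    print(m.groupdict())  # {'ip': '10.0.0.1', 'method': 'GET', ...}

# --- Full text in multi lines: a first-line regex marks where each log starts.
text = ("2024-01-01 12:00:00 ERROR boom\n"
        "  at Foo.bar(Foo.java:10)\n"
        "2024-01-01 12:00:01 INFO ok\n")
first_line = re.compile(r"^\d{4}-\d{2}-\d{2} ")
logs, current = [], []
for line in text.splitlines():
    if first_line.match(line) and current:
        logs.append("\n".join(current))
        current = []
    current.append(line)
if current:
    logs.append("\n".join(current))
print(logs)  # two logs; the stack-trace line stays attached to its first line

# --- JSON: first-layer keys become field names; nested values stay whole.
fields = json.loads('{"level": "info", "msg": "started", "meta": {"pid": 1}}')
print(fields)  # {'level': 'info', 'msg': 'started', 'meta': {'pid': 1}}

# --- Separator: split on a delimiter and name each field yourself.
keys = ["level", "service", "message"]  # user-defined keys, one per field
print(dict(zip(keys, "info|auth-service|login ok".split("|"))))
```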
Filter: LogListener collects only logs that meet the filter rules. Key supports exact matching, and Filter Rule supports regular expression matching. For example, you can specify to collect only logs where ErrorCode is 404. You can enable the filter feature and configure rules as needed.
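A short Python sketch of the filter semantics described above, using an invented structured log; the key is matched exactly, and the rule value is treated as a regular expression:

```python
import re

# Hypothetical filter rule: collect only logs whose ErrorCode is 404.
filter_key = "ErrorCode"            # key is matched exactly
filter_rule = re.compile(r"^404$")  # rule value is a regular expression

logs = [
    {"ErrorCode": "404", "msg": "not found"},
    {"ErrorCode": "200", "msg": "ok"},
]
collected = [log for log in logs
             if filter_key in log and filter_rule.match(log[filter_key])]
print(collected)  # only the 404 entry is collected
```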
Note
Currently, one log topic supports only one collection configuration. Ensure that all container logs that adopt the log topic can accept the log parsing method that you choose. If you create different collection configurations under the same log topic, the earlier collection configurations will be overwritten.
7. Click Done.
Updating the log rules
1. Log in to the TKE console and choose Log Management > Log Rules in the left sidebar.
2. At the top of the Log Rules page, select the region and the cluster where you want to update the log collection rules and click Edit Collecting Rule on the right.
3. Update the configuration as needed and click Done.
Note
The logset and log topic of a created rule cannot be modified.