Before creating the collection configuration, obtain the following information:

- The ID (`topicId`) of your log topic. For more information, see Managing Log Topic.
- The CLS domain name (`CLS_HOST`) of the region of your log topic. For details of the CLS domain name list, see Available Regions.
- The API key ID (`TmpSecretId`) and API key (`TmpSecretKey`) required for CLS authentication. To obtain the API key and API key ID, go to Manage API Key.

Run the `wget` command to download the `LogConfig.yaml` CRD declaration file, using the master node path `/usr/local/` as an example:

```bash
wget https://mirrors.tencent.com/install/cls/k8s/LogConfig.yaml
```
The `LogConfig.yaml` declaration file consists of the following two parts:

- `clsDetail`: The configuration for shipping to CLS.
- `inputDetail`: Log source configuration.

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig    ## Default value
metadata:
  name: test       ## CRD resource name, which is unique in the cluster
spec:
  clsDetail:       ## The configuration for shipping to CLS
    ...
  inputDetail:     ## Log source configuration
    ...
```
The `clsDetail` part is configured as follows:

```yaml
clsDetail:
  # To create a log topic automatically, specify both the logset name and the topic name; they cannot be modified after being defined.
  logsetName: test ## The name of the CLS logset. If there is no logset with this name, one will be created automatically. If there is such a logset, a log topic will be created under it.
  topicName: test ## The name of the CLS log topic. If there is no log topic with this name, one will be created automatically.
  # Alternatively, select an existing logset and log topic. If the logset is specified but the log topic is not, a log topic will be created automatically; it cannot be modified after being defined.
  logsetId: xxxxxx-xx-xx-xx-xxxxxxxx ## The ID of the CLS logset. The logset needs to be created in advance in CLS.
  topicId: xxxxxx-xx-xx-xx-xxxxxxxx ## The ID of the CLS log topic. The log topic needs to be created in advance in CLS and must not be occupied by other collection configurations.
  region: ap-xxx ## Topic region for cross-region shipping
  # Log topic configuration used when a log topic is created automatically. It cannot be modified after being defined.
  period: 30 ## Lifecycle in days. Value range: 1–3600. `3640` indicates permanent storage.
  storageType: hot ## Log topic storage class. Valid values: `hot` (STANDARD); `cold` (STANDARD_IA). Default value: `hot`.
  HotPeriod: 7 ## Transition cycle in days. Value range: 1–3600. It is valid only if `storageType` is `hot`.
  partitionCount: ## The number (an integer) of log topic partitions. Default value: `1`. Maximum value: `10`.
  autoSplit: true ## Whether to enable auto-split (Boolean). Default value: `true`.
  maxSplitPartitions: 10 ## The maximum number (an integer) of partitions
  tags: ## Tag description list. This parameter is used to bind a tag to a log topic. Up to nine tag key-value pairs are supported, and a resource can be bound to only one tag key.
    - key: xxx ## Tag key
      value: xxx ## Tag value
  # Collection rules
  logType: json_log ## Log parsing format. Valid values: `json_log` (JSON); `delimiter_log` (separator); `minimalist_log` (full text in a single line); `multiline_log` (full text in multiple lines); `fullregex_log` (single line - full regex); `multiline_fullregex_log` (multiple lines - full regex). Default value: `minimalist_log`.
  logFormat: xxx ## Log formatting method
  excludePaths: ## Collection path blocklist
    - type: File ## Type. Valid values: `File`, `Path`.
      value: /xx/xx/xx/xx.log ## The value of `type`
  userDefineRule: xxxxxx ## Custom collection rule, which is a serialized JSON string
  extractRule: {} ## Extraction and filter rule. If `ExtractRule` is set, `LogType` must be set. For more information, see the extractRule description below.
  AdvancedConfig: ## Advanced collection configuration
    MaxDepth: 1 ## Maximum number of directory levels
    FileTimeout: 60 ## File timeout attribute
  # Index configuration, which cannot be modified after being defined
  indexs: ## You can customize the index method and field when creating a topic.
    - indexName: ## The field for which to configure the key-value or meta field index. You don't need to add the `__TAG__.` prefix to the key of the meta field and can just use that of the corresponding field when uploading a log, as the `__TAG__.` prefix will be automatically added for display in the Tencent Cloud console.
      indexType: ## Field type. Valid values: `long`, `text`, `double`.
      tokenizer: ## Field delimiter. Each character represents a delimiter. Only English symbols and \n\t\r are supported. For `long` and `double` fields, leave it empty. For `text` fields, we recommend that you use @&?|#()='",;:<>[]{}/ \n\t\r\\ as the delimiter.
      sqlFlag: ## Whether the analysis feature is enabled for the field (Boolean)
      containZH: ## Whether Chinese characters are contained (Boolean)
```
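As a quick orientation, here is a minimal sketch of a `clsDetail` that auto-creates the logset and log topic and defines one index field; the logset/topic names and the `status` field are illustrative placeholders rather than values from your environment:

```yaml
# Minimal illustrative sketch; names and the indexed field are hypothetical.
clsDetail:
  logsetName: my-logset   # created automatically if it does not exist
  topicName: my-topic     # created automatically under the logset if it does not exist
  period: 30              # keep logs for 30 days
  storageType: hot        # STANDARD storage class
  logType: json_log       # parse each log line as JSON
  indexs:
    - indexName: status   # hypothetical first-layer JSON field to index
      indexType: long     # numeric field, so the tokenizer stays empty
      tokenizer: ""
      sqlFlag: true       # enable SQL analysis on this field
      containZH: false
```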
The `extractRule` extraction and filter rule supports the following fields:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| timeKey | String | No | The specified field in the log to be used as the log timestamp. If the configuration is empty, the actual log collection time will be used. `time_key` and `time_format` must appear in pairs. |
| timeFormat | String | No | Time field format. For more information, see the output parameters of the time format description of the `strftime` function in the C programming language. |
| delimiter | String | No | The delimiter for delimited logs, which is valid only if `log_type` is `delimiter_log`. |
| logRegex | String | No | Full log matching rule, which is valid only if `log_type` is `fullregex_log`. |
| beginningRegex | String | No | First-line matching rule, which is valid only if `log_type` is `multiline_log` or `multiline_fullregex_log`. |
| unMatchUpload | String | No | Whether to upload the logs that failed to be parsed. Valid values: `true` (yes); `false` (no). |
| unMatchedKey | String | No | The key of the log that failed to be parsed. |
| backtracking | String | No | The size of the data to be rewound in incremental collection mode. Valid values: `-1` (full collection); `0` (incremental collection). Default value: `-1`. |
| keys | Array of String | No | The key name of each extracted field. An empty key indicates that the field is discarded. This parameter is valid only if `log_type` is `delimiter_log`, `fullregex_log`, or `multiline_fullregex_log`. `json_log` logs use the keys of the JSON itself. |
| filterKeys | Array of String | No | Log keys to be filtered, which correspond to `FilterRegex` by subscript. |
| filterRegex | Array of String | No | The regex of the log keys to be filtered, which corresponds to `FilterKeys` by subscript. |
| isGBK | String | No | Whether the log is GBK-encoded. Valid values: `0` (no); `1` (yes). Note: This field may return `null`, indicating that no valid value can be obtained. |
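For illustration only, the sketch below combines a few of these fields under `clsDetail` for a JSON log: `timeKey`/`timeFormat` take the log timestamp from the `time_local` field, and `filterKeys`/`filterRegex` collect only logs whose `status` matches the regex. The topic ID and field names are placeholders, not values from your environment:

```yaml
# Illustrative extractRule sketch with placeholder values.
clsDetail:
  topicId: xxxxxx-xx-xx-xx-xxxxxxxx
  logType: json_log
  extractRule:
    timeKey: time_local                # use this field as the log timestamp
    timeFormat: '%d/%b/%Y:%H:%M:%S'    # strftime-style format matching the field value
    filterKeys:                        # filter on the `status` key ...
      - status
    filterRegex:                       # ... collecting only logs whose status matches 2xx
      - '2\d{2}'
```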
Full text in a single line: each line is a complete log, and CLS uses the line break `\n` to mark the end of a log. For easier structural management, a default key `__CONTENT__` is given to each log, but the log data itself will no longer be structured, nor will the log fields be extracted. The time attribute of a log is determined by the collection time. Suppose the raw log is as follows:

```
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
```
A sample `LogConfig` is as follows:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Single-line log
    logType: minimalist_log
```
The collected log is as follows:

```
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
```
Full text in multiple lines: a single log spans several lines, so the line break `\n` cannot be used to mark the end of a log. To help CLS distinguish between logs, a first-line regular expression is used for matching: when a line matches the preset regular expression, it is considered the beginning of a new log, and the log ends before the next matching line. A default key `__CONTENT__` is also set, but the log data itself is not structured, and no log fields are extracted. The time attribute of a log is determined by the collection time. Suppose the raw log is as follows:

```
2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:
java.lang.NullPointerException
    at com.test.logging.FooFactory.createFoo(FooFactory.java:15)
    at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
```
A sample `LogConfig` is as follows:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Multi-line log
    logType: multiline_log
    extractRule:
      # Only a line that starts with a date and time is considered the beginning of a new log; otherwise, the line break `\n` is appended to the current log.
      beginningRegex: \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+
```
The collected log is as follows:

```
__CONTENT__:2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:\njava.lang.NullPointerException\n at com.test.logging.FooFactory.createFoo(FooFactory.java:15)\n at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
```
Single line - full regex: a regular expression extracts multiple structured fields from each log line based on its `()` capture groups. Suppose the raw log is as follows:

```
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
```
A sample `LogConfig` is as follows:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Single line - full regex
    logType: fullregex_log
    extractRule:
      # Regular expression, in which the corresponding values will be extracted based on the `()` capture groups
      logRegex: (\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
      beginningRegex: (\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
      # List of extracted keys, which are in one-to-one correspondence with the extracted values
      keys: ['remote_addr','time_local','request_method','request_url','http_protocol','http_host','status','request_length','body_bytes_sent','http_referer','http_user_agent','request_time','upstream_response_time']
```
The collected log is as follows:

```
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
```
Multiple lines - full regex: for logs that span multiple lines, a first-line regular expression identifies the beginning of each log, and a full regular expression extracts fields based on its `()` capture groups. Suppose the raw log is as follows:

```
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
```
A sample `LogConfig` is as follows:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Multiple lines - full regex
    logType: multiline_fullregex_log
    extractRule:
      # The first-line full regular expression: only a line that starts with a date and time is considered the beginning of a new log; otherwise, the line break `\n` is appended to the current log.
      beginningRegex: \[\d+-\d+-\w+:\d+:\d+,\d+\]\s\[\w+\]\s.*
      # Regular expression, in which the corresponding values will be extracted based on the `()` capture groups
      logRegex: \[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
      # List of extracted keys, which are in one-to-one correspondence with the extracted values
      keys: ['time','level','msg']
```
The collected log is as follows:

```
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
```
JSON log: CLS automatically extracts the first-layer keys as field names and the first-layer values as field values to structure the entire log. Each complete log ends with a line break `\n`. Suppose the raw log is as follows:

```
{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
```
A sample `LogConfig` is as follows:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # JSON log
    logType: json_log
```
The collected log is as follows:

```
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
```
Separator log: log data is structured by a specified separator, and each complete log ends with a line break `\n`. When CLS processes separator logs, you need to define a unique key for each separated field. Suppose the raw log is as follows:

```
10.20.20.10 ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
```
A sample `LogConfig` is as follows:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Separator log
    logType: delimiter_log
    extractRule:
      # Separator
      delimiter: ':::'
      # List of extracted keys, which are in one-to-one correspondence with the separated fields
      keys: ['IP','time','request','host','status','length','bytes','referer']
```
The collected log is as follows:

```
IP: 10.20.20.10
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
```
The `inputDetail` part is configured as follows:

```yaml
inputDetail:
  type: container_stdout ## Log collection type. Valid values: `container_stdout` (container standard output); `container_file` (container file); `host_file` (host file).
  containerStdout: ## Container standard output configuration, which is valid only if `type` is `container_stdout`.
    namespace: default ## The Kubernetes namespace of the container to be collected. Separate multiple namespaces by `,`, for example, `default,namespace`. If this field is not specified, it indicates all namespaces. Note that this field cannot be specified if `excludeNamespace` is specified.
    excludeNamespace: nm1,nm2 ## The Kubernetes namespace of the container to be excluded. Separate multiple namespaces by `,`, for example, `nm1,nm2`. If this field is not specified, it indicates all namespaces. Note that this field cannot be specified if `namespace` is specified.
    nsLabelSelector: environment in (production),tier in (frontend) ## The namespace label for filtering namespaces
    allContainers: false ## Whether to collect the standard output of all containers in the specified namespace. Note that if `allContainers=true`, you cannot specify `workload`, `includeLabels`, and `excludeLabels` at the same time.
    containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
    container: xxx ## The name of the container to be or not to be collected
    includeLabels: ## The labels of the Pods to be collected. This field cannot be specified if `workload` is specified.
      key: value1 ## Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be matched. Separate multiple values by comma. If `excludeLabels` is also specified, Pods in the intersection will be matched.
    excludeLabels: ## The labels of the Pods to be excluded. This field cannot be specified if `workload`, `namespace`, and `excludeNamespace` are specified.
      key2: value2 ## Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be excluded. Separate multiple values by comma. If `includeLabels` is also specified, Pods in the intersection will be matched.
    metadataLabels: ## The Pod labels to be collected as metadata. If this field is not specified, all Pod labels will be collected as metadata.
      - label1
    metadataContainer: ## The container environments of the metadata to be collected. If this field is not specified, metadata (`namespace`, `pod_name`, `pod_ip`, `pod_uid`, `container_id`, `container_name`, and `image_name`) of all container environments will be collected.
      - namespace
    customLabels: ## Custom metadata
      label: l1
    workloads: ## The workloads of the specified workload types in the specified namespaces of the containers of the logs to be collected
      - container: xxx ## The name of the container to be collected. If this field is not specified, all containers in the workload Pod will be collected.
        containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
        kind: deployment ## Workload type. Valid values: `deployment`, `daemonset`, `statefulset`, `job`, `cronjob`.
        name: sample-app ## Workload name
        namespace: prod ## Workload namespace
  containerFile: ## Container file configuration, which is valid only if `type` is `container_file`.
    namespace: default ## The Kubernetes namespace of the container to be collected. You must specify a namespace.
    excludeNamespace: nm1,nm2 ## The Kubernetes namespace of the container to be excluded. Separate multiple namespaces by `,`, for example, `nm1,nm2`. If this field is not specified, it indicates all namespaces. Note that this field cannot be specified if `namespace` is specified.
    nsLabelSelector: environment in (production),tier in (frontend) ## The namespace label for filtering namespaces
    containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
    container: xxx ## The name of the container to be collected. If it is `*`, it indicates the names of all containers to be collected.
    logPath: /var/logs ## Log folder. Wildcards are not supported.
    filePattern: app_*.log ## Log filename. Wildcards `*` and `?` are supported. `*` indicates to match any number of characters, while `?` indicates to match any single character.
    includeLabels: ## The labels of the Pods to be collected. This field cannot be specified if `workload` is specified.
      key: value1 ## The `metadata` will be carried in the log collected based on the collection rule and reported to the consumer. Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be matched. Separate multiple values by comma. If `excludeLabels` is also specified, Pods in the intersection will be matched.
    excludeLabels: ## Pods with the specified labels will be excluded. This field cannot be specified if `workload` is specified.
      key2: value2 ## Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be excluded. Separate multiple values by comma. If `includeLabels` is also specified, Pods in the intersection will be matched.
    metadataLabels: ## The Pod labels to be collected as metadata. If this field is not specified, all Pod labels will be collected as metadata.
      - namespace
    metadataContainer: ## The container environments of the metadata to be collected. If this field is not specified, metadata (`namespace`, `pod_name`, `pod_ip`, `pod_uid`, `container_id`, `container_name`, and `image_name`) of all container environments will be collected.
    customLabels: ## Custom metadata
      key: value
    workload:
      container: xxx ## The name of the container to be collected. If this field is not specified, all containers in the workload Pod will be collected.
      containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
      kind: deployment ## Workload type. Valid values: `deployment`, `daemonset`, `statefulset`, `job`, `cronjob`.
      name: sample-app ## Workload name
      namespace: prod ## Workload namespace
  hostFile: ## Node file path, which is valid only if `type` is `host_file`.
    filePattern: '*.log' ## Log filename. Wildcards `*` and `?` are supported. `*` indicates to match any number of characters, while `?` indicates to match any single character.
    logPath: /tmp/logs ## Log folder. Wildcards are not supported.
    customLabels: ## Custom metadata
      label1: v1
```
Example: collecting the standard output of all containers in the `default` namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      namespace: default
      allContainers: true
  ...
```
Example: collecting the standard output of the containers in the Pods of the `ingress-gateway` Deployment in the `production` namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      allContainers: false
      workloads:
        - namespace: production
          name: ingress-gateway
          kind: deployment
  ...
```
Example: collecting the standard output of the Pods whose labels contain `k8s-app=nginx` in the `production` namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      namespace: production
      allContainers: false
      includeLabels:
        k8s-app: nginx
  ...
```
Example: collecting the `access.log` file in the `/data/nginx/log/` path of the `nginx` container in the Pods that belong to the `ingress-gateway` Deployment in the `production` namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_file
    containerFile:
      namespace: production
      workload:
        name: ingress-gateway
        kind: deployment
      container: nginx
      logPath: /data/nginx/log
      filePattern: access.log
  ...
```
Example: collecting the `access.log` file in the `/data/nginx/log/` path of the `nginx` container in the Pods whose labels contain `k8s-app=ingress-gateway` in the `production` namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_file
    containerFile:
      namespace: production
      includeLabels:
        k8s-app: ingress-gateway
      container: nginx
      logPath: /data/nginx/log
      filePattern: access.log
  ...
```
Example: collecting all `.log` files in the host path `/data/`:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: host_file
    hostFile:
      logPath: /data
      filePattern: '*.log'
  ...
```
After the `LogConfig.yaml` declaration file is defined as described in Step 2 (Define the LogConfig object), run the following kubectl command to create the LogConfig object based on the file:

```bash
kubectl create -f /usr/local/LogConfig.yaml
```
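To confirm that the object exists, you can list LogConfig resources with kubectl; this assumes the CRD's resource name is `logconfigs` (check `kubectl api-resources` if your cluster registers it differently):

```bash
# List LogConfig objects in the cluster (resource name assumed to be `logconfigs`)
kubectl get logconfigs
# Inspect the object created from /usr/local/LogConfig.yaml (named `test` in the sample above)
kubectl describe logconfig test
```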