```yaml
clsDetail:
  ## To have the log topic created automatically, specify the logset name and the topic name at the same time.
  logsetName: test ## CLS logset name. A logset with this name is created automatically if it does not exist; if it exists, the log topic is created under it.
  topicName: test ## CLS log topic name. A log topic with this name is created automatically if it does not exist.
  # To select an existing logset and log topic, specify their IDs. If the logset is specified but the log topic is not, a log topic is created automatically.
  logsetId: xxxxxx-xx-xx-xx-xxxxxxxx ## CLS logset ID. The logset must be created in CLS in advance.
  topicId: xxxxxx-xx-xx-xx-xxxxxxxx ## CLS log topic ID. The log topic must be created in CLS in advance and must not be occupied by another collection configuration.
  logType: json_log ## Log collection format. json_log: JSON format. delimiter_log: separator-based format. minimalist_log: full text in a single line. multiline_log: full text in multiple lines. fullregex_log: full regex format. Default value: minimalist_log
  logFormat: xxx ## Log formatting method
  period: 30 ## Lifecycle in days. Value range: 1–3600. `3640` indicates permanent storage.
  partitionCount: ## Number of log topic partitions (integer). Default value: `1`. Maximum value: `10`.
  tags: ## Tag description list. This parameter binds tags to the log topic. Up to nine tag key-value pairs are supported, and the same tag key can be bound to a resource only once.
  - key: xxx ## Tag key
    value: xxx ## Tag value
  autoSplit: false ## Whether to enable automatic split (Boolean). Default value: `true`.
  maxSplitPartitions:
  storageType: hot ## Log topic storage class. Valid values: `hot` (STANDARD); `cold` (STANDARD_IA). Default value: `hot`.
  excludePaths: ## Collection path blocklist
  - type: File ## Type. Valid values: `File`, `Path`.
    value: /xx/xx/xx/xx.log ## The value of `type`
  indexs: ## You can customize the indexing method and fields when creating the topic.
  - indexName: ## When a key-value or metadata field index needs to be configured for a field, the metadata field key does not need the `__TAG__.` prefix and is the same as the key used when logs are uploaded. The `__TAG__.` prefix is added automatically for display in the console.
    indexType: ## Field type. Valid values: `long`, `text`, `double`
    tokenizer: ## Field delimiter. Each character represents a delimiter. Only English symbols and \n\t\r are supported. For `long` and `double` fields, leave it empty. For `text` fields, we recommend the delimiters @&?|#()='",;:<>[]{}/ \n\t\r\\.
    sqlFlag: ## Whether the analysis feature is enabled for the field (Boolean)
    containZH: ## Whether Chinese characters are contained (Boolean)
  region: ap-xxx ## Topic region for cross-region shipping
  userDefineRule: xxxxxx ## Custom collection rule, which is a serialized JSON string
  extractRule: {} ## Extraction and filter rule. If `ExtractRule` is set, `LogType` must be set.
```
```yaml
inputDetail:
  type: container_stdout ## Log collection type. Valid values: container_stdout (container standard output), container_file (container file), host_file (host file)
  containerStdout: ## Container standard output
    namespace: default ## The Kubernetes namespace of the containers to be collected. Separate multiple namespaces with commas, for example, `default,namespace`. If this field is not specified, all namespaces are collected. It cannot be specified together with `excludeNamespace`.
    excludeNamespace: nm1,nm2 ## The Kubernetes namespaces of the containers to be excluded. Separate multiple namespaces with commas, for example, `nm1,nm2`. If this field is not specified, all namespaces are collected. It cannot be specified together with `namespace`.
    nsLabelSelector: environment in (production),tier in (frontend) ## Filter namespaces by namespace label
    allContainers: false ## Whether to collect the standard output of all containers in the specified namespaces. If `allContainers=true`, you cannot specify `workloads`, `includeLabels`, or `excludeLabels` at the same time.
    container: xxx ## Name of the container whose logs will be collected. If the name is empty, the logs of all matching containers are collected.
    excludeLabels: ## Pods with the specified labels are excluded. This field cannot be specified together with `workloads`, `namespace`, or `excludeNamespace`.
      key2: value2 ## Multiple values of the same key can be matched. For example, `environment = production,qa` excludes Pods whose `environment` label is `production` or `qa`. Separate multiple values with commas. If `includeLabels` is also specified, Pods in the intersection are matched.
    includeLabels: ## Pods with the specified labels are collected. This field cannot be specified together with `workloads`, `namespace`, or `excludeNamespace`.
      key: value1 ## The metadata is carried in the logs collected by this rule and reported to the consumer. Multiple values of the same key can be matched. For example, `environment = production,qa` matches Pods whose `environment` label is `production` or `qa`. Separate multiple values with commas. If `excludeLabels` is also specified, Pods in the intersection are matched.
    metadataLabels: ## Pod labels to be collected as metadata. If this field is not specified, all Pod labels are collected as metadata.
    - label1
    customLabels: ## Custom metadata
      label: l1
    workloads:
    - container: xxx ## Name of the container to collect. If this parameter is not specified, all containers in the workload's Pods are collected.
      kind: deployment ## Workload type. Valid values: deployment, daemonset, statefulset, job, cronjob
      name: sample-app ## Workload name
      namespace: prod ## Workload namespace
  containerFile: ## File in the container
    namespace: default ## The Kubernetes namespace of the containers to be collected. A namespace must be specified.
    excludeNamespace: nm1,nm2 ## The Kubernetes namespaces of the containers to be excluded. Separate multiple namespaces with commas, for example, `nm1,nm2`. If this field is not specified, all namespaces are collected. It cannot be specified together with `namespace`.
    nsLabelSelector: environment in (production),tier in (frontend) ## Filter namespaces by namespace label
    container: xxx ## Name of the container whose logs will be collected. `*` indicates that the logs of all matching containers are collected.
    logPath: /var/logs ## Log folder. Wildcards are not supported.
    filePattern: app_*.log ## Log file name. The wildcards `*` and `?` are supported: `*` matches multiple random characters, and `?` matches a single random character.
    customLabels: ## Custom metadata
      key: value
    excludeLabels: ## Pods with the specified labels are excluded. This field cannot be specified together with `workload`.
      key2: value2 ## Multiple values of the same key can be matched. For example, `environment = production,qa` excludes Pods whose `environment` label is `production` or `qa`. Separate multiple values with commas. If `includeLabels` is also specified, Pods in the intersection are matched.
    includeLabels: ## Pods with the specified labels are collected. This field cannot be specified together with `workload`.
      key: value1 ## The metadata is carried in the logs collected by this rule and reported to the consumer. Multiple values of the same key can be matched. For example, `environment = production,qa` matches Pods whose `environment` label is `production` or `qa`. Separate multiple values with commas. If `excludeLabels` is also specified, Pods in the intersection are matched.
    metadataLabels: ## Pod labels to be collected as metadata. If this field is not specified, all Pod labels are collected as metadata.
    - label1 ## Pod label
    workload:
      container: xxx ## Name of the container to collect. If this parameter is not specified, all containers in the workload's Pods are collected.
      name: sample-app ## Workload name
```
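Taken together, a LogConfig combines one `clsDetail` block and one `inputDetail` block in a single custom resource. The following is a minimal sketch assembled from the fields above; the resource name, topic ID, and the multi-value `environment` label are placeholders for illustration, and the `metadata.name` field is simply the standard Kubernetes object name.

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
metadata:
  name: sample-logconfig                ## Placeholder name of this collection rule
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx   ## Existing CLS log topic
    logType: minimalist_log             ## Full text in a single line
  inputDetail:
    type: container_stdout              ## Collect container standard output
    containerStdout:
      namespace: default
      allContainers: false
      includeLabels:
        environment: production,qa      ## Matches Pods whose `environment` label is `production` or `qa`
```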
Full text in a single line means that each line is one complete log. When collecting logs, CLS uses `\n` as the line break that ends a log. For easier structural management, each log is given the default key `__CONTENT__`. However, the log data itself is not structured and no log fields are extracted; the `Time` attribute of a log is determined by the collection time. For more information, see Full text in a single line.

Suppose a raw log is as follows:

```
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
```
```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Single-line log
    logType: minimalist_log
```
The log collected to CLS is as follows:

```
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
```
Full text in multiple lines means that one complete log spans several lines (for example, a Java stack trace), so `\n` cannot be used as the end mark of a log. To help CLS distinguish between logs, a first-line regular expression is used for matching: when a line matches the preset regular expression, it is considered the beginning of a new log, and the next matching line marks the end of the current log. A default key `__CONTENT__` is also set in this format, but the log data itself is not structured and no log fields are extracted; the `Time` attribute of a log is determined by the collection time. For more information, see Full text in multi lines.

Suppose a raw log is as follows:

```
2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:
java.lang.NullPointerException
    at com.test.logging.FooFactory.createFoo(FooFactory.java:15)
    at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
```
```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Multi-line log
    logType: multiline_log
    extractRule:
      # Only a line that starts with a date and time is considered the beginning of a new log; otherwise the line break `\n` is appended to the end of the current log.
      beginningRegex: \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+
```
The log collected to CLS is as follows:

```
__CONTENT__:2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:\njava.lang.NullPointerException\n at com.test.logging.FooFactory.createFoo(FooFactory.java:15)\n at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
```
In full regex format, structured fields are extracted from each log with a regular expression. Suppose a raw log is as follows:

```
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
```
```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Full regex
    logType: fullregex_log
    extractRule:
      # Regular expression, in which the corresponding values will be extracted based on the `()` capture groups
      logRegex: (\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
      beginningRegex: (\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
      # List of extracted keys, which are in one-to-one correspondence with the extracted values
      keys: ['remote_addr','time_local','request_method','request_url','http_protocol','http_host','status','request_length','body_bytes_sent','http_referer','http_user_agent','request_time','upstream_response_time']
```
The log collected to CLS is as follows:

```
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
```
In multi-line full regex format, a first-line regular expression delimits logs that span multiple lines, and fields are then extracted with capture groups. Suppose a raw log is as follows:

```
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
```
```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Multi-line - full regex
    logType: multiline_fullregex_log
    extractRule:
      # First-line full regular expression: only a line that starts with a date and time is considered the beginning of a new log; otherwise the line break `\n` is appended to the end of the current log.
      beginningRegex: \[\d+-\d+-\w+:\d+:\d+,\d+\]\s\[\w+\]\s.*
      # Regular expression, in which the corresponding values will be extracted based on the `()` capture groups
      logRegex: \[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
      # List of extracted keys, which are in one-to-one correspondence with the extracted values
      keys:
        - time
        - level
        - msg
```
The log collected to CLS is as follows:

```
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
```
In JSON format, the first-layer keys of a JSON log are extracted as field names and the first-layer values as field values, and each complete log ends with the line break `\n`. For more information, see JSON Format.

Suppose a raw log is as follows:

```
{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
```
```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # JSON log
    logType: json_log
```
The log collected to CLS is as follows:

```
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
```
In separator-based format, a log is structured according to the specified separator, and each complete log ends with the line break `\n`. You need to define a unique key for each separated field so that CLS can process separator-based logs. For more information, see Separator-based Format.

Suppose a raw log is as follows:

```
10.20.20.10 ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
```
```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Separator-based log
    logType: delimiter_log
    extractRule:
      # Separator
      delimiter: ':::'
      # List of extracted keys, which are in one-to-one correspondence with the separated fields
      keys: ['IP','time','request','host','status','length','bytes','referer']
```
The log collected to CLS is as follows:

```
IP: 10.20.20.10
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
```
Collect the standard output of all containers in the default namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      namespace: default
      allContainers: true
  ...
```
Collect the standard output of the ingress-gateway Deployment in the production namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      allContainers: false
      workloads:
      - namespace: production
        name: ingress-gateway
        kind: deployment
  ...
```
Collect the standard output of Pods whose labels contain "k8s-app=nginx" in the production namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      namespace: production
      allContainers: false
      includeLabels:
        k8s-app: nginx
  ...
```
Collect the access.log file in the /data/nginx/log/ path of the nginx container in the Pods of the ingress-gateway Deployment in the production namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
  inputDetail:
    type: container_file
    containerFile:
      namespace: production
      workload:
        name: ingress-gateway
        type: deployment
      container: nginx
      logPath: /data/nginx/log
      filePattern: access.log
  ...
```
Collect the access.log file in the /data/nginx/log/ path of the nginx container in Pods whose labels contain "k8s-app=ingress-gateway" in the production namespace:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_file
    containerFile:
      namespace: production
      includeLabels:
        k8s-app: ingress-gateway
      container: nginx
      logPath: /data/nginx/log
      filePattern: access.log
  ...
```
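The collection type list at the top of this page also mentions host_file (host file), for which no field template or example is shown here. As a rough sketch only, assuming the input block is named hostFile and reuses the logPath and filePattern fields shown for containerFile (these field names are assumptions, not confirmed by this page; verify against the LogConfig CRD installed in your cluster), such a rule might look like the following:

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
  inputDetail:
    type: host_file              ## Host file collection, as listed in the `type` field above
    hostFile:                    ## Assumed block name
      logPath: /var/log/nginx    ## Assumed: log folder on the node
      filePattern: access.log    ## Assumed: log file name pattern
```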
For container standard output (container_stdout) and container files (container_file), in addition to the raw log content, the container metadata (for example, the ID of the container that generated the logs) is also carried and reported to CLS. In this way, when viewing logs, users can trace the log source or search based on the container identifier or characteristics (such as container name and labels). The metadata fields are listed in the table below.

| Field | Description |
| --- | --- |
| cluster_id | ID of the cluster to which the log belongs |
| container_name | Name of the container to which the log belongs |
| image_name | Image name of the container to which the log belongs |
| namespace | Namespace of the Pod to which the log belongs |
| pod_uid | UID of the Pod to which the log belongs |
| pod_name | Name of the Pod to which the log belongs |
| pod_ip | IP of the Pod to which the log belongs |
| pod_label_{label name} | Labels of the Pod to which the log belongs (for example, if a Pod has the two labels app=nginx and env=prod, the reported log carries two metadata entries: pod_label_app:nginx and pod_label_env:prod) |
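To make the table concrete, here is an illustration of the metadata fields that would accompany a log collected from an nginx container; the Pod name, IP, namespace, and label values are made up for this example.

```
container_name: nginx
namespace: production
pod_name: nginx-7d9b8c6f5-abcde      ## hypothetical Pod name
pod_ip: 10.0.0.25                    ## hypothetical Pod IP
pod_label_app: nginx                 ## from the Pod label app=nginx
pod_label_env: prod                  ## from the Pod label env=prod
```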