```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "metadata": {
    "name": "configName"     // Collection rule name
  },
  "spec": {
    "inputDetail": {...},    // Log collection source definition
    "clsDetail": {...},      // Configuration for collection to CLS
    "kafkaDetail": {...}     // Configuration for collection to Kafka
  }
}
```
| Field | Required | Description |
|---|---|---|
| inputDetail | Yes | Defines the log collection source. |
| clsDetail | No | CLS delivery configuration. The configuration is synchronized to the CLS side by the logging component cls-provisioner. |
| kafkaDetail | No | Kafka delivery configuration. Defines the log collection format and related information. |
```json
{
  "type": "host_file",
  "containerStdout": {},
  "containerFile": {},
  "hostFile": {}
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| type | String | Yes | Collection type. Valid values: container_file (file collection within the container; corresponds to the containerFile configuration item), container_stdout (container standard output; corresponds to the containerStdout configuration item), host_file (node log files; corresponds to the hostFile configuration item; not supported on super nodes). |
| containerStdout | Object | No | Configuration for collecting container standard output. When using this configuration, set the collection type to container_stdout. |
| containerFile | Object | No | Configuration for collecting file paths within the container. When using this configuration, set the collection type to container_file. |
| hostFile | Object | No | Configuration for collecting node file paths. When using this configuration, set the collection type to host_file. |
```json
{
  "includeLabels": {
    "label1": "value1"
  },
  "excludeLabels": {
    "label": "value"
  },
  "metadataLabels": ["label1"],
  "metadataContainer": ["namespace", "pod_name", "pod_ip", "pod_uid", "container_id", "container_name", "image_name", "cluster_id"],
  "container": "container",
  "workloads": [
    {
      "container": "kubernetes-proxy",
      "kind": "deployment",
      "name": "kubernetes-proxy",
      "namespace": "default"
    },
    {
      "container": "testlog11",
      "kind": "deployment",
      "name": "testlog1",
      "namespace": "default"
    }
  ],
  "customLabels": {
    "xx": "xx"
  }
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| includeLabels | Map | No | Collects pods whose labels contain the specified label:value pairs. Cannot be configured together with workloads. |
| excludeLabels | Map | No | Excludes pods whose labels contain the specified label:value pairs. Cannot be configured together with workloads. |
| metadataLabels | Array | No | Defines which pod labels are reported as metadata. If not specified, all labels are reported. |
| metadataContainer | Array | No | Defines which pod and container information is reported as metadata. If not specified, all information is reported. |
| customLabels | Map | No | User-defined metadata. |
| container | String | No | Specifies which containers' logs to collect. If not specified, or set to *, the standard output of all containers in the matched pods is collected. Cannot be configured together with workloads. |
| workloads | Array | No | Specifies the workloads whose standard output is collected, along with the names of the containers to collect from. |
| Field | Type | Required | Description |
|---|---|---|---|
| container | String | No | Name of the container to collect from. If not specified, the standard output of all containers in the workload is collected. |
| kind | String | Yes | Workload type. |
| name | String | Yes | Workload name. |
| namespace | String | Yes | The namespace of the workload. |
```json
{
  "logPath": "/var/logs",
  "filePattern": "*.log",
  "filePaths": [
    {
      "path": "/var/logs",
      "file": "*.log"
    }
  ],
  "customLabels": {
    "key": "value1"
  },
  "namespace": "default",
  "nsLabelSelector": "",
  "excludeNamespace": "xxx,xx",
  "includeLabels": {
    "label": "value"
  },
  "excludeLabels": {
    "exLabel": "exvalue"
  },
  "metadataLabels": ["xx"],
  "metadataContainer": ["namespace", "pod_name", "pod_ip", "pod_uid", "container_id", "container_name", "image_name", "cluster_id"],
  "container": "xxx",
  "workload": {
    "name": "xxx",
    "kind": "deployment"
  }
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| logPath | String | No | Collection path. Cannot be configured together with filePaths. |
| filePattern | String | No | Matching pattern for the files to collect. Cannot be configured together with filePaths. Wildcards * and ? are supported. |
| filePaths | Array | Yes | Configures the in-container paths and file matching patterns to collect. Multiple collection paths can be configured. |
| includeLabels | Map | No | Collects pods whose labels contain the specified label:value pairs. Cannot be configured together with workload. |
| excludeLabels | Map | No | Excludes pods whose labels contain the specified label:value pairs. Cannot be configured together with workload. |
| metadataLabels | Array | No | Defines which pod labels are reported as metadata. If not specified, all labels are reported. |
| metadataContainer | Array | No | Defines which pod and container information is reported as metadata. If not specified, all information is reported. |
| customLabels | Map | No | User-defined metadata. |
| container | String | No | Specifies which containers' logs to collect. |
| workload | Object | No | Specifies the workload whose container log files are collected. |
| Field | Type | Required | Description |
|---|---|---|---|
| kind | String | Yes | Workload type. |
| name | String | Yes | Workload name. |
| Field | Type | Required | Description |
|---|---|---|---|
| path | String | Yes | The path of the logs to collect. |
| file | String | Yes | Name matching pattern for the log files to collect. Wildcards * and ? are supported. |
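Since filePaths accepts multiple entries, a rule that collects from two directories can list them side by side. The paths and patterns below are illustrative only:

```json
[
  { "path": "/data/nginx/log", "file": "access*.log" },
  { "path": "/data/app/log", "file": "*.log" }
]
```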
```json
{
  "logPath": "/var/logs",
  "filePattern": "*.log",
  "filePaths": [
    {
      "path": "/var/log",
      "file": "*.log"
    }
  ],
  "customLabels": {
    "key": "value"
  }
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| logPath | String | No | Collection path. Cannot be configured together with filePaths. |
| filePattern | String | No | Matching pattern for the files to collect. Cannot be configured together with filePaths. Wildcards * and ? are supported. |
| filePaths | Array | No | Configures the node paths and file matching patterns to collect. Multiple collection paths can be configured. |
| customLabels | Map | No | User-defined metadata. |
```json
{
  "topicId": "",
  "topicName": "",
  "logsetId": "",
  "logsetName": "",
  "logFormat": "",
  "logType": "",
  "hotPeriod": 7,
  "period": 30,
  "partitionCount": 1,
  "tags": [
    {
      "key": "key",
      "value": "value"
    }
  ],
  "autoSplit": false,
  "maxSplitPartitions": 50,
  "storageType": "hot",
  "extractRule": {...},
  "excludePaths": [
    {
      "excludeType": "File",
      "value": "/var/log/1.log"
    },
    {
      "excludeType": "Path",
      "value": "/var/log1/"
    }
  ],
  "fullTextIndex": {...},
  "indexs": [{...}],
  "indexStatus": "off",
  "region": "ap-xxx",
  "advancedConfig": {...}
}
```
| Field Name | Type | Required | Description |
|---|---|---|---|
| topicId | String | No | Log topic ID. |
| topicName | String | No | Log topic name. Used to specify the log topic when topicId is empty: the topic ID is pulled automatically by name, or a log topic is created automatically if none exists. |
| logsetId | String | No | Logset ID. |
| logsetName | String | No | Logset name. Used to specify the logset when logsetId is empty: the logset ID is pulled automatically by name, or a logset is created automatically if none exists. |
| region | String | No | The region where the log topic is located. Defaults to the cluster's region. |
| logType | String | No | Log type. Valid values: json_log (JSON format), delimiter_log (delimiter extraction), minimalist_log (single-line full text), multiline_log (multi-line full text), fullregex_log (full regex). Default: minimalist_log. |
| logFormat | String | No | Log format. |
| storageType | String | No | Log topic storage class (takes effect only when a topic is created automatically). Valid values: hot (STANDARD), cold (STANDARD_IA). Default: hot. |
| hotPeriod | Number | No | Log settlement switch (takes effect only when a topic is created automatically). 0: disable settlement; non-zero: enable settlement and set the number of days in STANDARD storage. hotPeriod must be greater than or equal to 7 and less than period. Takes effect only when storageType is hot. |
| period | Number | No | Log retention period in days (takes effect only when a topic is created automatically). Value range: 1 to 3,600 days for STANDARD, 7 to 3,600 days for STANDARD_IA. A value of 3,640 means permanent storage. Default: 30 days. |
| partitionCount | Number | No | Number of log topic partitions (takes effect only when a topic is created automatically). 1 partition is created by default; the maximum is 10. |
| tags | Array | No | Tag list (takes effect only when a topic is created automatically). |
| autoSplit | Bool | No | Whether to enable automatic partition splitting: true or false (takes effect only when a topic is created automatically). |
| maxSplitPartitions | Number | No | Maximum number of partitions allowed per log topic after automatic splitting is enabled. Default: 50 (takes effect only when a topic is created automatically). |
| extractRule | Object | No | Collection (extraction) rule configuration. |
| excludePaths | Array | No | Collection blacklist configuration. |
| fullTextIndex | Object | No | Full-text index configuration (takes effect only when a topic is created automatically). |
| indexs | Array | No | Index configuration (takes effect only when a topic is created automatically). |
| indexStatus | String | No | Index status (takes effect only when a topic is created automatically). |
| autoIndex | String | No | Whether to enable automatic indexing (takes effect only when a topic is created automatically). |
| advancedConfig | Object | No | Advanced collection configuration. |
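As a sketch of the auto-creation behavior described above, a minimal clsDetail can reference the logset and topic by name rather than by ID; the IDs are then resolved (or the resources created) automatically. The names and values below are illustrative only:

```json
{
  "logsetName": "my-k8s-logs",
  "topicName": "nginx-access",
  "logType": "minimalist_log",
  "period": 30,
  "storageType": "hot"
}
```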
```json
{
  "key": "key",
  "value": "value"
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| key | String | Yes | Tag key. |
| value | String | Yes | Tag value. |
```json
{
  "timeKey": "xxx",
  "timeFormat": "xxx",
  "delimiter": "xxx",
  "logRegex": "xxx",
  "beginningRegex": "xxx",
  "keys": ["xx"],
  "filterKeys": ["xx"],
  "filterRegex": ["xx"],
  "unMatchUpload": "true",
  "unMatchedKey": "parseFailed",
  "backtracking": "-1",
  "isGBK": "true",
  "jsonStandard": "true",
  "advancedFilters": [
    {
      "key": "level",
      "rule": 0,
      "value": "info"
    }
  ]
}
```
| Field Name | Type | Required | Description |
|---|---|---|---|
| timeKey | String | No | Key name of the time field. timeKey and timeFormat must be configured in pairs. |
| timeFormat | String | No | Format of the time field. For details, see the format description of the strftime function in the C standard library. |
| delimiter | String | No | Delimiter for delimited logs. Valid only when logType is delimiter_log. |
| logRegex | String | No | Full-log matching rule. Valid only when logType is fullregex_log. |
| beginningRegex | String | No | First-line matching rule. Valid only when logType is multiline_log or fullregex_log. |
| keys | Array | No | Key name of each extracted field. An empty key means the field is discarded. Valid only when logType is delimiter_log. json_log logs use the keys from the JSON itself. |
| filterKeys | Array | No | Keys of the logs to filter. Corresponds to filterRegex by index. |
| filterRegex | Array | No | Regular expressions corresponding to the keys of the logs to filter. Corresponds to filterKeys by index. |
| unMatchUpload | String | No | Whether to upload logs that failed to be parsed. true: upload; false: do not upload. |
| unMatchedKey | String | No | Key under which logs that failed to be parsed are uploaded. |
| backtracking | String | No | Amount of data to backtrack in incremental collection mode. The default value is -1 (full collection); 0 means incremental collection. |
| isGBK | String | No | Whether the logs are GBK-encoded. false: no; true: yes. Note: This field may return null, indicating that no valid value was obtained. |
| jsonStandard | String | No | Whether the logs are standard JSON. false: no; true: yes. Note: This field may return null, indicating that no valid value was obtained. |
| advancedFilters | Array | No | Advanced filter rules. Applies only to collection component v1.1.15 and later; for earlier versions, use filterKeys and filterRegex. |
```json
{
  "key": "level",   // The key to be filtered
  "rule": 0,        // Filter rule: 0 (equal to), 1 (field exists), 2 (field does not exist), 3 (not equal to)
  "value": "info"   // The value to be filtered; not required when rule is 1 or 2
}
```
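Putting several of the extractRule fields above together, a hypothetical rule for delimiter-separated logs such as `2024-01-01 12:00:00|INFO|request ok` might look like the following. The delimiter, key names, and time format are illustrative only:

```json
{
  "delimiter": "|",
  "keys": ["time", "level", "message"],
  "timeKey": "time",
  "timeFormat": "%Y-%m-%d %H:%M:%S",
  "unMatchUpload": "true",
  "unMatchedKey": "parseFailed"
}
```

With this sketch, lines that do not split into the three expected fields would still be uploaded under the parseFailed key because unMatchUpload is true.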
```json
{
  "excludeType": "File",
  "value": "/var/log/1.log"
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| excludeType | String | Yes | Exclusion type. Valid values: File, Path. |
| value | String | Yes | The excluded file path (when excludeType is File) or directory path (when excludeType is Path). |
```json
{
  "caseSensitive": false,
  "containZH": false,
  "status": "on",
  "tokenizer": "@&()='\",;:<>[]{}\\/ \\n\\t\\r"
}
```
| Field Name | Type | Required | Description |
|---|---|---|---|
| caseSensitive | bool | No | Whether the index is case-sensitive. |
| containZH | bool | No | Whether to include Chinese characters. |
| status | String | No | Full-text index switch. If not set, full-text indexing is enabled by default. If set to on and the other parameters are not set, caseSensitive and tokenizer take their default values. If set to off, full-text indexing is disabled. |
| tokenizer | String | No | Segmentation characters for the full-text index. Must be set when the full-text index switch is on. The default value is `@&()='",;:<>[]{}/ \n\t\r`. |
```json
{
  "indexName": "xxx",
  "indexType": "text",
  "tokenizer": "@&()='\",;:<>[]{}/ \\n\\t\\r",
  "sqlFlag": true,
  "containZH": false
}
```
| Field Name | Type | Required | Description |
|---|---|---|---|
| indexName | String | Yes | Index name. |
| indexType | String | Yes | Index type. Valid values: long, text, double. |
| tokenizer | String | No | Field segmentation characters. |
| sqlFlag | bool | No | Whether to enable the field analysis feature. |
| containZH | bool | No | Whether to include Chinese characters. |
```json
{
  "ClsAgentMaxDepth": 1,
  "ClsAgentFileTimeout": 60,
  "ClsAgentParseFailMerge": false
}
```
| Field Name | Type | Required | Description |
|---|---|---|---|
| ClsAgentMaxDepth | Number | No | Maximum directory depth to traverse for this collection configuration. |
| ClsAgentFileTimeout | Number | No | File timeout for this collection configuration: if a collected file is not written to within the specified period, the collector releases its file handle. |
| ClsAgentParseFailMerge | bool | No | Whether to merge logs that failed to be parsed before reporting them. |
```json
{
  "brokers": "127.0.0.1:9092",
  "topic": "test_log",
  "timestampKey": "@timestamp",
  "timestampFormat": "double",
  "extractRule": {
    "beginningRegex": "xxxx"
  },
  "logType": "minimalist_log",
  "metadata": {
    "formatType": "json"
  }
}
```
| Field Name | Type | Required | Description |
|---|---|---|---|
| brokers | String | Yes | Kafka broker addresses. |
| topic | String | Yes | Kafka topic. |
| timestampKey | String | No | Key of the timestamp. Default: @timestamp. |
| timestampFormat | String | No | Format of the timestamp. Valid values: double, iso8601. Default: double. |
| extractRule.beginningRegex | String | No | First-line matching rule. Required when logType is multiline_log. |
| logType | String | No | Log extraction pattern. Valid values: minimalist_log (single-line full text), multiline_log (multi-line full text), json (JSON format). Default: minimalist_log (single-line full text). |
| metadata.formatType | String | No | Metadata format. Valid values: fluent-bit (fluent-bit native format), fluent-bit-string (string type). |
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "metadata": {
    "name": "test"
  },
  "spec": {
    "inputDetail": {
      "containerStdout": {
        "allContainers": true,
        "namespace": "default"
      },
      "type": "container_stdout"
    },
    ...
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "spec": {
    "inputDetail": {
      "containerStdout": {
        "allContainers": false,
        "workloads": [
          {
            "kind": "deployment",
            "name": "ingress-gateway",
            "namespace": "production"
          }
        ]
      },
      "type": "container_stdout"
    },
    ...
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "spec": {
    "inputDetail": {
      "containerStdout": {
        "allContainers": false,
        "includeLabels": {
          "k8s-app": "nginx"
        },
        "namespace": "production"
      },
      "type": "container_stdout"
    },
    ...
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "metadata": {
    "name": "test"
  },
  "spec": {
    "inputDetail": {
      "containerFile": {
        "container": "nginx",
        "filePaths": [
          {
            "file": "access.log",
            "path": "/data/nginx/log"
          }
        ],
        "namespace": "production",
        "workload": {
          "kind": "deployment",
          "name": "ingress-gateway"
        }
      },
      "type": "container_file"
    },
    ...
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "spec": {
    "inputDetail": {
      "containerFile": {
        "container": "nginx",
        "filePaths": [
          {
            "file": "access.log",
            "path": "/data/nginx/log"
          }
        ],
        "includeLabels": {
          "k8s-app": "ingress-gateway"
        },
        "namespace": "production"
      },
      "type": "container_file"
    }
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "spec": {
    "inputDetail": {
      "containerStdout": {
        "allContainers": false,
        "includeLabels": {
          "k8s-app": "nginx"
        },
        "namespace": "production"
      },
      "type": "container_stdout"
    },
    ...
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "metadata": {
    "name": "123"
  },
  "spec": {
    "clsDetail": {
      "extractRule": {
        "backtracking": "0",
        "isGBK": "false",
        "jsonStandard": "false",
        "unMatchUpload": "false"
      },
      "indexs": [
        { "indexName": "namespace" },
        { "indexName": "pod_name" },
        { "indexName": "container_name" }
      ],
      "logFormat": "default",
      "logType": "minimalist_log",
      "maxSplitPartitions": 0,
      "region": "ap-chengdu",
      "storageType": "",
      "topicId": "c26b66bd-617e-4923-bea0-test"
    },
    "inputDetail": {
      "containerStdout": {
        "metadataContainer": ["namespace", "pod_name", "pod_ip", "pod_uid", "container_id", "container_name", "image_name", "cluster_id"],
        "nsLabelSelector": "",
        "workloads": [
          {
            "kind": "deployment",
            "name": "testlog1",
            "namespace": "default"
          }
        ]
      },
      "type": "container_stdout"
    }
  }
}
```
```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "metadata": {
    "name": "321"
  },
  "spec": {
    "inputDetail": {
      "containerStdout": {
        "allContainers": true,
        "namespace": "default",
        "nsLabelSelector": ""
      },
      "type": "container_stdout"
    },
    "kafkaDetail": {
      "brokers": "127.0.0.1:9092",
      "extractRule": {},
      "instanceId": "",
      "kafkaType": "SelfBuildKafka",
      "logType": "minimalist_log",
      "messageKey": {
        "value": "",
        "valueFrom": {
          "fieldRef": {
            "fieldPath": ""
          }
        }
      },
      "metadata": {},
      "timestampFormat": "double",
      "timestampKey": "",
      "topic": "test"
    }
  }
}
```
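The examples above cover container_stdout and container_file collection. For completeness, a host_file rule following the hostFile schema described earlier might look like the sketch below; the rule name, paths, labels, and resource IDs are illustrative placeholders, not real values:

```json
{
  "apiVersion": "cls.cloud.tencent.com/v1",
  "kind": "LogConfig",
  "metadata": {
    "name": "host-log-example"
  },
  "spec": {
    "inputDetail": {
      "type": "host_file",
      "hostFile": {
        "logPath": "/var/log/app",
        "filePattern": "*.log",
        "customLabels": {
          "module": "payment"
        }
      }
    },
    "clsDetail": {
      "logsetId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "topicId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "logType": "minimalist_log"
    }
  }
}
```

Recall from the inputDetail field table that host_file collection is not supported on super nodes.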