Items in `[ ]` are optional.

```yaml
# Scrape task name. The label `job=<job_name>` will be added to the metrics scraped by this task
job_name: <job_name>

# Scrape task interval
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Scrape request timeout period
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# Scrape task request URI path
[ metrics_path: <path> | default = /metrics ]

# Resolve conflicts between scraped labels and labels added by the Prometheus backend
# true: Retain the scraped label and ignore the conflicting backend label
# false: Prefix the scraped label with `exported_` and add the backend label
[ honor_labels: <boolean> | default = false ]

# Whether to use the timestamps generated on the scrape target
# true: Use the timestamps on the target
# false: Ignore the timestamps on the target
[ honor_timestamps: <boolean> | default = true ]

# Scrape protocol: HTTP or HTTPS
[ scheme: <scheme> | default = http ]

# URL parameters of the scrape request
params:
  [ <string>: [<string>, ...] ]

# Use `basic_auth` to set `Authorization` in the scrape request header. `password` and `password_file` are mutually exclusive; the value in `password_file` takes precedence
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Use `bearer_token` to set `Authorization` in the scrape request header. `bearer_token` and `bearer_token_file` are mutually exclusive; the value in `bearer_token` takes precedence
[ bearer_token: <secret> ]

# Use `bearer_token_file` to set `Authorization` in the scrape request header. `bearer_token` and `bearer_token_file` are mutually exclusive; the value in `bearer_token` takes precedence
[ bearer_token_file: <filename> ]

# Specify whether the scrape connection uses a TLS secure channel and configure the corresponding TLS parameters
tls_config:
  [ <tls_config> ]

# Scrape metrics on the target through a proxy service at the specified address
[ proxy_url: <string> ]

# Specify targets with static configuration. For details, see the description below
static_configs:
  [ - <static_config> ... ]

# CVM scrape configuration. For details, see the description below
cvm_sd_configs:
  [ - <cvm_sd_config> ... ]

# After the targets are scraped, use the relabeling mechanism to change their labels; multiple relabeling rules run in sequence
# For details on `relabel_config`, see the description below
relabel_configs:
  [ - <relabel_config> ... ]

# After the data is scraped and before it is written, use the relabeling mechanism to change label values; multiple relabeling rules run in sequence
# For details on `relabel_config`, see the description below
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Limit on the number of data points in one scrape. 0: no limit. Default value: 0
[ sample_limit: <int> | default = 0 ]

# Limit on the number of targets in one scrape. 0: no limit. Default value: 0
[ target_limit: <int> | default = 0 ]
```
static_config configuration

```yaml
# Target host values, such as `ip:port`
targets:
  [ - '<host>' ]

# Labels added to all targets, similar to global labels
labels:
  [ <labelname>: <labelvalue> ... ]
```
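For instance, a `static_config` that scrapes two node exporters and stamps both with shared labels might look like the following sketch (the host addresses and label values are hypothetical):

```yaml
static_configs:
  - targets:
      - '10.0.0.1:9100'   # hypothetical exporter addresses
      - '10.0.0.2:9100'
    labels:
      env: production     # added to every target in this group
      team: infra
```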
cvm_sd_config configuration

| Label | Description |
| --- | --- |
| __meta_cvm_instance_id | Instance ID |
| __meta_cvm_instance_name | Instance name |
| __meta_cvm_instance_state | Instance status |
| __meta_cvm_instance_type | Instance model |
| __meta_cvm_OS | Instance OS |
| __meta_cvm_private_ip | Private IP |
| __meta_cvm_public_ip | Public IP |
| __meta_cvm_vpc_id | VPC ID |
| __meta_cvm_subnet_id | Subnet ID |
| __meta_cvm_tag_<tagkey> | Instance tag value |
| __meta_cvm_region | Instance region |
| __meta_cvm_zone | Instance AZ |
```yaml
# Tencent Cloud region. For the region list, visit https://cloud.tencent.com/document/api/213/15692#.E5.9C.B0.E5.9F.9F.E5.88.97.E8.A1.A8
region: <string>

# Custom endpoint
[ endpoint: <string> ]

# Credential information for accessing the TencentCloud API. If it is not set, the values of the `TENCENT_CLOUD_SECRET_ID` and `TENCENT_CLOUD_SECRET_KEY` environment variables are used
# Leave it empty if you use a CVM scrape task in the **Integration Center** for configuration
[ secret_id: <string> ]
[ secret_key: <secret> ]

# CVM list refresh interval
[ refresh_interval: <duration> | default = 60s ]

# Ports for scraping metrics
ports:
  - [ <int> | default = 80 ]

# CVM list filtering rules. For the supported filtering rules, visit https://www.tencentcloud.com/document/product/213/33258
filters:
  [ - name: <string>
      values: <string>, [...] ]
```
If you use a CVM scrape task in the **Integration Center** to configure `cvm_sd_configs`, the integration automatically uses the preset role authorization of the service for security considerations, so you don't need to manually enter the `secret_id`, `secret_key`, and `endpoint` parameters.

Static configuration example:

```yaml
job_name: prometheus
scrape_interval: 30s
static_configs:
  - targets:
      - 127.0.0.1:9090
```
CVM scrape configuration example:

```yaml
job_name: demo-monitor
cvm_sd_configs:
  - region: ap-guangzhou
    ports:
      - 8080
    filters:
      - name: tag:service
        values:
          - demo
relabel_configs:
  - source_labels: [__meta_cvm_instance_state]
    regex: RUNNING
    action: keep
  - regex: __meta_cvm_tag_(.*)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_cvm_region]
    target_label: region
    action: replace
```
```yaml
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8s resource type, which is PodMonitor here
kind: PodMonitor
# Corresponding K8s metadata. Only `name` matters here. If `jobLabel` is not specified, the value of the job label in the metrics will be `<namespace>/<name>`
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is fixed. Do not change it
# Describes the selection of the scrape target Pods and the configuration of the scrape task
spec:
  # Enter the target Pod label. PodMonitor uses the corresponding value as the job label value
  # To check the Pod YAML configuration, use the value in `pod.metadata.labels`
  # To check a `Deployment/DaemonSet/StatefulSet`, use `spec.template.metadata.labels`
  [ jobLabel: string ]
  # Add the labels of the corresponding Pod to the target labels
  [ podTargetLabels: []string ]
  # Limit on the number of data points in one scrape. 0: no limit. Default value: 0
  [ sampleLimit: uint64 ]
  # Limit on the number of targets in one scrape. 0: no limit. Default value: 0
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP ports to be exposed and scraped. You can configure multiple endpoints
  podMetricsEndpoints:
    [ - <endpoint_config> ... ] # For details, see the endpoint description below
  # Select the namespaces where the Pods to be monitored reside. If it is not specified, all namespaces are selected
  [ namespaceSelector: ]
    # Whether to select all namespaces
    [ any: bool ]
    # List of namespaces to be selected
    [ matchNames: []string ]
  # Enter the labels of the Pods to be monitored to locate the target Pods. For details, see [LabelSelector v1 meta](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta)
  selector:
    [ matchExpressions: array ]
      [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
      [ example: k8s-app: redis-exporter ]
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is fixed. Do not change it
spec:
  podMetricsEndpoints:
    - interval: 30s
      port: metric-port # Enter the name of the corresponding port of the Prometheus exporter in the Pod YAML configuration file
      path: /metrics # Enter the value of the corresponding path of the Prometheus exporter. If it is not specified, it defaults to `/metrics`
      relabelings:
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: instance
          replacement: 'crs-xxxxxx' # Change it to the corresponding Redis instance ID
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: ip
          replacement: '1.x.x.x' # Change it to the corresponding Redis instance IP
  namespaceSelector: # Select the namespaces where the Pods to be monitored reside
    matchNames:
      - redis-test
  selector: # Enter the label value of the Pod to be monitored to locate the target Pods
    matchLabels:
      k8s-app: redis-exporter
```
```yaml
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8s resource type, which is ServiceMonitor here
kind: ServiceMonitor
# Corresponding K8s metadata. Only `name` matters here. If `jobLabel` is not specified, the value of the job label in the metrics will be the Service name
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is fixed. Do not change it
# Describes the selection of the scrape target Pods and the configuration of the scrape task
spec:
  # Enter the target Pod label (metadata/labels). ServiceMonitor uses the corresponding value as the job label value
  [ jobLabel: string ]
  # Add the labels of the corresponding Service to the target labels
  [ targetLabels: []string ]
  # Add the labels of the corresponding Pod to the target labels
  [ podTargetLabels: []string ]
  # Limit on the number of data points in one scrape. 0: no limit. Default value: 0
  [ sampleLimit: uint64 ]
  # Limit on the number of targets in one scrape. 0: no limit. Default value: 0
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP ports to be exposed and scraped. You can configure multiple endpoints
  endpoints:
    [ - <endpoint_config> ... ] # For details, see the endpoint description below
  # Select the namespaces where the Pods to be monitored reside. If it is not specified, all namespaces are selected
  [ namespaceSelector: ]
    # Whether to select all namespaces
    [ any: bool ]
    # List of namespaces to be selected
    [ matchNames: []string ]
  # Enter the labels of the Pods to be monitored to locate the target Pods. For details, see [LabelSelector v1 meta](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta)
  selector:
    [ matchExpressions: array ]
      [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
      [ example: k8s-app: redis-exporter ]
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: go-demo # Enter a unique name
  namespace: cm-prometheus # The namespace is fixed. Do not change it
spec:
  endpoints:
    - interval: 30s
      # Enter the name of the corresponding port of the Prometheus exporter in the Service YAML configuration file
      port: 8080-8080-tcp
      # Enter the value of the corresponding path of the Prometheus exporter. If it is not specified, it defaults to `/metrics`
      path: /metrics
      relabelings:
        # ** There must be a label named `application`. Here, suppose that K8s has a label named `app`
        # Use the `replace` action of `relabel` to replace it with `application`
        - action: replace
          sourceLabels: [__meta_kubernetes_pod_label_app]
          targetLabel: application
  # Select the namespace where the Service to be monitored resides
  namespaceSelector:
    matchNames:
      - golang-demo
  # Enter the label value of the Service to be monitored to locate the target Service
  selector:
    matchLabels:
      app: golang-app-demo
```
endpoint_config configuration

```yaml
# Corresponding port name. Note that this is the port name, not the port number. Default value: 80. Corresponding values are as follows:
# ServiceMonitor: `Service>spec/ports/name`
# PodMonitor:
#   To check the Pod YAML configuration, use the value in `pod.spec.containers.ports.name`
#   To check a `Deployment/DaemonSet/StatefulSet`, use `spec.template.spec.containers.ports.name`
[ port: string | default = 80 ]
# Scrape task request URI path
[ path: string | default = /metrics ]
# Scrape protocol: HTTP or HTTPS
[ scheme: string | default = http ]
# URL parameters of the scrape request
[ params: map[string][]string ]
# Scrape task interval
[ interval: string | default = 30s ]
# Scrape task timeout period
[ scrapeTimeout: string | default = 30s ]
# Specify whether the scrape connection uses a TLS secure channel and configure the corresponding TLS parameters
[ tlsConfig: TLSConfig ]
# Read the bearer token value from the specified file and add it to the header of the scrape request
[ bearerTokenFile: string ]
# Read the bearer token from the corresponding K8s secret key. Note that the secret namespace must be the same as that of the PodMonitor/ServiceMonitor
[ bearerTokenSecret: string ]
# Resolve conflicts between scraped labels and labels added by the Prometheus backend
# true: Retain the scraped label and ignore the conflicting backend label
# false: Prefix the scraped label with `exported_` and add the backend label
[ honorLabels: bool | default = false ]
# Whether to use the timestamps generated on the scrape target
# true: Use the timestamps on the target
# false: Ignore the timestamps on the target
[ honorTimestamps: bool | default = true ]
# `basic auth` authentication information. Enter the corresponding K8s secret key value for `username/password`. Note that the secret namespace must be the same as that of the PodMonitor/ServiceMonitor
[ basicAuth: BasicAuth ]
# Scrape metrics on the target through a proxy service at the specified address
[ proxyUrl: string ]
# After the targets are scraped, use the relabeling mechanism to change their labels; multiple relabeling rules run in sequence
# For details on `relabel_config`, see the description below
relabelings:
  [ - <relabel_config> ... ]
# After the data is scraped and before it is written, use the relabeling mechanism to change label values; multiple relabeling rules run in sequence
# For details on `relabel_config`, see the description below
metricRelabelings:
  [ - <relabel_config> ... ]
```
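As an illustrative sketch, an endpoint that scrapes over HTTPS with basic auth read from a K8s secret might look like this (the port name, secret name, and secret keys are hypothetical; the secret must live in the same namespace as the PodMonitor/ServiceMonitor):

```yaml
endpoints:
  - port: metric-port          # port name, not the port number
    path: /metrics
    scheme: https
    interval: 30s
    tlsConfig:
      insecureSkipVerify: true # skip certificate verification; for testing only
    basicAuth:
      username:
        name: scrape-auth      # hypothetical secret name
        key: username
      password:
        name: scrape-auth
        key: password
```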
relabel_config configuration

```yaml
# Specify which labels to take from the original labels for relabeling. The taken values are concatenated and separated with the symbol defined in `separator`
# The corresponding configuration item for PodMonitor/ServiceMonitor is `sourceLabels`
[ source_labels: '[' <labelname> [, ...] ']' ]
# Separator symbol for concatenating the labels to be relabeled. Default value: `;`
[ separator: <string> | default = ; ]
# If `action` is `replace` or `hashmod`, use `target_label` to specify the corresponding label name
# The corresponding configuration item for PodMonitor/ServiceMonitor is `targetLabel`
[ target_label: <labelname> ]
# Regex for matching the values of the source labels
[ regex: <regex> | default = (.*) ]
# Modulus for the MD5 value of the source labels. The modulo operation is used if `action` is `hashmod`
[ modulus: <int> ]
# If `action` is `replace`, `replacement` defines the expression used after the regex match. You can reference capture groups from the regex
[ replacement: <string> | default = $1 ]
# Action performed based on the regex match. Valid values of `action` are as follows (default value: `replace`):
# replace: If the regex matches, replace the matched value with the one defined in `replacement` and set it on the label specified by `target_label`, adding the label if needed
# keep: Drop the target if the regex does not match
# drop: Drop the target if the regex matches
# hashmod: Set the label specified by `target_label` to the modulus of the MD5 value of the source labels, using the value specified by `modulus`
# labelmap: If the regex matches, replace the corresponding label name with the one specified by `replacement`
# labeldrop: Delete the corresponding label if the regex matches
# labelkeep: Delete the corresponding label if the regex does not match
[ action: <relabel_action> | default = replace ]
```
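The `hashmod` action is typically combined with `keep` to shard targets across scrape jobs. A minimal sketch, assuming the temporary label name `__tmp_shard` (any unused label name works):

```yaml
relabel_configs:
  # Hash each target's address into one of 2 shards
  - source_labels: [__address__]
    modulus: 2
    target_label: __tmp_shard
    action: hashmod
  # Keep only the targets that fall into shard 0
  - source_labels: [__tmp_shard]
    regex: "0"
    action: keep
```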