Reason: The CPU cores and memory assigned to a node are calculated from the CPU and memory requests of each Pod on the node, but the requests of failed Pods are not subtracted in this calculation.
Example: A node's specification is 4 CPU cores and 8 GB of memory, and three Pods are on it with the following resource requests:

Pod | CPU request | Memory request | Status |
---|---|---|---|
Pod 1 | 2 cores | 4 GB | Normal |
Pod 2 | 1 core | 2 GB | Normal |
Pod 3 | 0.5 core | 1 GB | Failed |

From the scheduler's perspective, the idle resources on the node are 4 - 2 - 1 = 1 CPU core and 8 - 4 - 2 = 2 GB of memory (the failed Pod 3 is ignored). Pod 4 requests 0.8 CPU core and 1.5 GB of memory, which meets the scheduler's requirements, so it is scheduled to the node normally. At this point, the node has four Pods, three normal ones and one failed one, and is shown as being assigned 2 + 1 + 0.5 + 0.8 = 4.3 CPU cores and 4 + 2 + 1 + 1.5 = 8.5 GB of memory. Because the requests of the failed Pod are not subtracted in this calculation, the displayed assignment exceeds the node's specification.
This problem was fixed in the version released in May: the requests of failed Pods are now subtracted when calculating the resources assigned to the node.
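The two calculations above can be sketched as follows. This is an illustrative helper, not TKE's actual code; the Pod request values match the example.

```python
# Sketch of the node-assignment calculation described above; illustrative only.
# Each tuple is (cpu_request_cores, mem_request_gb, failed).
pods = [
    (2.0, 4.0, False),  # Pod 1, normal
    (1.0, 2.0, False),  # Pod 2, normal
    (0.5, 1.0, True),   # Pod 3, failed
    (0.8, 1.5, False),  # Pod 4, newly scheduled
]

def assigned(pods, skip_failed):
    """Sum Pod requests; skip_failed=True models the post-fix behavior."""
    cpu = sum(c for c, _, f in pods if not (skip_failed and f))
    mem = sum(m for _, m, f in pods if not (skip_failed and f))
    return round(cpu, 2), round(mem, 2)

print(assigned(pods, skip_failed=False))  # (4.3, 8.5): exceeds the 4-core / 8 GB node
print(assigned(pods, skip_failed=True))   # (3.8, 7.5): failed Pod excluded after the fix
```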
Why is the `k8s_workload_abnormal` monitoring metric abnormal?

Reason: The metric's status depends on whether the Pods of the workload are normal, and whether a Pod is normal depends on the four condition types in `pod.status.conditions` (`PodScheduled`, `Initialized`, `ContainersReady`, and `Ready`). `k8s_workload_abnormal` is considered normal only when all four conditions are `True` at the same time; otherwise, it is considered abnormal.
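The decision rule above can be expressed as a short sketch. This assumes the four standard Kubernetes Pod condition types; the helper functions are hypothetical, not the agent's actual code.

```python
# Hypothetical sketch of the Pod-normality check behind k8s_workload_abnormal.
# A Pod counts as normal only if all four standard condition types are "True".
REQUIRED_CONDITIONS = {"PodScheduled", "Initialized", "ContainersReady", "Ready"}

def pod_is_normal(pod_status):
    """pod_status: dict shaped like pod.status, with a 'conditions' list."""
    true_types = {
        c["type"] for c in pod_status.get("conditions", []) if c.get("status") == "True"
    }
    return REQUIRED_CONDITIONS <= true_types  # all four must be True

def workload_abnormal(pod_statuses):
    """The metric is abnormal if any Pod of the workload is not normal."""
    return any(not pod_is_normal(s) for s in pod_statuses)

healthy = {"conditions": [{"type": t, "status": "True"} for t in REQUIRED_CONDITIONS]}
unready = {"conditions": [{"type": "PodScheduled", "status": "True"},
                          {"type": "Ready", "status": "False"}]}
print(workload_abnormal([healthy]))           # False: all four conditions are True
print(workload_abnormal([healthy, unready]))  # True: one Pod is not normal
```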
`tke-monitor-agent` DaemonSet errors

Error | Cause | Solution |
---|---|---|
The domain name `receiver.barad.tencentyun.com` failed to be resolved, so metrics failed to be reported and the cluster had no monitoring data. | The node's DNS configuration was modified. | Add a `hostAliases` entry to the `tke-monitor-agent` DaemonSet. |
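A minimal sketch of such a `hostAliases` patch is shown below. The IP address is a placeholder, not the real address of the receiver; replace it with the address that `receiver.barad.tencentyun.com` should resolve to in your environment.

```yaml
# Hypothetical sketch: pin receiver.barad.tencentyun.com in the Pod's /etc/hosts.
# Replace <RECEIVER_IP> with the actual IP of the metric receiver in your network.
spec:
  template:
    spec:
      hostAliases:
        - ip: "<RECEIVER_IP>"
          hostnames:
            - "receiver.barad.tencentyun.com"
```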