Field Name | Description |
--- | --- |
Specification amplification | Virtually amplifies the native node's specifications so that more Pods can be scheduled, allowing the node packing rate to exceed 100%. Note: This feature only amplifies the specifications of the current native node virtually, and it carries certain risks: if too many Pods are scheduled onto a node and their total actual resource usage exceeds the node's real specifications, Pod eviction and rescheduling will be triggered. |
Static amplification factor - CPU | Enter a number between 1 and 3, with up to one decimal place. |
Static amplification factor - Memory | Enter a number between 1 and 2, with up to one decimal place. |
Scheduling threshold control | Sets the target resource utilization of the current native node during scheduling to ensure stability. When Pods are scheduled, nodes whose utilization is above this threshold will not be selected. The scheduling threshold also serves as the default for: 1. The eviction-stop threshold: when the node utilization reaches the runtime threshold and eviction starts, Pods are evicted until the utilization falls back to the scheduling threshold. For example, if the runtime threshold is 80 and the scheduling threshold is 60, eviction starts once the node utilization reaches 80, and evictable Pods are evicted one by one in order of their utilization until the node utilization drops below 60. 2. The low-load node threshold: when the node utilization reaches the runtime threshold and eviction is about to occur, the scheduler checks whether at least three native nodes have a utilization lower than their own scheduling thresholds. For example, with 5 native nodes that all have a runtime threshold of 80 and a scheduling threshold of 60, eviction checks start once a node's utilization reaches 80; for eviction to proceed, at least three native nodes must have a utilization below 60 so that the evicted Pods have low-load nodes to land on. Note: If the native nodes have different scheduling thresholds, each node's utilization is compared against that node's own scheduling threshold. |
Target CPU utilization during scheduling | Enter an integer between 0 and 100. |
Target memory utilization during scheduling | Enter an integer between 0 and 100. |
Runtime threshold control | Sets the target resource utilization of the current native node at runtime to ensure stability. While the node is running, a utilization above this threshold may trigger eviction, provided that the designated workloads carry the annotation marking their Pods as evictable. Note: 1. Business Pods are not evicted by default. To avoid evicting critical Pods, this feature does not evict any Pod unless its workload is explicitly marked as evictable; only then will the component evict Pods when the threshold is triggered. For example, set the annotation descheduler.alpha.kubernetes.io/evictable: 'true' on the StatefulSet or Deployment object (see the example after this table). 2. Eviction occurs only when there are enough low-load nodes. To ensure that evicted Pods have somewhere to land, the scheduler requires at least three native nodes whose utilization is below their scheduling thresholds. Therefore, even if the cluster has many native nodes, eviction cannot be performed when fewer than three native nodes have the threshold enabled or fewer than three native nodes are below their scheduling thresholds. |
Runtime target CPU utilization | Enter an integer between 0 and 100. Note: The runtime target CPU utilization must be greater than the scheduling target CPU utilization. |
Runtime target memory utilization | Enter an integer between 0 and 100. Note: The runtime target memory utilization must be greater than the scheduling target memory utilization. |
Eviction-stop threshold control | Starting from native node scheduler v1.4.0, the eviction-stop threshold can be configured visually in the console. When a node is under high load, Pods are continuously evicted from it until its utilization falls below the eviction-stop threshold. |
Eviction-stop target CPU utilization | Enter an integer between 0 and 100. Note: The eviction-stop target CPU utilization must be lower than the runtime target CPU utilization. |
Eviction-stop target memory utilization | Enter an integer between 0 and 100. Note: The eviction-stop target memory utilization must be lower than the runtime target memory utilization. |
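The following is a minimal sketch of marking a workload's Pods as evictable with the annotation named above. The Deployment name, labels, and image are illustrative only:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: evictable-demo                                    # Illustrative workload name
  annotations:
    descheduler.alpha.kubernetes.io/evictable: 'true'     # Allow the descheduler to evict this workload's Pods
spec:
  replicas: 2
  selector:
    matchLabels:
      app: evictable-demo
  template:
    metadata:
      labels:
        app: evictable-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```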
The node annotations expansion.scheduling.crane.io/cpu and expansion.scheduling.crane.io/memory record the amplification factors applied to native nodes. An example is as follows:

```
kubectl describe node 10.8.22.108
...
Annotations:  expansion.scheduling.crane.io/cpu: 1.5       # CPU amplification factor
              expansion.scheduling.crane.io/memory: 1.2    # Memory amplification factor
...
Allocatable:
  cpu:                1930m         # Original schedulable resource amount of the node
  ephemeral-storage:  47498714648
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1333120Ki
  pods:               253
...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                960m (49%)        8100m (419%)        # Request and Limit occupancy on this node
  memory             644465536 (47%)   7791050368 (570%)
  ephemeral-storage  0 (0%)            0 (0%)
  hugepages-1Gi      0 (0%)            0 (0%)
  hugepages-2Mi      0 (0%)            0 (0%)
...
```
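If you only want the two amplification annotations rather than the full describe output, a jsonpath query along the following lines can be used (dots inside the annotation keys are escaped with `\.`; the node name is taken from the example above):

```
kubectl get node 10.8.22.108 -o jsonpath='{.metadata.annotations.expansion\.scheduling\.crane\.io/cpu}{"\n"}{.metadata.annotations.expansion\.scheduling\.crane\.io/memory}{"\n"}'
```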
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: test-scheduler
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:                           # Specify node scheduling
        kubernetes.io/hostname: 10.8.22.108   # Specify the native node used in this example
      containers:
      - name: nginx
        image: nginx:1.14.2
        resources:
          requests:
            cpu: 1500m   # The requested amount is greater than the node's schedulable amount before amplification but less than the schedulable amount after amplification.
        ports:
        - containerPort: 80
```
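Assuming the manifest above is saved as test-scheduler.yaml (an illustrative file name), it can be applied as follows before checking the Deployment status below:

```
kubectl apply -f test-scheduler.yaml
```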
```
kubectl get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
test-scheduler   1/1     1            1           2m32s
```
```
kubectl describe node 10.8.22.108
...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                2460m (127%)      8100m (419%)        # Request and Limit occupancy on this node. The total Request now exceeds the original schedulable amount, so the node specification was amplified successfully.
  memory             644465536 (47%)   7791050368 (570%)
  ephemeral-storage  0 (0%)            0 (0%)
  hugepages-1Gi      0 (0%)            0 (0%)
  hugepages-2Mi      0 (0%)            0 (0%)
```
```yaml
apiVersion: scheduling.crane.io/v1alpha1
kind: ClusterNodeResourcePolicy
metadata:
  name: housekeeper-policy-np-88888888-55555   # The name cannot be duplicated.
spec:
  applyMode: annotation   # Default value. Other values are not supported for now.
  # nodeSelector selects a group of nodes with the same configuration. The example below selects the native node with the ID np-88888888-55555. You can also use a label shared by a batch of nodes to select them in bulk.
  nodeSelector:
    matchLabels:
      cloud.tencent.com/node-instance-id: np-88888888-55555
  template:
    spec:
      # evictLoadThreshold is the runtime threshold control of native nodes. It sets the eviction resource utilization for the current batch of native nodes to ensure stability. While Pods are running, native nodes above this threshold may trigger eviction. To avoid evicting critical Pods, this feature does not evict Pods by default. For Pods that can be evicted, users need to explicitly mark their workloads, for example by setting the annotation descheduler.alpha.kubernetes.io/evictable: 'true' on the StatefulSet or Deployment object.
      # evictLoadThreshold shall be greater than targetLoadThreshold; otherwise, new Pods may keep being scheduled onto the node after eviction, causing jitter.
      evictLoadThreshold:
        percents:
          cpu: 80
          memory: 80
      resourceExpansionStrategyType: static   # Default value. Other values are not supported for now.
      staticResourceExpansion:
        # ratios is the node specification amplification factor. It sets the amplification factor for the current batch of native nodes.
        ratios:
          cpu: "3"      # Amplification factor of the node CPU. Do not set it too high; otherwise, it may cause stability risks. The maximum value is limited to 3 in the console.
          memory: "2"   # Amplification factor of the node memory. Do not set it too high; otherwise, it may cause stability risks. The maximum value is limited to 2 in the console.
      # targetLoadThreshold is the scheduling threshold control of native nodes. It sets the target resource utilization for the current batch of native nodes to ensure stability. Native nodes above this threshold will not be selected during Pod scheduling.
      # targetLoadThreshold shall be less than evictLoadThreshold; otherwise, new Pods may keep being scheduled onto the node after eviction, causing jitter.
      targetLoadThreshold:
        percents:
          cpu: 70
          memory: 70
```
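Assuming the policy above is saved as housekeeper-policy.yaml (an illustrative file name), it can be applied and then verified by checking that the amplification annotations appear on the matched node:

```
kubectl apply -f housekeeper-policy.yaml
kubectl get clusternoderesourcepolicy housekeeper-policy-np-88888888-55555
kubectl describe node <node-name> | grep expansion.scheduling.crane.io
```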
```yaml
apiVersion: scheduling.crane.io/v1alpha1
kind: ClusterNodeResourcePolicy
metadata:
  name: housekeeper-policy-np-88888888-55555   # The name cannot be duplicated.
spec:
  applyMode: annotation   # Default value. Other values are not supported for now.
  # nodeSelector selects a group of nodes with the same configuration. The example below selects the native node with the ID np-88888888-55555. You can also use a label shared by a batch of nodes to select them in bulk.
  nodeSelector:
    matchLabels:
      cloud.tencent.com/node-instance-id: np-88888888-55555
  template:
    spec:
      resourceExpansionStrategyType: auto   # Enable dynamic node amplification.
      autoResourceExpansion:
        crontab: 0 * * * 0-6
        decayLife: 168h
        # Parameters specific to dynamic node amplification.
        maxRatios:
          cpu: "2"
          memory: "1.5"
        minRatios:
          cpu: "1.0"
          memory: "1.0"
        targetLoadThreshold:
          percents:
            cpu: 50
            memory: 70
      # evictLoadThreshold is the runtime threshold control of native nodes. It sets the eviction resource utilization for the current batch of native nodes to ensure stability. While Pods are running, native nodes above this threshold may trigger eviction. To avoid evicting critical Pods, this feature does not evict Pods by default. For Pods that can be evicted, users need to explicitly mark their workloads, for example by setting the annotation descheduler.alpha.kubernetes.io/evictable: 'true' on the StatefulSet or Deployment object.
      # evictLoadThreshold shall be greater than targetLoadThreshold; otherwise, new Pods may keep being scheduled onto the node after eviction, causing jitter.
      evictLoadThreshold:
        percents:
          cpu: 80
          memory: 80
      # targetLoadThreshold is the scheduling threshold control of native nodes. It sets the target resource utilization for the current batch of native nodes to ensure stability. Native nodes above this threshold will not be selected during Pod scheduling.
      # targetLoadThreshold shall be less than evictLoadThreshold; otherwise, new Pods may keep being scheduled onto the node after eviction, causing jitter.
      targetLoadThreshold:
        percents:
          cpu: 70
          memory: 70
```
```yaml
apiVersion: scheduling.crane.io/v1alpha1
kind: ClusterNodeResourcePolicy
metadata:
  name: housekeeper-policy-np-88888888-55555   # The name cannot be duplicated.
spec:
  applyMode: annotation   # Default value. Other values are not supported for now.
  # nodeSelector selects a group of nodes with the same configuration. The example below selects the native node with the ID np-88888888-55555. You can also use a label shared by a batch of nodes to select them in bulk.
  nodeSelector:
    matchLabels:
      cloud.tencent.com/node-instance-id: np-88888888-55555
  template:
    spec:
      # evictLoadThreshold is the runtime threshold control of native nodes. It sets the eviction resource utilization for the current batch of native nodes to ensure stability. While Pods are running, native nodes above this threshold may trigger eviction. To avoid evicting critical Pods, this feature does not evict Pods by default. For Pods that can be evicted, users need to explicitly mark their workloads, for example by setting the annotation descheduler.alpha.kubernetes.io/evictable: 'true' on the StatefulSet or Deployment object.
      # evictLoadThreshold shall be greater than targetLoadThreshold; otherwise, new Pods may keep being scheduled onto the node after eviction, causing jitter.
      evictLoadThreshold:
        percents:
          cpu: 80
          memory: 80
      # Eviction-stop watermark. The value shall be lower than the runtime threshold set in evictLoadThreshold.
      evictTargetLoadThreshold:
        percents:
          cpu: 75
          memory: 75
      resourceExpansionStrategyType: static   # Default value. Other values are not supported for now.
      staticResourceExpansion:
        # ratios is the node specification amplification factor. It sets the amplification factor for the current batch of native nodes.
        ratios:
          cpu: "3"      # Amplification factor of the node CPU. Do not set it too high; otherwise, it may cause stability risks. The maximum value is limited to 3 in the console.
          memory: "2"   # Amplification factor of the node memory. Do not set it too high; otherwise, it may cause stability risks. The maximum value is limited to 2 in the console.
      # targetLoadThreshold is the scheduling threshold control of native nodes. It sets the target resource utilization for the current batch of native nodes to ensure stability. Native nodes above this threshold will not be selected during Pod scheduling.
      # targetLoadThreshold shall be less than evictLoadThreshold; otherwise, new Pods may keep being scheduled onto the node after eviction, causing jitter.
      targetLoadThreshold:
        percents:
          cpu: 70
          memory: 70
```