A super node is not a real node but a scheduling capability built on native Kubernetes. It allows Pods in a standard Kubernetes cluster to be scheduled to a super node, which consumes no server resources in the cluster. In a TKE cluster with the super node feature enabled, Pods that meet the scheduling conditions are scheduled to super nodes maintained by Tencent Kubernetes Engine for Serverless.
Pods deployed on super nodes have the same security isolation as CVM instances, and the same network isolation and connectivity as Pods deployed on existing nodes in the cluster, as shown in the figure below:
If a cluster has super nodes deployed, Pods scheduled to a super node run as elastic containers, which neither occupy the cluster's server node resources nor are limited by the resource ceiling of those nodes.
To help you efficiently manage nodes in a Kubernetes cluster, TKE introduces the concept of a node pool. Basic node pool features allow you to quickly create, manage, and terminate nodes, and to dynamically scale nodes in or out.
Compared with node pools and scaling groups, the scale-out and scale-in process for super nodes skips server purchase, initialization, and return. This greatly improves elasticity speed, minimizes possible scale-out failures, and makes elasticity more efficient.
Super nodes offer second-level elasticity and a serverless, pay-as-you-go product form, which gives them a significant cost advantage.
Use on demand to reduce the cluster resource buffer. Because the specifications of Pods scheduled to real nodes rarely match the node specifications exactly, there are always fragmented resources that cannot be used but are still billed; for example, an 8-core node can host only two 3-core Pods, leaving 2 cores idle yet paid for. Super node resources are allocated on demand, so no fragments are produced, which improves overall cluster resource utilization, reduces the buffer, and saves costs.
Reduce the billing duration of elastic resources and save costs. Because super nodes scale out in seconds and scale in instantly, elastic resources are billed for a shorter time, greatly reducing scaling costs.
Super nodes themselves are free of charge; fees are charged based on the resources of the Pods scheduled to them.
Elastic containers on super nodes are billed on a pay-as-you-go basis. Fees are calculated from the amount of configured resources and the actual usage period: specifically, the CPU, GPU, and memory specifications of a workload and its running time. For more information, see Product Pricing.
Generally, a cluster with super nodes enabled automatically scales Pods out to a super node when server node resources are insufficient, and scales in the Pods on the super node first when server node resources become sufficient again. You can also manually schedule Pods to a super node. For details, see Notes on Scheduling Pod to Super Node.
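As an illustrative sketch, manual scheduling typically relies on standard Kubernetes mechanisms such as a nodeSelector and a toleration. The label and taint keys below are assumptions for illustration, not confirmed TKE values; see Notes on Scheduling Pod to Super Node for the exact keys.

```yaml
# Illustrative only: the label and taint keys are assumptions,
# not confirmed TKE values; consult the TKE documentation for the real ones.
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-super-node
spec:
  # Pin the Pod to super nodes via a node label (assumed key/value).
  nodeSelector:
    node.kubernetes.io/instance-type: eklet
  # Tolerate the taint that keeps ordinary Pods off super nodes (assumed key).
  tolerations:
  - key: eks.tke.cloud.tencent.com/eklet
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
    resources:
      # On super nodes, fees follow the Pod's resource specification.
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "1"
        memory: 2Gi
```

Because super nodes bill per Pod, setting explicit requests and limits also fixes the billable specification of the workload.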
For irregular burst traffic, it is difficult to scale nodes in time. If resource specifications are configured with peak traffic as the baseline, only a small portion of the resources is used when traffic is stable, which wastes resources severely. It is recommended to use super nodes, which require no pre-provisioned resources, to handle burst traffic.
For long-running applications with tidal resource loads, super nodes can quickly deploy a large number of Pods without occupying cluster server node resources. When Pods scale out at the service peak, they are first scheduled to ordinary nodes, consuming the reserved node resources, and then scheduled to super nodes, which supply additional temporary resources to the cluster. These resources are automatically returned as the Pods scale in.
For short-term tasks with high resource demands, it is usually necessary to manually scale out a large number of nodes to reserve resources, schedule the Pods, and return the servers after the task completes. Those node resources also carry a buffer, which wastes resources. It is recommended to use super nodes and schedule such Pods to them manually, with no node management required.
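For instance, such a batch task could be expressed as a Kubernetes Job whose Pod template targets super nodes. As with the scheduling notes above, the nodeSelector and toleration keys here are illustrative assumptions rather than confirmed TKE values.

```yaml
# Illustrative sketch: a short-lived batch Job scheduled to super nodes.
# The nodeSelector and toleration keys are assumptions for illustration.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  completions: 100   # run 100 task Pods in total
  parallelism: 20    # up to 20 Pods at a time, all on super nodes
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        node.kubernetes.io/instance-type: eklet   # assumed super-node label
      tolerations:
      - key: eks.tke.cloud.tencent.com/eklet      # assumed super-node taint
        operator: Exists
        effect: NoSchedule
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing && sleep 60"]
        resources:
          requests:
            cpu: "4"
            memory: 8Gi
          limits:
            cpu: "4"
            memory: 8Gi
```

Each Pod is billed only while it runs, and the temporary resources are returned automatically once the Job finishes, with no servers to purchase or return.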