Overview
TKE serverless clusters free you from managing cluster nodes. However, to properly allocate resources and accurately calculate fees, you need to specify resource specifications for Pods when deploying a workload. Tencent Cloud allocates computing resources to the workload and calculates the corresponding fees based on the specified specifications.
When you use the Kubernetes API or kubectl to create a workload in a TKE serverless cluster, you can use annotations to specify resource specifications. If no annotations are provided, the TKE serverless cluster calculates the specifications based on the container parameters set for the workload, such as Request and Limit. For more information, see Specifying Resource Specifications.
Note
- The resource specifications indicate the maximum amount of resources available to the containers in a Pod.
- The following tables list the supported CPU and GPU specifications. Ensure that allocated resources do not exceed the supported specifications.
- The total amount of resources specified by Request for all the containers in a Pod cannot exceed the highest Pod specification.
- The amount of resources specified by Limit for any single container in a Pod cannot exceed the highest Pod specification.
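As a sketch of how annotations specify a Pod's resource specification, the manifest below sets the CPU and memory specification for a Deployment's Pods via Pod-template annotations. The annotation keys follow the `eks.tke.cloud.tencent.com/cpu` and `eks.tke.cloud.tencent.com/mem` convention used by TKE serverless clusters; the values shown are illustrative and must match a supported CPU specification from the tables below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        # Pod specification: 2 vCPUs and 4 GiB of memory (illustrative values;
        # choose a CPU/memory combination listed in the CPU specifications table)
        eks.tke.cloud.tencent.com/cpu: "2"
        eks.tke.cloud.tencent.com/mem: "4Gi"
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 4Gi
```

If these annotations are omitted, the cluster instead derives the Pod specification from the containers' Request and Limit values, as described above.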
CPU Specifications
The following table lists the CPU specifications that TKE serverless clusters provide for Pods in all regions where CPU resources are supported. Each CPU size corresponds to a range of supported memory sizes. Select the CPU specification as needed when creating a workload.
Intel
Star Lake AMD
Based on Tencent Cloud's self-developed Star Lake servers, TKE serverless clusters provide reliable, secure, and stable high performance. For more information, see Standard SA2.
GPU Specifications
The following table lists the GPU specifications that TKE serverless clusters provide for Pods. Each GPU card model and count maps to a set of supported CPU and memory options. Select the GPU specification as needed when creating a workload.
Note
If you want to create, manage, and use GPU workloads using a YAML file, see Annotation Description for reference.
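As a minimal sketch of a GPU workload defined in YAML, the Pod below requests one GPU card through annotations. The `eks.tke.cloud.tencent.com/gpu-type` and `eks.tke.cloud.tencent.com/gpu-count` keys follow the annotation convention of TKE serverless clusters; the card model shown is illustrative and must be one of the models listed in the GPU specifications table (see Annotation Description for the authoritative list of keys).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
  annotations:
    # Request one GPU card of the specified model (illustrative value;
    # the model must appear in the GPU specifications table)
    eks.tke.cloud.tencent.com/gpu-type: "T4"
    eks.tke.cloud.tencent.com/gpu-count: "1"
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:11.8.0-base-ubuntu22.04
    command: ["sleep", "infinity"]
    resources:
      limits:
        nvidia.com/gpu: "1"
```

The CPU and memory allocated to the Pod are then determined by the CPU/memory options associated with the selected GPU model and count.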