```yaml
apiVersion: node.tke.cloud.tencent.com/v1beta1
kind: MachineSet
spec:
  type: Native
  displayName: mstest
  replicas: 2
  autoRepair: true
  deletePolicy: Random
  healthCheckPolicyName: test-all
  instanceTypes:
  - C3.LARGE8
  subnetIDs:
  - subnet-xxxxxxxx
  - subnet-yyyyyyyy
  scaling:
    createPolicy: ZonePriority
    maxReplicas: 100
  template:
    spec:
      displayName: mtest
      runtimeRootDir: /var/lib/containerd
      unschedulable: false
      metadata:
        labels:
          key1: "val1"
          key2: "val2"
      providerSpec:
        type: Native
        value:
          instanceChargeType: PostpaidByHour
          lifecycle:
            preInit: "echo hello"
            postInit: "echo world"
          management:
            hosts:
            - Hostnames:
              - test
              IP: 22.22.22.22
            nameservers:
            - 183.60.83.19
            - 183.60.82.98
            - 8.8.8.8
          metadata:
            creationTimestamp: null
          securityGroupIDs:
          - sg-xxxxxxxx
          systemDisk:
            diskSize: 50
            diskType: CloudPremium
```
| Parameter Module | Parameter | YAML Field | Note |
| --- | --- | --- | --- |
| Launch Configuration | Node Pool Type | Field name: `spec.type`<br>Field value: `Native` | `Native` represents the native node pool. |
| | Node Pool Name | Field name: `spec.displayName`<br>Field value: `demo-machineset` (custom) | Customizable. Name the node pool based on business needs and other information to facilitate subsequent resource management. |
| | Billing Mode | Field name: `spec.template.spec.providerSpec.value.instanceChargeType`<br>Field value: `PostpaidByHour` (pay-as-you-go) / `PrepaidCharge` (monthly subscription) | Both pay-as-you-go and monthly subscription are supported. Select a value according to your actual needs. |
| | Model Configuration | Model:<br>Field name: `spec.instanceTypes`<br>Field value: `S2.MEDIUM4` (refer to the console for other model specifications)<br>System disk:<br>Field name: `spec.template.spec.providerSpec.value.systemDisk.diskSize`/`diskType`<br>Field value: `diskSize: 50` (customizable; the size must be a multiple of 10, with a minimum of 50 GB)<br>`diskType: CloudPremium`/`CloudSSD` (system disk type: Premium Cloud Disk or SSD) | Select values as needed by referring to the following information in the "Model Configuration" window. Availability zone: the instance types available in the selected availability zone are filtered out. Model: models are filtered by CPU core count, memory size, or instance type. System disk: stores the system running on control and scheduling nodes; set the size of a new system disk to be greater than 100 GB. |
| | Data Disk | Field name: `spec.template.spec.providerSpec.value.dataDisks`<br>Field value: `diskSize`: same sizing rules as the system disk<br>`diskType`: same options as the system disk<br>`fileSystem`: `ext3`/`ext4`/`xfs`<br>`mountTarget`: `/var/lib/containerd` (mount path) | This disk stores business data. It is recommended to format and mount the disk. See the data disk example below the table. |
| | Public Network Bandwidth | Field name: `spec.template.spec.providerSpec.value.internetAccessible`<br>Field value: for details, see Enabling Public Network Access for Native Nodes | To enable public network access, you need to bind an EIP. For details, see Enabling Public Network Access for Native Nodes. |
| | Hostname | Field name: `metadata.annotations`<br>Key: `node.tke.cloud.tencent.com/hostname-pattern`<br>Value: custom | The computer name inside the operating system. By default, the private IP address is used, and the node's hostname is kept consistent with the host name. Naming rules: 1. Batch sequential naming and specified-pattern-string naming are supported. The name must contain 2 to 40 characters; lowercase letters, digits, hyphens (-), and dots (.) are supported, and symbols must not appear at the beginning or end, or consecutively. 2. The hostname format is custom hostname + "." + node pool ID + "." + {R:k}, where k is the number of instances already created in this batch. For example, if the custom hostname is work{R:2}, the hostnames added in batches will be work.&lt;node pool ID&gt;.0, work.&lt;node pool ID&gt;.1, and so on. Note: This field is displayed during node pool creation only when the cluster's node hostname naming mode is set to manual mode. See the annotation example below the table. |
| | SSH Key | Field name: `spec.template.spec.providerSpec.value.keyIDs`<br>Field value: `skey-asxxxx` (SSH key ID) | The node login mode is SSH. If the existing key is not suitable, create a new one. |
| | Security Group | Field name: `spec.template.spec.providerSpec.value.securityGroupIDs`<br>Field value: `sg-a7msxxx` (security group ID) | By default, the security group set during cluster creation is used. If the existing security group is not suitable, create a new one. |
| | Quantity | Field name: `spec.replicas`<br>Field value: `7` (custom) | The expected number of nodes maintained in the node pool. Set it according to your actual needs. For example, if it is set to 5, five nodes will be created and maintained in the node pool. |
| | Container Network | Field name: `spec.subnetIDs`<br>Field value: `subnet-i2ghxxxx` (container subnet ID) | Select an appropriate available subnet according to your actual needs. 1. When you manually adjust the node count, the system tries to create nodes in the order of the subnet list; if nodes can be created successfully in a subnet earlier in the list, nodes will always be created in that subnet. 2. If auto scaling is enabled for the node pool, nodes are created in an appropriate subnet selected based on the scaling policy you have configured. |
| OPS Feature | Fault Self-Healing | Field name: `spec.autoRepair`<br>Field value: `true` (enabled) / `false` (disabled) | Optional; enabling this feature is recommended. It detects various anomalies on native nodes in real time and provides self-healing measures for OS, runtime, kubelet, and other exceptions. |
| | Check and Self-Healing Rules | Field name: `spec.healthCheckPolicyName`<br>Field value: `test-all` (name of the bound fault self-healing CR) | You can select different fault self-healing rules for different node pools, but each node pool can be bound to only one rule. |
| | Auto Scaling | Field name: `spec.scaling` | After auto scaling is enabled for the node pool, the CA component automatically scales this node pool. Note: the auto scaling feature for native nodes is developed in-house by the container platform, while the feature for regular nodes relies on the auto scaling feature of the cloud product. |
| | Node Quantity Range | Field name: `spec.scaling.maxReplicas`/`minReplicas`<br>Field value: `maxReplicas: 7` (custom)<br>`minReplicas: 2` (custom) | The number of nodes in the node pool is limited to the minimum/maximum of this range. If auto scaling is enabled, the number of native nodes is automatically adjusted within the set range. See the scaling example below the table. |
| | Scaling Policy | Field name: `spec.scaling.createPolicy`<br>Field value: `ZonePriority` (preferred availability zone first) / `ZoneEquality` (distribute among multiple availability zones) | 1. Preferred availability zone first: auto scaling performs scale-out/scale-in in your preferred availability zone first; if that is not possible in the preferred zone, it is performed in other availability zones. 2. Distribute among multiple availability zones: node instances are distributed as evenly as possible among the specified availability zones (i.e., multiple subnets) in the scaling group. This policy takes effect only when multiple subnets are configured. |
| Advanced Parameters | Labels | Field name: `spec.template.spec.metadata.labels`<br>Field value: `key1: "value1"` (the label key/value is customizable) | Node attributes that facilitate filtering and managing nodes. The configured labels are automatically added to nodes created in this node pool. |
| | Taints | Field name: `spec.template.spec.metadata.taints`<br>Field value: `effect: NoSchedule`/`PreferNoSchedule`/`NoExecute` (type of the taint) | Node attributes typically used together with Tolerations to ensure that pods that do not meet the conditions cannot be scheduled to the node. The configured taints are automatically added to nodes created in this node pool. See the taint example below the table. |
| | Container Directory | Field name: `spec.template.spec.runtimeRootDir`<br>Field value: `/var/lib/containerd` | Select this option to set the container and image storage directory, for example, /var/lib/. |
| | Management | Field name: `spec.template.spec.providerSpec.value.management.kubeletArgs`/`kernelArgs`/`hosts`/`nameservers` | The Kubelet, Kernel, Hosts, and Nameservers parameters are customizable. For details, see Management Parameter Description. See the management example below the table. |
| | Custom Script | Field name: `spec.template.spec.providerSpec.value.lifecycle.preInit`/`postInit`<br>Field value: `preInit: "echo hello"` (customizable script executed before node initialization)<br>`postInit: "echo world"` (customizable script executed after node initialization) | Specify custom data to configure the node. Two hooks are provided, one executed before node initialization and one after. Ensure that the scripts are reentrant and include retry logic. The scripts and the log files they generate can be viewed in the node path /usr/local/qcloud/tke/PreHook(PostInit). |
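
The sample manifest at the top does not include a data disk. The fragment below sketches one plausible way to add it under `spec.template.spec.providerSpec.value`, using only the fields listed in the Data Disk row; the size, type, filesystem, and mount path shown are illustrative, not required values.

```yaml
# Fragment of spec.template.spec.providerSpec.value (illustrative values)
value:
  systemDisk:
    diskSize: 50
    diskType: CloudPremium
  dataDisks:                           # stores business data; formatting and mounting recommended
  - diskSize: 100                      # multiple of 10; same sizing rules as the system disk
    diskType: CloudPremium             # CloudPremium/CloudSSD
    fileSystem: ext4                   # ext3/ext4/xfs
    mountTarget: /var/lib/containerd   # mount path
```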
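For the hostname pattern, the table's field path points at `metadata.annotations`. The sketch below assumes the annotation is set on the MachineSet's top-level metadata and reuses the work{R:2} pattern from the naming-rule example; verify the placement against your cluster's hostname naming mode.

```yaml
apiVersion: node.tke.cloud.tencent.com/v1beta1
kind: MachineSet
metadata:
  annotations:
    node.tke.cloud.tencent.com/hostname-pattern: "work{R:2}"  # custom pattern string
spec:
  type: Native
  # remaining fields as in the full manifest above
```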
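Enabling auto scaling with a node quantity range combines the three `spec.scaling` fields described above. A minimal sketch using the sample bounds from the table:

```yaml
# Fragment of spec (sample values from the table)
scaling:
  minReplicas: 2               # lower bound of the node quantity range
  maxReplicas: 7               # upper bound of the node quantity range
  createPolicy: ZonePriority   # or ZoneEquality
```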
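Taints under `spec.template.spec.metadata.taints` follow the standard Kubernetes taint shape (key/value/effect). A minimal sketch with a placeholder key and value:

```yaml
# Fragment of spec.template.spec (key/value are placeholders)
metadata:
  labels:
    key1: "val1"
  taints:
  - key: dedicated        # placeholder taint key
    value: native-pool    # placeholder taint value
    effect: NoSchedule    # NoSchedule/PreferNoSchedule/NoExecute
```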
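The `management` block in the sample manifest sets only hosts and nameservers. The sketch below also fills in kubeletArgs and kernelArgs; the two arguments shown are illustrative assumptions, so check Management Parameter Description for the parameters actually supported.

```yaml
# Fragment of spec.template.spec.providerSpec.value (argument values are illustrative)
management:
  kubeletArgs:
  - max-pods=120               # example kubelet argument; verify against the supported list
  kernelArgs:
  - net.core.somaxconn=65535   # example kernel parameter; verify against the supported list
  nameservers:
  - 183.60.83.19
  - 8.8.8.8
  hosts:
  - Hostnames:
    - test
    IP: 22.22.22.22
```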