This document answers frequently asked questions about CLB, and describes the causes of and solutions to common Service/Ingress CLB issues.
Prerequisites:
- You are familiar with K8s concepts, such as Pod, workload, Service, and Ingress.
- You are familiar with the general operations of TKE Serverless clusters in the TKE console.
- You know how to use the kubectl command-line tool to manage resources in K8s clusters.
Note:
You can manage K8s cluster resources in various ways. This document describes how to manage them in the Tencent Cloud console and through the kubectl command-line tool.
Which Ingress can TKE Serverless create a CLB instance for?
TKE Serverless creates a CLB instance for an Ingress that meets the following condition:
| Condition | Description |
| --- | --- |
| The annotations contain the key-value pair kubernetes.io/ingress.class: qcloud | If you do not want TKE Serverless to create a CLB instance for an Ingress (for example, if you want to use Nginx-Ingress), make sure this key-value pair is not contained in the annotations. |
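For reference, a minimal Ingress manifest that meets this condition might look like the following sketch (the resource name, backend Service name, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # placeholder name
  annotations:
    # This key-value pair tells TKE Serverless to create a CLB instance.
    kubernetes.io/ingress.class: qcloud
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # placeholder backend Service
                port:
                  number: 80
```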
How do I view the CLB instance created by a TKE Serverless cluster for Ingress?
If TKE Serverless has successfully created a CLB instance for an Ingress, it writes the VIP of the CLB instance to the status.loadBalancer.ingress field of the Ingress resource, and writes the following key-value pair to the annotations:
kubernetes.io/ingress.qcloud-loadbalance-id: CLB instance ID
To view the CLB instance created by TKE Serverless for Ingress, perform the steps below:
1. Log in to the TKE console and click Cluster in the left sidebar.
2. On the cluster list page, click the ID of the target cluster to go to the cluster management page.
3. On the cluster management page, choose Service and route > Ingress in the left sidebar.
4. You can find the CLB instance ID and its VIP on the Ingress page.
Which Service can TKE Serverless create a CLB instance for?
TKE Serverless creates a CLB instance for a Service that meets the following conditions:
| K8s version | Condition |
| --- | --- |
| All K8s versions supported by TKE Serverless | spec.type is LoadBalancer. |
| The modified version of K8s (the Server GitVersion returned by kubectl version has the "eks." or "tke." suffix) | spec.type is ClusterIP, and the value of spec.clusterIP is not None, indicating a non-headless ClusterIP Service. |
| The non-modified version of K8s (the Server GitVersion returned by kubectl version does not have the "eks." or "tke." suffix) | spec.type is ClusterIP, and spec.clusterIP is specified as an empty string (""). |
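As a reference, a minimal Service manifest that meets the first condition might look like the following sketch (the name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service       # placeholder name
spec:
  type: LoadBalancer          # TKE Serverless creates a CLB instance for this Service
  selector:
    app: example              # placeholder Pod label selector
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```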
Note:
If the CLB instance is successfully created, TKE Serverless writes the following key-value pair to the Service annotations:
service.kubernetes.io/loadbalance-id: CLB instance ID
How do I view the CLB instance created by a TKE Serverless cluster for Service?
If TKE Serverless has successfully created a CLB instance for a Service, it writes the VIP of the CLB instance to the status.loadBalancer.ingress field of the Service resource, and writes the following key-value pair to the annotations:
service.kubernetes.io/loadbalance-id: CLB instance ID
To view the CLB instance created by TKE Serverless for Service, perform the steps below:
1. Log in to the TKE console and click Cluster in the left sidebar.
2. On the cluster list page, click the ID of the target cluster to go to the cluster management page.
3. On the cluster management page, choose Service and route > Service in the left sidebar.
4. You can find the CLB instance ID and its VIP on the Service page.
Why is the ClusterIP of Service invalid (cannot be accessed normally) or why is there no ClusterIP?
For a Service whose spec.type is LoadBalancer, TKE Serverless currently does not allocate a ClusterIP by default, or the allocated ClusterIP is invalid (cannot be accessed normally). If you want to access the Service through a ClusterIP, you can add the following key-value pair to the annotations to instruct TKE Serverless to implement the ClusterIP based on a private network CLB instance:
service.kubernetes.io/qcloud-clusterip-loadbalancer-subnetid: Service CIDR block subnet ID
The Service CIDR block subnet ID, which is specified when you create the cluster, is a string in the subnet-******** format. You can view the subnet ID on the basic information page of the CLB instance.
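A sketch of a Service carrying this annotation is shown below; the subnet ID and all names are placeholders to replace with your own values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service       # placeholder name
  annotations:
    # Replace the value with your cluster's Service CIDR block subnet ID.
    service.kubernetes.io/qcloud-clusterip-loadbalancer-subnetid: subnet-xxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: example              # placeholder Pod label selector
  ports:
    - port: 80
      targetPort: 80
```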
Note:
Only TKE Serverless clusters that use the modified version of K8s (Server GitVersion returned by kubectl version has the "eks." or "tke." suffix) support this feature. For the TKE Serverless clusters created earlier that use the non-modified version of K8s (Server GitVersion returned by kubectl version does not have the "eks." or "tke." suffix), you need to upgrade the K8s version to use this feature.
How do I specify the CLB instance type (public or private network)?
You can specify the CLB instance type via the TKE console or the kubectl command-line tool.
Operations via the TKE console
Operations via the kubectl command line tool
For an Ingress, select Public Network or Private Network for Network type to specify the CLB instance type. For a Service, set Service Access to specify the CLB instance type; Via VPC means a private network CLB instance. The created CLB instance is of the public network type by default.
If you want to create a CLB instance of the private network type, add the corresponding annotation for the Service or Ingress.
| Resource | Annotation |
| --- | --- |
| Service | service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: Subnet ID |
| Ingress | kubernetes.io/ingress.subnetId: Subnet ID |
Note:
The subnet ID is a string in the form of subnet-********, and the subnet must be in the VPC specified for Cluster network when creating the cluster. The VPC information can be found in the Basic information of the cluster in the TKE console.
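For example, a Service requesting a private network CLB instance might be sketched as follows (the subnet ID and names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service       # placeholder name
  annotations:
    # Replace the value with a subnet ID in the cluster's VPC.
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: example              # placeholder Pod label selector
  ports:
    - port: 80
      targetPort: 80
```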
How do I specify the existing CLB instance?
You can specify an existing CLB instance via the TKE console or the kubectl command-line tool.
Operations via the TKE console
Operations via the kubectl command line tool
When creating a Service or Ingress, you can select Use existing to use an existing CLB instance. For a Service, you can also switch to Use existing through Update access method after the Service is created.
When creating a Service/Ingress or modifying a Service, you need to add the corresponding annotation for the Service or Ingress.
| Resource | Annotation |
| --- | --- |
| Service | service.kubernetes.io/tke-existed-lbid: CLB instance ID |
| Ingress | kubernetes.io/ingress.existLbId: CLB instance ID |
Note:
The existing CLB instance cannot be one created by TKE Serverless for a Service or Ingress, and TKE Serverless does not support multiple Services/Ingresses sharing the same existing CLB instance.
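A Service bound to an existing CLB instance might be sketched as follows (the CLB instance ID and names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service       # placeholder name
  annotations:
    # Replace the value with the ID of an existing CLB instance.
    service.kubernetes.io/tke-existed-lbid: lb-xxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: example              # placeholder Pod label selector
  ports:
    - port: 80
      targetPort: 80
```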
How do I view the access log of a CLB instance?
Only layer-7 CLB instances support configuring access logs, but the access log feature of layer-7 CLB instances created by TKE Serverless for Ingress is disabled by default. You can enable the access log feature for a CLB instance on the details page of the CLB instance, as shown below:
1. Log in to the TKE console and click Cluster in the left sidebar.
2. On the cluster list page, click the ID of the target cluster to go to the cluster management page.
3. On the cluster management page, choose Service and route > Ingress in the left sidebar.
4. On the Ingress page, click the ID of the target CLB instance to go to the basic information page of the CLB instance.
5. In the Access log (layer-7) section of the basic information page of the CLB instance, click to enable the access log feature for the CLB instance.
Why does TKE Serverless not create a CLB instance for Ingress or Service?
See Which Ingress can TKE Serverless create a CLB instance for? and Which Service can TKE Serverless create a CLB instance for? to confirm whether the corresponding resources meet the conditions for creating a CLB instance. If the conditions are met but the CLB instance is not successfully created, you can run the kubectl describe command to view the related events of the resources.
Generally, TKE Serverless outputs related Warning events. For example, an event indicating that there are no available IP resources in the subnet means that the CLB instance cannot be successfully created.
How do I use the same CLB in multiple Services?
For TKE Serverless clusters, multiple Services cannot share the same CLB instance by default. If you want a Service to use a CLB instance occupied by other Services, add the annotation service.kubernetes.io/qcloud-share-existed-lb: "true". For more information about this annotation, see Annotation.
Why do I fail to access CLB VIP?
Follow the steps below to troubleshoot:
Viewing the CLB instance type
1. On the Ingress page, click the ID of the CLB instance to go to the basic information page of the CLB instance.
2. You can view the Instance type of the CLB instance on its basic information page.
Confirming whether the environment for accessing CLB VIP is normal
If the Instance type of the CLB instance is private network, its VIP can only be accessed in the VPC to which it belongs. Since the IPs of the Pods in a TKE Serverless cluster are ENI IPs in the VPC, you can access the VIP of the CLB instance of any Service or Ingress in the cluster from the Pods.
Note
Most LoadBalancer systems have loopback problems (for example, see Troubleshoot Azure Load Balancer). Do not access the services provided by a workload through the VIP it exposes (via Service or Ingress) from the Pods of that workload. That is, Pods should not access the services they provide through the VIP (whether private network or public network). Otherwise, access latency may increase, or access may be blocked entirely when there is only one RS/Pod under the rules corresponding to the VIP.
If the Instance type of the CLB instance is public network, its VIP can be accessed in any environment with public network access enabled.
If you want to access the public network VIP in the cluster, please ensure that the public network access has been enabled for the cluster by configuring a NAT gateway or other methods.
Confirming that the RS list of the CLB instance includes (and only includes) the IPs and ports of the expected Pods
On the CLB management page, click the Listener management tab to view the forwarding rules (layer-7 protocols) and the bound backend services (layer-4 protocols). The bound IP addresses are expected to be the IP addresses of the Pods.
Confirming whether the corresponding Endpoints are normal
If you have correctly set the labels for the workload and the selectors for the Service resource, then after the Pods of the workload run successfully, you can see that K8s has added the Pods to the ready IP list of the Endpoints corresponding to the Service by running the kubectl get endpoints command.
Pods that are created but in an abnormal state are added by K8s to the unready IP list of the Endpoints corresponding to the Service.
Note:
You can run the kubectl describe command to view the cause of the abnormal Pods. The command is as follows:
kubectl describe pod nginx-7c7c647ff7-4b8n5 -n demo
Confirming whether Pods can provide services normally
Even Pods in the Running state may fail to provide services normally due to exceptions such as the specified protocol and port not being listened on, incorrect internal logic in the Pod, or a blocked process. You can run the kubectl exec command to log in to the Pod, then run telnet/wget/curl or a custom client tool to access the Pod IP + port directly. If direct access fails inside the Pod, you need to further analyze why the Pod cannot provide services normally.
Confirming whether the security group bound to the Pod allows the protocol and port of the service provided by the Pod
The security group controls the network access policy of Pods, similar to iptables rules on a Linux server. Check the following based on your actual situation:
Creating a workload via the TKE console
Creating a workload via the kubectl command line tool
The interactive process requires you to specify a security group, and TKE Serverless uses this security group to control the Pods' network access policy. The specified security group is stored in the spec.template.metadata.annotations of the workload, and is finally added to the annotations of the Pods. An example is as follows:
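As a sketch, a Deployment with a security group specified via a Pod-template annotation might look like the following. The annotation key eks.tke.cloud.tencent.com/security-group-id and the security group ID are assumptions for illustration; check a console-created workload in your cluster for the actual key and value:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
      annotations:
        # Assumed annotation key for illustration; the console writes the
        # actual security group annotation when you create the workload.
        eks.tke.cloud.tencent.com/security-group-id: sg-xxxxxxxx  # placeholder ID
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```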
If you create a workload via the kubectl command line tool and do not specify a security group for Pods (by adding annotations), TKE Serverless will use the default security group of the default project in the same region under the account. The directions are as follows:
1. Log in to the VPC console and click Security Group in the left sidebar.
2. Select Default project of the same region at the top of the Security group page.
3. You can view the default security group in the list, and click Modify rule to view the details.