Nginx Ingress offers a rich feature set and high performance and supports multiple deployment modes. This document introduces three deployment solutions for Nginx Ingress on Tencent Kubernetes Engine (TKE): Deployment + LB, DaemonSet + HostNetwork + LB, and Deployment + LB directly connected to Pod, and describes how to deploy each of them.
Nginx Ingress is an implementation of Kubernetes Ingress. By watching the Ingress resources of Kubernetes clusters, it converts Ingress rules into an Nginx configuration to enable Nginx to perform Layer-7 traffic forwarding, as shown in the figure below:
There are two implementations of Nginx Ingress. This document mainly covers the implementation maintained by the open-source Kubernetes community:
Based on a comparison of the three deployment solutions for Nginx Ingress on TKE, this document offers the following selection suggestions:
The simplest way to deploy Nginx Ingress on TKE is to deploy Nginx Ingress Controller in Deployment mode and create a LoadBalancer-type Service for it (automatically creating a CLB or binding an existing CLB) to enable the CLB to receive external traffic and forward it into Nginx Ingress, as shown in the figure below:
Currently, a LoadBalancer-type Service on TKE is implemented based on NodePort by default: the CLB binds the NodePort of each node as its real servers (RS) and forwards traffic to the nodes' NodePorts, and each node then routes the request through iptables or IPVS to the Service's backend pods (that is, the Nginx Ingress Controller pods). If nodes are later added or removed, the CLB automatically updates its NodePort bindings.
Run the following commands to install Nginx Ingress:
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment.yaml -n nginx-ingress
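The manifest applied above deploys the controller as a Deployment and exposes it through a LoadBalancer-type Service. The following is a simplified sketch of what that Service looks like (field values are illustrative; the manifest in the repository above is authoritative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  labels:
    app: nginx-ingress
    component: controller
spec:
  type: LoadBalancer          # TKE automatically creates a CLB and binds the nodes' NodePorts as RS
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443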
In solution 1, traffic passes through a NodePort layer, introducing one more layer for forwarding, which leads to the following issues:
To address these issues, solution 2 is proposed:
Nginx Ingress uses hostNetwork, and the CLB is bound directly to node IP addresses and ports (80 and 443), bypassing NodePort. Because hostNetwork is used, Nginx Ingress pods cannot be scheduled onto the same node without port listening conflicts. To avoid this, you can select some nodes in advance as edge nodes dedicated to running Nginx Ingress, label them, and deploy Nginx Ingress as a DaemonSet on these nodes. The following figure shows the architecture:
To install Nginx Ingress, perform the following steps:
kubectl label node 10.0.0.3 nginx-ingress=true
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-daemonset-hostnetwork.yaml -n nginx-ingress
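The DaemonSet applied above relies on two key settings: hostNetwork, so that the controller listens on ports 80 and 443 of the node directly, and a nodeSelector matching the nginx-ingress=true label added in the first step. A simplified sketch is shown below (the image version and other fields are illustrative; the full manifest also contains the controller arguments and RBAC objects):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  labels:
    app: nginx-ingress
    component: controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
    spec:
      hostNetwork: true           # listen on the node's 80/443 directly, no NodePort in between
      nodeSelector:
        nginx-ingress: "true"     # schedule only onto the labeled edge nodes
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v0.41.2   # illustrative version
        ports:
        - containerPort: 80
        - containerPort: 443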
In solution 3, the CLB is directly connected to the pods of Nginx Ingress, bypassing NodePort. Run the following commands to install Nginx Ingress:
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment-eni.yaml -n nginx-ingress
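Compared with solution 1, the manifest for this solution mainly differs in the controller Service: it carries an annotation that tells TKE to bind the CLB directly to the pod IP addresses (over elastic network interfaces) instead of to NodePorts. A simplified sketch is shown below; the annotation key is an assumption based on TKE's direct-access feature, so check the manifest you actually apply for the exact key:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    service.cloud.tencent.com/direct-access: "true"   # assumed annotation: CLB forwards traffic to pod IPs directly
  labels:
    app: nginx-ingress
    component: controller
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443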
In solution 2: DaemonSet + HostNetwork + LB, the CLB is managed manually, and you can choose a public network or private network CLB when creating it. In solution 1: Deployment + LB and solution 3: Deployment + LB directly connected to Pod, a public network CLB is created by default.
To use a private network CLB, redeploy the YAML and add the annotation service.kubernetes.io/qcloud-loadbalancer-internal-subnetid to the nginx-ingress-controller Service, with its value set to the ID of the subnet in which the private network CLB is to be created. Refer to the following code:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxx # value should be replaced with a subnet ID in the VPC where the cluster belongs.
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller
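After redeploying, you can confirm that the Service has been assigned a private VIP from the specified subnet, for example:
kubectl get svc nginx-ingress-controller -n nginx-ingress
# The EXTERNAL-IP column shows the CLB VIP; for a private network CLB this is an address inside the VPC.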
In solution 1: Deployment + LB and solution 3: Deployment + LB directly connected to Pod, a new CLB is automatically created by default, and the traffic entry address of the Ingress is the IP address of this newly created CLB. If your business depends on a fixed entry address, you can bind Nginx Ingress to an existing CLB instead.
To do so, redeploy the YAML and add the annotation service.kubernetes.io/tke-existed-lbid to the nginx-ingress-controller Service, with its value set to the ID of the existing CLB. Refer to the following code:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx # value should be replaced with the CLB ID.
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller
There are two types of Tencent Cloud accounts: bill-by-IP accounts and bill-by-CVM accounts:
Note:
You can refer to Distinguishing Between Tencent Cloud Account Types to identify your account type.
If you deploy Nginx Ingress on TKE and need to use it to manage Ingresses, but cannot create an Ingress on the TKE console, you can create the Ingress with YAML instead. In this case, you need to specify the Ingress class annotation for each Ingress. Refer to the following code:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx # this is the key part
spec:
  rules:
  - host: "*"
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v1
          servicePort: 80
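After the Ingress is created, you can verify forwarding by sending a request to the CLB address with the host configured in the rule, for example (the IP address and host below are placeholders):
# Replace 1.2.3.4 with the CLB IP address and foo.example.com with the host in your Ingress rule.
curl -H "Host: foo.example.com" http://1.2.3.4/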
For Nginx Ingress installed by the methods described in this document, the metrics port is already exposed and can be scraped by Prometheus. If prometheus-operator is installed in the cluster, you can use a ServiceMonitor to collect the monitoring data of Nginx Ingress. Refer to the following code:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
  labels:
    app: nginx-ingress
    component: controller
spec:
  endpoints:
  - port: metrics
    interval: 10s
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
For native Prometheus configuration, refer to the following code:
- job_name: nginx-ingress
  scrape_interval: 5s
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    - __meta_kubernetes_service_label_component
    regex: nginx-ingress;controller
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: metrics
After the monitoring data is collected, you can configure the dashboards provided by the Nginx Ingress community in Grafana to display the data. In practice, you can copy the dashboard JSON directly and import it into Grafana. nginx.json displays the regular monitoring dashboards of Nginx Ingress, as shown in the figure below. request-handling-performance.json displays the performance monitoring dashboard of Nginx Ingress, as shown in the figure below.
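For example, the two dashboards can be downloaded from the ingress-nginx repository (assuming they are still published under this path) and imported through the Grafana UI (Dashboards > Import):
curl -LO https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
curl -LO https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/request-handling-performance.json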