Tencent Cloud

High Availability Configuration Optimization

Last updated: 2024-08-12 17:48:23

    Overview

    This document describes how to configure Nginx Ingress for high-availability deployment.

    Increasing the Number of Replicas

    Configure automatic scaling:

    controller:
      autoscaling:
        enabled: true
        minReplicas: 10
        maxReplicas: 100
        targetCPUUtilizationPercentage: 50
        targetMemoryUtilizationPercentage: 50
        behavior: # Scale out quickly to absorb traffic peaks; scale in slowly to keep a buffer against abnormal traffic
          scaleUp:
            stabilizationWindowSeconds: 300
            policies:
            - type: Percent
              value: 900
              periodSeconds: 15 # Allow adding up to 9 times the current number of replicas every 15 seconds
          scaleDown:
            stabilizationWindowSeconds: 300
            policies:
            - type: Pods
              value: 1
              periodSeconds: 600 # Allow removing at most one pod every 10 minutes

    If you want a fixed number of replicas instead, configure replicaCount directly:

    controller:
      replicaCount: 50
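
    With a fixed replica count, you can also have the chart create a PodDisruptionBudget so that voluntary disruptions (for example, node drains during maintenance) never take down too many controller pods at once. This is a sketch based on the ingress-nginx chart's minAvailable value; the value 2 here is an assumption to adjust to your replica count:

    controller:
      minAvailable: 2 # PodDisruptionBudget: keep at least 2 controller pods running during voluntary disruptions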

    Spreading Scheduling

    Use topology spread constraints to spread pods across zones and nodes for disaster recovery and to avoid single points of failure:

    controller:
      topologySpreadConstraints: # Spread pods as evenly as possible
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
            app.kubernetes.io/instance: '{{ .Release.Name }}'
            app.kubernetes.io/component: controller
        topologyKey: topology.kubernetes.io/zone
        maxSkew: 1
        whenUnsatisfiable: ScheduleAnyway
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
            app.kubernetes.io/instance: '{{ .Release.Name }}'
            app.kubernetes.io/component: controller
        topologyKey: kubernetes.io/hostname
        maxSkew: 1
        whenUnsatisfiable: ScheduleAnyway
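
    ScheduleAnyway is a soft constraint: the scheduler prefers to spread pods but will still place them together if it has no other option. If you would rather leave pods Pending than let them pile up in one zone, the zone constraint can be tightened into a hard one. This variant is a sketch, not part of the configuration above:

    controller:
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
            app.kubernetes.io/instance: '{{ .Release.Name }}'
            app.kubernetes.io/component: controller
        topologyKey: topology.kubernetes.io/zone
        maxSkew: 1
        whenUnsatisfiable: DoNotSchedule # hard constraint: pods stay Pending instead of skewing across zones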

    Scheduling to Dedicated Nodes

    Typically, the load on the Nginx Ingress Controller is proportional to traffic. Given its importance as a gateway, we recommend scheduling it to dedicated nodes or super nodes so that it and business pods do not interfere with each other.
    Schedule it to a specified node pool:

    controller:
      nodeSelector:
        tke.cloud.tencent.com/nodepool-id: np-********
    Note:
    Super nodes perform better because each pod exclusively occupies a virtual machine, with no mutual interference. If you are using a serverless cluster, you do not need to configure a scheduling policy here, as pods can only be scheduled to super nodes.
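
    A nodeSelector steers the controller onto the dedicated node pool, but by itself it does not keep business pods off those nodes. To make the pool truly dedicated, you can additionally taint its nodes and add a matching toleration via the chart's controller.tolerations value. The taint key and value below (dedicated=ingress-nginx) are assumptions for illustration; use whatever taint your node pool actually applies:

    controller:
      nodeSelector:
        tke.cloud.tencent.com/nodepool-id: np-********
      tolerations: # requires a matching NoSchedule taint on the node pool's nodes
      - key: dedicated
        operator: Equal
        value: ingress-nginx
        effect: NoSchedule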

    Setting Reasonable requests and limits

    If Nginx Ingress is not scheduled to super nodes, set requests and limits reasonably to guarantee it sufficient resources while preventing it from consuming so much that node load becomes high:

    controller:
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: 1000m
          memory: 1Gi
    If you are using super nodes or a serverless cluster, you only need to define requests, which declare the virtual machine specification for each pod:

    controller:
      resources:
        requests:
          cpu: 1000m
          memory: 2Gi
    