Using NodeLocal DNS Cache in a TKE Cluster

Last updated: 2024-08-16 15:25:31

    Use Cases

    In scenarios where users adopt the Kubernetes standard service discovery mechanism, if the queries per second (QPS) of CoreDNS requests is too high, it may lead to increased DNS query latency and uneven load, adversely affecting business performance and stability.
    For this scenario, you can deploy NodeLocal DNS Cache to reduce the pressure of CoreDNS requests, improving the DNS resolution performance and stability within the cluster. This document will detail how to install and use NodeLocal DNS Cache in a Tencent Kubernetes Engine (TKE) cluster.

    Use Limits

    Pods deployed on super nodes are not currently supported.
    Pods using the Cilium Overlay network mode or the independent ENI mode are not currently supported.
    NodeLocal DNS Cache is currently used only as a CoreDNS cache proxy and does not support configuration of other plugins. If needed, configure CoreDNS directly.

    How It Works

    Community Solutions

    The community version of NodeLocal DNS Cache uses a DaemonSet to deploy a hostNetwork pod named node-local-dns on each node in the cluster, which caches DNS requests for pods on that node. On a cache miss, the pod queries the upstream kube-dns service over a TCP connection. The principle diagram is as follows:
    
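    For reference, the following is a minimal, abridged sketch of the community node-local-dns DaemonSet; the full manifest in the kubernetes/dns repository adds volumes, probes, and the Corefile ConfigMap, and the image tag is only an example.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-local-dns
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: node-local-dns
      template:
        metadata:
          labels:
            k8s-app: node-local-dns
        spec:
          hostNetwork: true   # binds the link-local cache address on every node
          dnsPolicy: Default  # the cache itself must not resolve via kube-dns
          containers:
          - name: node-cache
            image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.28  # example tag
            args:
            - -localip
            - 169.254.20.10   # address that pods use to reach the local cache
            - -conf
            - /etc/Corefile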
    The effect can vary with different forwarding modes of kube-proxy.
    In iptables mode, after NodeLocal DNS Cache is deployed, both existing pods and incremental pods can seamlessly switch to access the local DNS cache.
    In IPVS mode, neither existing pods nor incremental pods can seamlessly switch to the DNS cache. To use the NodeLocal DNS Cache service in IPVS mode, use either of the following methods:
    Method 1: Modify the kubelet parameter --cluster-dns to point to 169.254.20.10, and then restart the kubelet service. This method carries a risk of business interruption.
    Method 2: Modify the pod's DNSConfig to point to the new address 169.254.20.10 so that the local DNS cache handles DNS resolution.
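    For Method 2, a minimal pod sketch looks like the following; the pod name, image, and search domains are illustrative and should be adapted to your cluster:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-local-dns          # illustrative name
    spec:
      dnsPolicy: None                   # use only the nameservers listed below
      dnsConfig:
        nameservers:
        - 169.254.20.10                 # local DNS cache on the node
        searches:
        - default.svc.cluster.local
        - svc.cluster.local
        - cluster.local
        options:
        - name: ndots
          value: "5"
      containers:
      - name: app
        image: busybox:1.36
        command: ["sleep", "3600"]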

    NodeLocal DNS Cache Solution in TKE

    The NodeLocal DNS Cache solution in TKE addresses the deficiencies of the community version in IPVS mode: for incremental pods, DNSConfig is configured automatically to enable the local DNS caching capability. Automatic switching for existing pods is still unavailable, however; users must explicitly rebuild those pods or manually configure their DNSConfig.
    How it works:
    

    Installing NodeLocal DNS Cache in the Console

    You can deploy and install NodeLocal DNS Cache through TKE Add-on Management as follows:
    1. Log in to the TKE console, and choose Cluster from the left navigation bar.
    2. In the Cluster list, click the target cluster ID to access the cluster details page.
    3. Select Add-on Management from the left-side menu, and click Create on the Add-on Management page.
    4. On the Create Add-on Management page, check the NodeLocalDNSCache box, as shown in the figure below.
    
    5. Click Done.
    6. Go back to the Add-on Management list page, and check that the localdns component is in the Succeeded state, as shown in the figure below.
    
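    You can also confirm from the command line that the node-local-dns pods are running on every node. The label below is the community default and may differ in your deployment:

    kubectl -n kube-system get pods -l k8s-app=node-local-dns -o wide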

    Using NodeLocal DNS Cache

    In iptables clusters and IPVS clusters, NodeLocal DNS Cache is used in different ways. The specific descriptions are as follows:

    iptables Clusters

    Existing pods: Users do not need to do anything. Existing pods can directly use the local DNS caching capability to resolve DNS requests.
    Incremental pods: Users do not need to do anything. Incremental pods can directly use the local DNS caching capability to resolve DNS requests.

    IPVS Clusters

    For IPVS clusters, TKE will dynamically inject the DNSConfig configuration into newly created pods and set dnsPolicy to None to avoid manual pod YAML configuration. The automatically injected configuration is as follows:
    dnsConfig:
      nameservers:
      - 169.254.20.10
      - 10.23.1.234
      options:
      - name: ndots
        value: "3"
      - name: attempts
        value: "2"
      - name: timeout
        value: "1"
      searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    dnsPolicy: None
    Note:
    To automatically inject DNSConfig into the corresponding pod, ensure the following conditions are met:
    1. Label the namespace where the pod is located with localdns-injector=enabled.
    For example, to automatically inject DNSConfig into the newly created pods in the default namespace, perform the following configuration:
    kubectl label namespace default localdns-injector=enabled
    2. Ensure the pod is not in the kube-system or kube-public namespace. DNSConfig will not be automatically injected into pods in these two namespaces.
    3. Ensure the pod does not carry the label localdns-injector=disabled. Pods with this label will not be injected with DNSConfig.
    4. If the newly created pod does not use hostNetwork, set its dnsPolicy to ClusterFirst; if it uses hostNetwork, set its dnsPolicy to ClusterFirstWithHostNet.
    5. The GR network mode is not currently supported.
    Existing pods: Existing pods cannot seamlessly switch over for now. To enable the local DNS cache proxy capability for existing pods, rebuild them; after rebuilding, the pods will be automatically injected with DNSConfig and use the local DNS cache for DNS resolution.
    Incremental pods: Once the above conditions are met, incremental pods will be automatically injected with DNSConfig, access the local address 169.254.20.10:53, and use the local DNS cache for DNS resolution.
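    A minimal way to exercise the injection is sketched below, assuming the default namespace and an illustrative pod name:

    kubectl label namespace default localdns-injector=enabled
    kubectl run dns-test --image=busybox:1.36 --restart=Never -- sleep 3600
    # the output should show dnsPolicy None and the injected nameservers
    kubectl get pod dns-test -o jsonpath='{.spec.dnsPolicy}{"\n"}{.spec.dnsConfig}{"\n"}'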

    Verifying NodeLocal DNS Cache

    After NodeLocal DNS Cache is successfully enabled, you can verify on the node whether pod access to CoreDNS services has been resolved using the local DNS cache. Below are the methods to verify the effect of enabling NodeLocal DNS Cache in both the iptables cluster and the IPVS cluster.
    Note:
    To verify from the logs that NodeLocal DNS Cache on a node is proxying that node's DNS requests, modify the node-local-dns ConfigMap in the kube-system namespace and add the log plugin to the corresponding Corefile configuration, as shown in the figure below.
    
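    As a reference, the snippet below sketches where the log plugin goes in the cluster.local server block of the node-local-dns Corefile; the other directives are typical of the community template and may differ in your ConfigMap.

    cluster.local:53 {
        log        # print one line per DNS query handled by the cache
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }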

    iptables Cluster Verification

    In an iptables cluster, verify that both existing pods and incremental pods automatically have their DNS requests proxied by the local NodeLocal DNS Cache.

    Existing Pods

    1. Log in to an existing pod.
    2. Run the nslookup command to resolve the kube-dns service address, as shown in the figure below.
    
    3. Check the logs of the node-cache pod on this node, as shown in the figure below.
    
    You can confirm that the DNS resolution requests from existing pods to kube-dns are proxied by the NodeLocal DNS Cache service on this node.
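    The commands involved look roughly like the following; the DaemonSet label used to select the node-cache pods is the community default and may differ in your cluster.

    # inside the pod: resolve the kube-dns service address
    nslookup kube-dns.kube-system.svc.cluster.local
    # on the cluster: check the node-cache logs for the proxied query
    kubectl -n kube-system logs -l k8s-app=node-local-dns --tail=20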

    Incremental Pods

    1. Log in to a newly created pod.
    2. Run the nslookup command to resolve the kube-dns service address, as shown in the figure below.
    
    3. Check the logs of the node-cache pod on this node, as shown in the figure below.
    
    You can confirm that the DNS resolution requests from incremental pods to kube-dns are proxied by the NodeLocal DNS Cache service on this node.

    IPVS Cluster Verification

    In an IPVS cluster, existing pods cannot yet automatically switch to the local DNS cache. The verification therefore checks whether incremental pods automatically have their DNS requests proxied by the local NodeLocal DNS Cache. The steps are as follows:
    1. Add the label localdns-injector=enabled to the required namespace.
    2. In that namespace, create an incremental pod, and confirm that DNSConfig has been injected into the pod, as shown in the figure below.
    
    3. Log in to a newly created pod.
    4. Run the nslookup command to resolve the kube-dns service address, as shown in the figure below.
    
    5. Check the logs of the node-cache pod on this node, as shown in the figure below.
    
    You can confirm that the DNS resolution requests from incremental pods in the IPVS cluster to kube-dns are proxied by the NodeLocal DNS Cache service on this node.
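    You can also check from inside the new pod that the injected configuration took effect; if injection succeeded, /etc/resolv.conf lists the link-local cache address:

    cat /etc/resolv.conf
    # expected to include: nameserver 169.254.20.10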

    Uninstalling NodeLocal DNS Cache

    1. Log in to the TKE console, and choose Cluster from the left navigation bar.
    2. In the Cluster list, click the target cluster ID to access the cluster details page.
    3. Select Add-on Management from the left-side menu. On the Add-on Management page, click Delete on the right side of the row for the component to be deleted, as shown in the figure below.
    

    FAQs

    prefer_udp Related Configurations

    Problem Description

    In TKE clusters, CoreDNS uses Tencent Cloud's default DNS service (183.60.83.19/183.60.82.98) as its upstream DNS. This default DNS service supports private domain resolution within the VPC, but it currently supports only the UDP protocol, not TCP. However, NodeLocal DNS Cache connects to CoreDNS over TCP by default, and if CoreDNS is not configured with prefer_udp, it forwards queries to the upstream Tencent Cloud DNS service over TCP as well, which can lead to occasional domain name resolution failures.

    Solution

    1. New TKE clusters: CoreDNS is configured with prefer_udp by default, so users do not need to take any action.
    2. Existing clusters: If you have already deployed the NodeLocal DNS Cache component, add prefer_udp to the relevant forward block of the CoreDNS Corefile and reload the configuration, as shown in the figure below.
    
    3. Clusters without NodeLocal DNS Cache installed: Before installing the NodeLocal DNS Cache component, check whether prefer_udp is configured in the CoreDNS Corefile; if not, add it manually before continuing with the installation. A configuration sketch follows this list.
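    For reference, prefer_udp is an option of the CoreDNS forward plugin; a typical forward block with it enabled looks like the sketch below (the upstream address /etc/resolv.conf is illustrative and may differ in your Corefile).

    forward . /etc/resolv.conf {
        prefer_udp   # query the upstream over UDP even if the client connected over TCP
    }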

    kube-proxy Version Compatibility Issues

    Problem Description

    In TKE clusters, earlier versions of kube-proxy are affected by the iptables multi-backend issue (the legacy and nft backends being mixed). The triggering conditions are as follows:
    1. The proxy mode of kube-proxy in the cluster is iptables.
    2. The kube-proxy version in the cluster is earlier than the corresponding fixed version listed below:
    TKE Cluster Version    Fix Policy
    1.24                   Upgrade kube-proxy to v1.24.4-tke.5 or later
    1.22                   Upgrade kube-proxy to v1.22.5-tke.11 or later
    1.20                   Upgrade kube-proxy to v1.20.6-tke.31 or later
    1.18                   Upgrade kube-proxy to v1.18.4-tke.35 or later
    1.16                   Upgrade kube-proxy to v1.16.3-tke.34 or later
    1.14                   Upgrade kube-proxy to v1.14.3-tke.28 or later
    1.12                   Upgrade kube-proxy to v1.12.4-tke.32 or later
    1.10                   Upgrade kube-proxy to v1.10.5-tke.20 or later
    If both conditions are met and the NodeLocal DNS Cache component is deployed, the multi-backend issue may be triggered, causing service access failures in the cluster.

    Solution

    1. If your existing cluster meets the above triggering conditions, it is recommended to upgrade kube-proxy to the latest version.
    2. When the NodeLocal DNS Cache component is installed in a TKE cluster, the kube-proxy version is checked; if it does not meet the requirements, installation of the component is blocked. In this case, upgrade kube-proxy to the latest version.
    For the latest kube-proxy version, refer to TKE Kubernetes Revision Version History.
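    To compare your cluster against the table above, you can inspect the running kube-proxy version; whether kube-proxy runs as a DaemonSet or directly on the node depends on the cluster, so both checks below are sketches.

    # if kube-proxy runs as a DaemonSet (name may differ):
    kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
    # if kube-proxy runs as a process on the node:
    kube-proxy --version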
    