Registered Node Overview

Last updated: 2024-05-10 14:41:58

    Overview

    Registered nodes (also called third-party nodes) are an upgraded node type offered by Tencent Kubernetes Engine (TKE) for hybrid cloud deployment. They allow users to add non-Tencent Cloud servers to a TKE cluster as worker nodes: users provide the computing resources, while TKE is responsible for the lifecycle management of the cluster.
    Note:
    Registered nodes now support two product modes: the Direct Connect (DC) version (connected through DC and Cloud Connect Network (CCN)) and the public network version (connected through the Internet). Users can choose the appropriate version for their scenario.

    Use Cases

    Resource Reuse

    An enterprise migrating to the cloud may have already invested in local data centers and own existing server resources (CPU and GPU) in an Internet data center (IDC). With registered nodes, these IDC resources can be added to the TKE cluster, so that the existing servers remain effectively utilized during the migration to the cloud.

    Cluster Hosting and OPS

    Tencent Cloud hosts and maintains the Kubernetes cluster, reducing the cost of deploying and operating Kubernetes locally, so that users only need to maintain their local servers.

    Hybrid Deployment Scheduling

    Registered nodes and Cloud Virtual Machine (CVM) nodes can be scheduled together within a single cluster, making it easy to extend IDC services to CVM without introducing multi-cluster management.

    Seamless Integration with Cloud Services

    Registered nodes seamlessly integrate with cloud-native services of Tencent Cloud, covering cloud-native capabilities such as logs, monitoring, auditing, storage, and container security.

    Registered Nodes with DC

    Architecture

    Users can connect their own IDC environment to Tencent Cloud Virtual Private Cloud (VPC) through DC and CCN, and then connect the IDC nodes to the TKE cluster through the private network, achieving unified management of IDC nodes and cloud-based CVM nodes. The architecture diagram is as follows:
    

    Constraints

    To ensure the stability of registered nodes, the IDC can be connected to Tencent Cloud only through DC or CCN; Virtual Private Network (VPN) connections are currently not supported.
    Registered nodes must use TencentOS Server 3.1 or TencentOS Server 2.4 (TK4); a quick pre-check is sketched after this list.
    Graphics processing unit (GPU): Only NVIDIA series of GPUs are supported, including Volta (such as V100), Turing (such as T4), and Ampere (such as A100 and A10).
    The registered node feature is only available for TKE clusters of v1.18 or later, and the cluster must contain at least one CVM node.
    For scenarios where CVM nodes and IDC nodes are deployed together, the TKE team provides a hybrid cloud container network solution based on Cilium-Overlay.
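
    The operating system requirement can be checked directly on an IDC server before it is added. Below is a minimal sketch that only reads /etc/os-release; the exact NAME and VERSION_ID strings reported by TencentOS Server releases may differ from the assumed values, so adjust SUPPORTED to match your images.

    # Minimal pre-registration check: confirm the IDC node runs a supported
    # TencentOS Server release before adding it as a registered node.
    # Run this directly on the candidate node.
    SUPPORTED = {("TencentOS Server", "3.1"), ("TencentOS Server", "2.4")}  # assumed strings

    def read_os_release(path="/etc/os-release"):
        info = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if "=" in line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    info[key] = value.strip('"')
        return info

    if __name__ == "__main__":
        info = read_os_release()
        name, version = info.get("NAME", ""), info.get("VERSION_ID", "")
        if (name, version) in SUPPORTED:
            print(f"OK: {name} {version} is a supported OS for registered nodes")
        else:
            print(f"WARNING: {name} {version} is not in {SUPPORTED}")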

    Port Connectivity Configuration of Nodes

    To ensure the connectivity between CVM nodes and IDC nodes in a hybrid cloud cluster, a series of ports need to be configured on the CVM nodes and IDC nodes.
    CVM nodes: CVM nodes must use security group settings that meet TKE requirements. If the TKE cluster uses the Cilium-Overlay network mode, the following additional security group rules need to be added (a sketch for adding them with the Tencent Cloud SDK follows the table).
    Inbound rule
    Protocol | Port | Source                 | Policy | Remarks
    UDP      | 8472 | Cluster CIDR, IDC CIDR | Allow  | VXLAN communication is allowed for cluster nodes.
    Outbound rule
    Protocol | Port | Destination            | Policy | Remarks
    UDP      | 8472 | Cluster CIDR, IDC CIDR | Allow  | VXLAN communication is allowed for cluster nodes.
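
    If the cluster uses the Cilium-Overlay network mode, the extra UDP 8472 rules above can also be appended to the CVM security group programmatically. The following is a minimal sketch using the Tencent Cloud Python SDK (tencentcloud-sdk-python) and the VPC CreateSecurityGroupPolicies API; the security group ID, region, and CIDR blocks are placeholders to be replaced with your own values.

    # Sketch: add the VXLAN (UDP 8472) ingress/egress rules to an existing
    # CVM security group. All IDs, the region, and the CIDRs are placeholders.
    import json
    import os

    from tencentcloud.common import credential
    from tencentcloud.vpc.v20170312 import models, vpc_client

    cred = credential.Credential(os.environ["TENCENTCLOUD_SECRET_ID"],
                                 os.environ["TENCENTCLOUD_SECRET_KEY"])
    client = vpc_client.VpcClient(cred, "ap-guangzhou")   # placeholder region

    base = {"Protocol": "UDP", "Port": "8472", "Action": "ACCEPT",
            "PolicyDescription": "VXLAN communication for cluster nodes"}
    cidrs = ["172.16.0.0/16", "10.0.0.0/16"]              # cluster CIDR, IDC CIDR (placeholders)
    policy_set = {
        "Ingress": [dict(base, CidrBlock=c) for c in cidrs],
        "Egress":  [dict(base, CidrBlock=c) for c in cidrs],
    }

    req = models.CreateSecurityGroupPoliciesRequest()
    req.from_json_string(json.dumps({"SecurityGroupId": "sg-xxxxxxxx",   # placeholder
                                     "SecurityGroupPolicySet": policy_set}))
    print(client.CreateSecurityGroupPolicies(req).to_json_string())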
    
    IDC nodes: Configure the following ports in the node firewall rules. A reachability check for the outbound rules is sketched after the table.
    Inbound rule
    Protocol | Port  | Source                 | Policy | Remarks
    UDP      | 8472  | Cluster CIDR, IDC CIDR | Allow  | VXLAN communication is allowed for cluster nodes.
    TCP      | 10250 | Cluster CIDR           | Allow  | API server communication is allowed.
    Outbound rule
    Protocol | Port                        | Destination                | Policy | Remarks
    UDP      | 8472                        | Cluster CIDR, IDC CIDR     | Allow  | VXLAN communication is allowed for cluster nodes.
    TCP      | 80, 443, 9243, 10250, 60002 | VPC subnet CIDR with proxy | Allow  | Tencent Cloud proxy communication is allowed.
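
    The outbound TCP rules above can be spot-checked from an IDC node before it is registered. Below is a minimal sketch assuming a placeholder proxy address in the VPC; UDP 8472 (VXLAN) cannot be verified with a plain TCP connect and is therefore not covered.

    # Run on an IDC node: check that the outbound TCP ports used for Tencent
    # Cloud proxy communication are not blocked by the local firewall.
    import socket

    PROXY_HOST = "10.0.0.10"                 # placeholder: proxy address in the VPC subnet
    TCP_PORTS = [80, 443, 9243, 10250, 60002]

    def tcp_reachable(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in TCP_PORTS:
        state = "reachable" if tcp_reachable(PROXY_HOST, port) else "BLOCKED"
        print(f"{PROXY_HOST}:{port} -> {state}")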

    Network Mode

    For different TKE cluster network modes, the pod network capabilities of registered nodes are limited as follows:
    For clusters using the GlobalRouter or VPC-CNI (exclusive ENI) mode: Pods on IDC nodes can only use the hostNetwork mode. In this mode, pods on IDC nodes can communicate with CVM nodes only through the host network (see the sketch after this list).
    For clusters using the Cilium-Overlay network mode: This is a container network solution designed by the TKE team for hybrid cloud scenarios. CVM pods and IDC pods are on the same overlay network plane and can communicate with each other.
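
    For the GlobalRouter and VPC-CNI cases, a workload that must run on an IDC node therefore has to declare hostNetwork. The following is a minimal sketch using the official Kubernetes Python client; the node label used in node_selector is a hypothetical example and should be replaced with whatever label your registered nodes actually carry.

    # Sketch: run a Pod in hostNetwork mode on an IDC (registered) node.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="idc-hostnetwork-demo"),
        spec=client.V1PodSpec(
            host_network=True,                   # share the IDC node's network namespace
            node_selector={"node-type": "idc"},  # hypothetical label on IDC nodes
            containers=[client.V1Container(
                name="app",
                image="nginx:1.25",
                ports=[client.V1ContainerPort(container_port=80)],
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)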

    Registered Nodes with Public Network (Internet Version)

    Architecture

    If a user cannot establish a DC connection between the IDC and Tencent Cloud but still wants to manage the IDC nodes through TKE to reduce the deployment and OPS cost of Kubernetes, the user can use registered nodes of the public network version to register the IDC nodes to TKE for unified management over the Internet. The architecture diagram is as follows:
    
    Note:
    Unlike the DC version, the public network version communicates with TKE only over the Internet. By default, the CVM nodes and IDC nodes form two completely isolated partitions, and pods on CVM nodes cannot communicate with pods on IDC nodes over the network. It is therefore recommended to manage and schedule IDC nodes as a separate node pool (as sketched below), so that workloads never depend on communication between CVM pods and IDC pods.
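
    One way to keep the partitions separate is to taint the IDC nodes and schedule IDC-only workloads with a matching toleration and node selector. The sketch below uses the official Kubernetes Python client; the node name, label, and taint key are hypothetical examples.

    # Sketch: isolate IDC workloads in a public-network registered node cluster.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # 1. Taint the registered node so CVM-targeted workloads avoid it.
    core.patch_node("idc-node-1", {
        "spec": {"taints": [{"key": "node-type", "value": "idc", "effect": "NoSchedule"}]}
    })

    # 2. Schedule IDC-only workloads with a matching toleration and node selector.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="idc-only-app"),
        spec=client.V1PodSpec(
            node_selector={"node-type": "idc"},  # hypothetical label on IDC nodes
            tolerations=[client.V1Toleration(
                key="node-type", operator="Equal", value="idc", effect="NoSchedule")],
            containers=[client.V1Container(name="app", image="nginx:1.25")],
        ),
    )
    core.create_namespaced_pod(namespace="default", body=pod)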

    Constraints

    Before using registered nodes of the public network version, ensure that the environment meets the following requirements; otherwise, the feature may not work properly.
    Operating system: Registered nodes of the public network version must use TencentOS Server 3.1 or TencentOS Server 2.4 (TK4).
    TKE cluster:
    The Kubernetes version must be 1.20 or later.
    Set "Container network add-on" to Global Router.
    There must be at least one CVM node.
    Network: IDC nodes can communicate with Tencent Cloud Load Balancer (CLB) and can access TCP ports 443 and 9000 of the CLB.
    Hardware (GPU): GPU nodes are currently not supported.

    Port Connectivity Configuration of Nodes

    To ensure the connectivity between Tencent Cloud and the IDC over the Internet, a series of ports need to be configured on the CVM nodes and IDC nodes.
    CVM nodes: CVM nodes must use the security group settings that meet TKE requirements. If the TKE cluster uses the Cilium-Overlay network mode, additional security group rules need to be added.
    IDC nodes: Configure the following ports in the node firewall rules, and also allow access to the public image repositories.
    Image repository: Ensure that ccr.ccs.tencentyun.com and superedge.tencentcloudcr.com can be accessed from IDC nodes (a reachability check is sketched after the table below).
    Inbound rule
    Protocol | Port  | Source                 | Policy | Remarks
    UDP      | 8472  | Cluster CIDR, IDC CIDR | Allow  | VXLAN communication is allowed for cluster nodes.
    TCP      | 10250 | Cluster CIDR           | Allow  | API server communication is allowed.
    Outbound rule
    Protocol | Port      | Destination            | Policy | Remarks
    UDP      | 8472      | Cluster CIDR, IDC CIDR | Allow  | VXLAN communication is allowed for cluster nodes.
    TCP      | 443, 9000 | IP address of the CLB  | Allow  | The CLB address can be accessed to provide the node registration and cloud-edge tunnel services.
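
    Before registering a node over the public network, the CLB endpoint and the image repositories can be spot-checked from the IDC node. Below is a minimal sketch; the CLB address is a placeholder for the address shown for your cluster, and the registries are assumed to be pulled over HTTPS (TCP 443).

    # Run on an IDC node: verify outbound access to the CLB (TCP 443/9000)
    # and to the public image repositories.
    import socket

    CLB_ADDRESS = "203.0.113.10"   # placeholder CLB IP for your cluster
    REGISTRIES = ["ccr.ccs.tencentyun.com", "superedge.tencentcloudcr.com"]

    def check(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"OK      {host}:{port}")
        except OSError as err:
            print(f"BLOCKED {host}:{port} ({err})")

    for port in (443, 9000):
        check(CLB_ADDRESS, port)
    for registry in REGISTRIES:
        check(registry, 443)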

    Network Mode

    For the public network version of registered nodes, the networks of the CVM nodes and the IDC nodes are naturally isolated. CVM nodes therefore use the Global Router (GR) network for pod communication among CVM nodes, while IDC nodes use the Flannel network for pod communication among IDC nodes. By default, CVM pods are isolated from IDC pods.

    Comparison of Capabilities Between Registered Nodes and TKE Nodes

    Class
    Capability
    TKE Node
    Registered Node (DC)
    Registered Node (Public Network)
    Node management
    Node adding
    Node removal
    Setting of node tags and taints
    Node draining and cordoning
    Batch node management in the node pool
    Kubernetes upgrade
    Partial support
    Partial support
    Storage volume
    Local storage (emptyDir, hostPath, etc)
    Kubernetes API (ConfigMap, Secret, etc)
    Cloud Block Storage (CBS)
    -
    -
    Cloud File Storage (CFS)
    -
    Cloud Object Storage (COS)
    -
    Observability
    Prometheus monitoring
    -
    Cloud product monitoring
    -
    -
    CLS
    -
    Cluster audit
    Event storage
    Service
    ClusterIP service
    NodePort service
    LoadBalancer service
    -
    CLB ingress
    -
    Nginx ingress
    -
    Others
    qGPU
    -
    -

    
