
Networking

Overview

This section describes how you can:

  • Specify a default DNS server for the pods scheduled to CCI.
  • Use Services to enable communications between the pods in a CCE cluster and the pods in CCI.
  • Use a Service to expose pods in CCI.

Constraints

  • Networking cannot be enabled for CCE clusters that use a shared VPC.
  • If the bursting add-on is used to schedule workloads to CCI 2.0, dedicated load balancers can be configured for ingresses and Services of the LoadBalancer type. The bursting add-on of a version earlier than 1.5.5 does not support Services of the LoadBalancer type.
  • Networking depends on the startup of the sidecar containers. To use this feature, upgrade the bursting add-on to 1.5.28 or later.
    • PostStart must be configured for service containers (a minimal sketch follows this list).
    • To use networking during container initialization, enable this feature by following the instructions in Enabling Init Container Networking.
  • After networking and init container networking are enabled, deliver the workloads only after the related components are ready. Otherwise, networking will be abnormal.
  • Pods deployed across CCE and CCI can communicate only through ClusterIP Services, and CCE ClusterIP Services cannot be accessed from init containers.
  • When you associate the pods with Services or ingresses of the LoadBalancer type:
    • Do not specify the health check port. After a workload in a CCE cluster is scheduled to CCI, the pods in CCI use backend ports different from those used by the pods in CCE. If you specify a health check port, the health check of some pods will be abnormal.
    • If multiple Services are associated with the same listener of the same load balancer, verify the health check settings to prevent access exceptions.
    • If the pods in a CCE standard cluster use a LoadBalancer Service associated with a shared load balancer, configure the security group of the CCE cluster nodes to allow traffic from 100.125.0.0/16 over the container ports.
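
As noted above, networking relies on the sidecar containers starting before the service containers handle traffic, which is why PostStart is required. The following is a minimal sketch of such a hook; the wait command is a placeholder assumption, not a check prescribed by the add-on.

    containers:
    - name: app                          # service container
      image: nginx:latest                # example image
      lifecycle:
        postStart:
          exec:
            # Placeholder wait that gives the networking sidecar time to start.
            # Replace with a readiness check suited to your environment.
            command: ["sh", "-c", "sleep 5"]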

Specifying a Default DNS Server

Scenario

In some scenarios, you need to specify a default DNS server for the pods scheduled to CCI. The bursting add-on allows you to specify a DNS server address so that you do not need to configure the dnsConfig field for each pod, reducing network O&M costs.
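
For comparison, without the add-on option, each pod spec would carry a standard Kubernetes dnsConfig block similar to the following sketch (the address is a placeholder):

    spec:
      dnsPolicy: "None"            # use only the name servers listed in dnsConfig
      dnsConfig:
        nameservers:
        - 10.247.3.10              # placeholder DNS server address
        searches:
        - svc.cluster.local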

Procedure

  1. Log in to a CCE cluster node and edit the YAML file of the virtual-kubelet deployment.
    kubectl edit deploy cceaddon-virtual-kubelet-virtual-kubelet -n kube-system
  2. Add --cluster-dns=x.x.x.x to the startup parameters and replace x.x.x.x with the DNS server address.
  3. Save the modification and wait for the virtual-kubelet workload to restart.
  4. Verify the DNS server address.
    Run kubectl exec to access a container in CCI and check whether the preferred nameserver in /etc/resolv.conf is the address configured for cluster-dns (see the sketch after Table 1).
    Table 1 Constraints in different application scenarios

    Application Scenario: Workloads were scheduled to CCI before the DNS server address was specified.
    Constraints:
      • The DNS server address is available only for workloads that are scheduled to CCI after the modification.
      • To make the DNS server address available for workloads scheduled to CCI before the modification, rebuild those workloads.

    Application Scenario: The number of name servers is limited.
    Constraints:
      • A maximum of three name servers can be specified in dnsConfig.
      • Ensure that the total number of name servers specified by cluster-dns and spec.dnsConfig does not exceed 3.
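
Putting the procedure together, the edited Deployment and the verification step look roughly as follows. The container name and surrounding fields are abbreviated assumptions; the only line you add is the --cluster-dns flag.

    # kubectl edit deploy cceaddon-virtual-kubelet-virtual-kubelet -n kube-system
    spec:
      template:
        spec:
          containers:
          - name: virtual-kubelet          # assumed container name; keep the existing one
            args:
            - --cluster-dns=10.247.3.10    # added line; replace with your DNS server address
            # keep the other startup parameters unchanged

    # Verification: check the preferred name server in a container scheduled to CCI.
    kubectl exec -it <pod-name> -n <namespace> -- cat /etc/resolv.conf
    # The first nameserver entry should be the cluster-dns address.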

Using a Service to Enable Communications Between Pods in a CCE Cluster and Pods in CCI

  1. Install the bursting add-on and enable Networking.

    After the installation is successful, a load balancer is automatically created in your account. You can view the load balancer on the networking console.

  2. Create a pod in CCI and configure a Service to expose the pod (a minimal YAML sketch follows this list).
    • To facilitate verification, select an nginx image, which listens on port 80.
    • When you create the Service, select the option to automatically create a load balancer so that the Service does not conflict with the load balancer created for the bursting add-on.

  3. Obtain the access mode of the pod on the CCE cluster console.
  4. Create a pod in CCE and configure a Service to expose the pod. For details, see step 2.

    When configuring the Service, do not select the label of the pods scheduled to CCI.

  5. Verify network connectivity.

    Create a pod in CCI and select an image that supports the curl command, for example, centos.

    Access the pod on the CCI console and check whether CCI can access CCE through the Service.

    Figure 1 Service for accessing the pod in CCE
  6. Create a pod in CCE and select an image that supports the curl command (for example, centos). Then, check whether CCE can access CCI through the Service.
    Figure 2 Service for accessing the pod in CCI
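
As mentioned in step 2, the following is a minimal sketch of exposing an nginx pod with a ClusterIP Service, which, per the constraints above, is the Service type that carries traffic between pods in CCE and pods in CCI. All names and labels are illustrative.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: ClusterIP
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80

    # From a pod on the other side (for example, the centos pod in step 5 or 6):
    curl http://nginx.default.svc.cluster.local:80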

Enabling Init Container Networking

By default, init container networking is disabled. To use this feature, perform the following steps after networking is enabled for the add-on:

  1. Log in to the CCE console.
  2. Click the name of the target CCE cluster to go to the cluster console.
  3. In the navigation pane, choose Add-ons.
  4. Select the CCE Cloud Bursting Engine for CCI add-on and click Edit.
    Figure 3 CCE Cloud Bursting Engine for CCI
  5. Enable Networking and then click Edit YAML.
    Figure 4 Editing the add-on
  6. Set set_proxy_as_first_initcontainer to true (see the snippet after this list).
    Figure 5 Modifying the parameter
  7. In the instance list of the bursting add-on, check whether there is an instance named bursting-cceaddon-virtual-kubelet-virtual-kubelet-xxx in the Running state. If so, the deployment is complete and networking is working properly.
    Figure 6 Checking the add-on deployment status
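
For reference, the YAML change in step 6 is a single field. The abbreviated sketch below assumes the key sits at the position shown; locate the existing set_proxy_as_first_initcontainer key in the Edit YAML dialog and set its value.

    # In the add-on's Edit YAML dialog:
    set_proxy_as_first_initcontainer: true    # runs the networking proxy as the first init container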