Networking
Overview
This section describes how you can:
- Specify a default DNS server for the pods scheduled to CCI.
- Use a Service to enable communications between the pods in a CCE cluster and the pods in CCI.
- Use a Service to expose pods in CCI.
Constraints
- Networking cannot be enabled for CCE clusters that use a shared VPC.
- If the bursting add-on is used to schedule the pods to CCI 2.0, dedicated load balancers can be configured for ingresses and Services of the LoadBalancer type. The bursting add-on of a version earlier than 1.5.5 does not support Services of the LoadBalancer type.
- Networking depends on the startup of the sidecar containers. To use this feature, upgrade the bursting add-on to 1.5.28 or later.
- A postStart hook must be configured for service containers. A sample configuration is provided after this list of constraints.
- To use networking during container initialization, you need to enable this feature by following the instructions in Enabling Container Networking.
- After enabling networking or container networking initialization, wait until the related components are ready before delivering workloads. Otherwise, networking will be abnormal.
- Pods deployed across CCE and CCI can communicate only through ClusterIP Services. CCE ClusterIP Services cannot be accessed from init containers.
- When you interconnect pods deployed across CCE and CCI with a LoadBalancer Service or ingress:
- Do not specify the health check port. In a CCE cluster, CCI containers and CCE containers use different backend ports registered with ELB. If you specify a health check port, some backend health checks will be abnormal.
- Ensure that the health check method will not impact service access if different clusters use a Service to connect to the listener of the same ELB load balancer.
- Allow traffic from the container port for 100.125.0.0/16 in the node security group when you interconnect pods deployed across CCE and CCI with a shared LoadBalancer Service or ingress.
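The following is a minimal sketch of a postStart hook on a service container, assuming an illustrative container named app and a simple wait as the hook command. Replace the names, image, and command with values appropriate to your workload.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-poststart        # illustrative name
spec:
  containers:
  - name: app                     # illustrative service container
    image: nginx:alpine           # illustrative image
    lifecycle:
      postStart:
        exec:
          # Illustrative hook: a short wait before the service starts handling traffic.
          # Replace with a readiness check that suits your service.
          command: ["/bin/sh", "-c", "sleep 5"]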
Specifying a Default DNS Server
Scenario
In some scenarios, you need to specify a default DNS server for the pods scheduled to CCI. The bursting add-on allows you to specify a DNS server address without the need to configure the dnsConfig field for each pod, reducing network O&M costs.
Procedure
- Log in to a CCE cluster node and edit the YAML file.
kubectl edit deploy cceaddon-virtual-kubelet-virtual-kubelet -n kube-system
- Add --cluster-dns=x.x.x.x to the startup parameters and replace x.x.x.x with the DNS server address. A sample snippet is provided after this procedure.
- Save the modification and wait for the virtual-kubelet workload to restart.
- Verify the DNS server address.
Run the exec command to access a container in CCI and check whether the preferred nameserver in /etc/resolv.conf is the address configured for cluster-dns.
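For reference, the startup parameter is appended to the container arguments of the virtual-kubelet Deployment. The snippet below is a sketch of the relevant part of the Deployment; the container name and the DNS address 10.0.0.10 are illustrative, and the existing arguments must be kept.

spec:
  template:
    spec:
      containers:
      - name: virtual-kubelet             # the container name may differ in your add-on version
        args:
        # ... keep the existing startup parameters ...
        - --cluster-dns=10.0.0.10         # placeholder; use your DNS server address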
Table 1 Constraints in different application scenarios

Application Scenario: There are pods already running in CCI before the DNS server address is specified.
Constraints:
- The DNS server address is available only for new pods scheduled to CCI.
- To make the DNS server address available for pods that were running before the modification, redeploy those pods.

Application Scenario: There is a limit on the number of nameservers for cluster-dns.
Constraints:
- You can specify a maximum of three nameservers in dnsConfig.
- Ensure that the total number of nameservers specified by cluster-dns and in spec.dnsConfig does not exceed three. For an example, see the sample dnsConfig after this table.
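For example, if --cluster-dns specifies one DNS server address, spec.dnsConfig can add at most two more nameservers. The pod snippet below is a sketch with illustrative names and addresses.

apiVersion: v1
kind: Pod
metadata:
  name: dns-example                # illustrative name
spec:
  containers:
  - name: app                      # illustrative container
    image: nginx:alpine            # illustrative image
  dnsConfig:
    nameservers:                   # together with the cluster-dns address, at most three in total
    - 10.0.0.11                    # illustrative
    - 10.0.0.12                    # illustrative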
Using a Service to Enable Communications Between Pods in a CCE Cluster and Pods in CCI
- Install the bursting add-on and enable Networking.
After the installation is successful, a load balancer is automatically created in your account. You can view the load balancer on the networking console.
- Create a pod in CCI and configure a Service to expose the pod.
- Obtain the access mode of the pod on the CCE cluster console.
- Create a pod in CCE and configure a Service to expose the pod. For details, see 2. When configuring the Service, do not select the label of the pods scheduled to CCI. A sample ClusterIP Service manifest follows this procedure.
- Verify network connectivity.
Create a pod in CCI and select an image that supports the curl command, for example, CentOS.
Access the pod on the CCI console and check whether CCI can access CCE through the Service.
Figure 1 Service for accessing the pod in CCE
- Create a pod in CCE and select an image that supports the curl command (for example, CentOS). Then, check whether CCE can access CCI through the Service.
Figure 2 Service for accessing the pod in CCI
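The following is a minimal sketch of a ClusterIP Service that exposes pods for access across CCE and CCI. The Service name, selector label, and ports are illustrative; adapt them to your workload.

apiVersion: v1
kind: Service
metadata:
  name: my-app-svc                 # illustrative name
spec:
  type: ClusterIP
  selector:
    app: my-app                    # illustrative label of the pods to expose
  ports:
  - protocol: TCP
    port: 80                       # Service port (illustrative)
    targetPort: 8080               # container port (illustrative)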
Enabling Container Networking
Container networking is disabled by default. To use this feature, you need to enable networking for the add-on on the console and in the YAML file.
- Log in to the CCE console.
- Click the name of the target CCE cluster to go to the cluster console.
- In the navigation pane, choose Add-ons.
- Select the CCE Cloud Bursting Engine for CCI add-on and click Edit.
Figure 3 CCE Cloud Bursting Engine for CCI
- Enable Networking and then click Edit YAML.
Figure 4 Editing the add-on
- Set set_proxy_as_first_initcontainer to true. A sample snippet is provided after this procedure.
Figure 5 Modifying the parameter
- In the instance list of the bursting add-on, check whether the instance name is bursting-cceaddon-virtual-kubelet-virtual-kubelet-xxx and the status is Running. If yes, the deployment is complete, and networking is normal.
Figure 6 Checking the add-on deployment status
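For reference, the parameter appears in the add-on YAML similar to the sketch below. The surrounding structure is omitted because it depends on the add-on version; only the field name and value are taken from this procedure.

set_proxy_as_first_initcontainer: true    # start the networking proxy as the first init container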