Allowing Nodes Outside a Cluster in the Same VPC to Access the Pod IP Addresses in the Cluster
Background
CCE nodes can directly access the pod IP address of each container through the Kubernetes network. However, it is less common for VMs outside a cluster, even in the same VPC, to access the pod IP address of a container in that cluster.
Pod IP addresses are commonly used for communication inside a cluster. If external communication is required, Services are usually used instead.
However, in some customer scenarios, for example, when Consul is used, each pod registers its IP address with Consul at startup. As a result, all IP addresses used across the architecture are pod IP addresses, so a communication method for such scenarios needs to be discussed.
Prerequisites
The nodes outside the cluster are in the same VPC as the cluster.
Scenarios
Scenario 1: VPC Network Model
Procedure
- Ensure that the network model of the cluster is VPC network. Log in to the CCE console and choose Resource Management > Clusters. In the cluster list, click the name of the target cluster. On the Cluster Details page, check that the network model of the cluster is VPC network, as shown in Figure 1.
Figure 1 Viewing the network model (VPC network)
- View and record the pod IP address and the IP address of the node where the pod is located. On the CCE console, view the pod IP address on the Pods tab page of the workload details page.
Figure 2 Viewing the pod IP address
- Add the ICMP protocol.
- Choose Service List > Network > Virtual Private Cloud.
- In the navigation pane on the left, choose Access Control > Security Groups. Click the security group name to view its details.
- On the Inbound Rules tab page, click Add Rule and set Protocol & Port to ICMP.
Figure 3 Adding an inbound rule
- Log in to a node outside the cluster in the same VPC and ping the pod IP address to verify that it is reachable.

Scenario 2: Container Tunnel Network (Recommended for Non-Production Environments)
Generally, VMs in the same VPC can reach each other. Therefore, you can add routes: on the VM that needs to access the pod IP addresses, configure a route whose destination is the container CIDR block and whose next hop is a specified cluster node.
Procedure
- Ensure that the network model of the cluster is Tunnel network.
Log in to the CCE console and choose Resource Management > Clusters. On the cluster list, click the name of the target cluster. On the Cluster Details page, check that the network model of the cluster is Tunnel network and the container CIDR block is 172.16.0.0/16, as shown in Figure 4.
- View and record the pod IP address and the IP address of the node where the pod is located.
On the CCE console, view the pod IP address on the Pods tab page on the workload details page.
- Add a route.
Select a node in the cluster as the gateway. For example, use the IP address 192.168.0.121 of the node where the pod is located, as shown in Figure 5.
Add a route in either of the following ways:
- For hosts in the same subnet of the same VPC, run the native route command on the Linux VM that needs to access the pod.
route add -net 172.16.0.0/16 gw 192.168.0.121
After this command is run, packets whose destination IP addresses are in the CIDR block 172.16.0.0/16 are sent to the gateway 192.168.0.121. This method can target specific nodes, but an error is reported if the specified gateway is in a different subnet of the same VPC. Therefore, this method is not applicable to hosts in different subnets in the same VPC.

- For different subnets in the same VPC, the native route command cannot be used on the Linux VM. You need to add routes to a route table on the HUAWEI CLOUD VPC console.
- Choose Service List > Network > Virtual Private Cloud.
- Click the VPC name in the VPC list.
- In the Networking Components area on the right, click the number next to Route Tables. On the displayed Route Tables page, click the route table name and then click Add Route under Routes.
- In the Add Route dialog box, set Destination to 172.16.0.0/16, Next Hop Type to Server, and Next Hop to **-**-***(192.168.0.121). Figure 6 Adding a route
As shown in the preceding figure, this method applies to all nodes in the VPC. Compared with the route command in the first method, it is less fine-grained.
Ensure that the added route does not conflict with an existing route. If a conflict exists, some network access requests may fail.
- Add firewall rules.
HUAWEI CLOUD ECS has its own firewall and security group rules. Therefore, after adding a route, you need to enable corresponding security group rules to allow traffic to pass.
The security group rules to be enabled vary in different scenarios. In this example, HTTP port 80 is required. Therefore, you only need to allow traffic on port 80 in the inbound rule list of CCE nodes.
- Choose Service List > Network > Virtual Private Cloud.
- In the navigation pane on the left, choose Access Control > Security Groups. Click the security group name to view its details.
- On the Inbound Rules tab page, click Add Rule and set Protocol & Port to Custom TCP with port 80.
Figure 7 Adding an inbound rule
For security, you can narrow the source CIDR block down to the VPC CIDR block.
- Disable source address verification for ECS NICs.
By default, the ECS NIC verifies the source address of packets. Because the packets returned by CCE nodes carry pod IP addresses as their source addresses, they are intercepted. Therefore, you need to disable this verification on the CCE nodes.
Choose Service List > Computing > Elastic Cloud Server and click the name of the target cluster node to view the node details. On the NICs tab page, disable Source/Destination Check for the local and peer ECSs.
Figure 8 Disabling Source/Destination Check
- Perform a verification test.
After the preceding steps are performed for the Nginx container on the node whose IP address is 192.168.0.121, you can directly access the pod IP address from a node outside the cluster.
Figure 9 Pod IP address accessed successfully
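The route-based approach above relies on longest-prefix matching: after the route is added, any packet whose destination falls inside the container CIDR block 172.16.0.0/16 is sent to the gateway node 192.168.0.121, while all other traffic keeps its existing route. The following minimal sketch reproduces that matching decision in plain shell arithmetic; the IP addresses are the examples used in this article, and the helper names are illustrative:

```shell
#!/bin/sh
# Sketch: reproduce the longest-prefix match the kernel applies after
# "route add -net 172.16.0.0/16 gw 192.168.0.121".
# Helper names are illustrative; IP addresses are this article's examples.

ip_to_int() {                       # dotted quad -> 32-bit integer
  old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

route_for() {                       # which route handles this destination?
  dest=$(ip_to_int "$1")
  net=$(ip_to_int 172.16.0.0)
  mask=$(( (0xFFFFFFFF << 16) & 0xFFFFFFFF ))   # /16 netmask
  if [ $(( dest & mask )) -eq $(( net & mask )) ]; then
    echo "via 192.168.0.121"        # container CIDR -> gateway node
  else
    echo "default"                  # everything else keeps its old route
  fi
}

route_for 172.16.5.9      # pod IP -> via 192.168.0.121
route_for 192.168.0.50    # ordinary VPC address -> default
```

On modern Linux distributions the same route can also be added with `ip route add 172.16.0.0/16 via 192.168.0.121`, which is the iproute2 equivalent of the route command shown above.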
Conclusion
- When the container tunnel network is used, plan the container CIDR block properly and ensure that it does not conflict with any CIDR block used elsewhere on the internal network.
- Generally, the VPC network model works better when a node outside a cluster needs to access a pod inside the cluster in the same VPC.
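Regarding CIDR block conflicts: two CIDR blocks conflict exactly when their network addresses agree under the shorter of the two prefixes. The following minimal sketch performs that check before a route is added; the CIDR blocks shown and the helper names are illustrative:

```shell
#!/bin/sh
# Sketch: check whether two CIDR blocks overlap before adding a route.
# Two blocks overlap when they agree under the shorter of the two prefixes.
# The CIDR blocks below and the helper names are illustrative.

ip_to_int() {                       # dotted quad -> 32-bit integer
  old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

cidrs_overlap() {                   # usage: cidrs_overlap a.b.c.d/len a.b.c.d/len
  p=${1#*/}; q=${2#*/}
  [ "$q" -lt "$p" ] && p=$q                       # compare under shorter prefix
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  a=$(ip_to_int "${1%/*}"); b=$(ip_to_int "${2%/*}")
  [ $(( a & mask )) -eq $(( b & mask )) ] && echo overlap || echo disjoint
}

cidrs_overlap 172.16.0.0/16 172.16.8.0/24   # conflict: second block is inside the first
cidrs_overlap 172.16.0.0/16 10.0.0.0/8      # no conflict
```

Running this check against each existing route table entry and subnet CIDR block before adding the container route helps avoid the partial network failures mentioned above.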

