Why Can't the ELB Address Be Used to Access Workloads in a Cluster?
Symptom
In a cluster (on a node or in a container), the ELB address cannot be used to access workloads.
Scenario 1: Service Affinity Is Node Level
Possible Cause
If the service affinity is at the node level (externalTrafficPolicy is set to Local), the ELB address cannot be accessed from within the cluster. The access fails with errors such as:

```
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```

or

```
curl: (7) Failed to connect to 192.168.10.36 port 900: Connection refused
```

- For a multi-pod workload, ensure that all pods are accessible. Otherwise, access to the workload may fail.
- In a CCE Turbo cluster that utilizes Cloud Native Network 2.0, node-level affinity is supported only when the Service backend is connected to a hostNetwork pod.
- The table lists only the scenarios where access may fail. Any scenario not listed in the table works normally.
| Service Type Released on the Server | Access Type | Request Initiation Location on the Client | Tunnel Network Cluster (IPVS) | VPC Network Cluster (IPVS) | Tunnel Network Cluster (iptables) | VPC Network Cluster (iptables) |
|---|---|---|---|---|---|---|
| NodePort Service | Public/private network | Same node as the service pod | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. |
| NodePort Service | Public/private network | Different nodes from the service pod | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Client node's private IP + NodePort: succeeds. Client node's public IP + NodePort: fails. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Client node's private IP + NodePort: succeeds. Client node's public IP + NodePort: fails. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Client node's private IP + NodePort: succeeds. Client node's public IP + NodePort: fails. Any other node's IP + NodePort: fails. |
| NodePort Service | Public/private network | Other containers on the same node as the service pod | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's public IP + NodePort: succeeds. Server node's private IP + NodePort: fails. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's public IP + NodePort: succeeds. Server node's private IP + NodePort: fails. Any other node's IP + NodePort: fails. |
| NodePort Service | Public/private network | Other containers on different nodes from the service pod | Server node's IP + NodePort: succeeds. Client node's private IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Client node's private IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. | Server node's IP + NodePort: succeeds. Any other node's IP + NodePort: fails. |
| LoadBalancer Service using a shared load balancer | Private network | Same node as the service pod | Fails. | Fails. | Fails. | Fails. |
| LoadBalancer Service using a shared load balancer | Private network | Other containers on the same node as the service pod | Fails. | Fails. | Fails. | Fails. |
Solution
The following methods can be used to solve this problem:
- (Recommended) In the cluster, use the ClusterIP Service or service domain name for access.
- Set externalTrafficPolicy of the Service to Cluster, which means cluster-level service affinity. Note that the cluster then performs NAT on the traffic, so the source IP address is not preserved and backend applications cannot obtain the client's real IP address. For example:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/elb.class: union
    kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"traffic","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}'
  labels:
    app: nginx
  name: nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```

- Use the pass-through feature of the Service so that kube-proxy is bypassed when the ELB address is used for access. Traffic first reaches the ELB load balancer and is then forwarded directly to the workload. For details, see Configuring Passthrough Networking for a LoadBalancer Service.

- In a CCE standard cluster, after passthrough networking is configured using a dedicated load balancer, the private IP address of the load balancer cannot be accessed from the node where the workload pod resides or other pods on the same node as the workload.
- Passthrough networking is not supported for clusters of v1.15 or earlier.
- In IPVS network mode, the passthrough settings of Services connected to the same load balancer must be the same.
- If node-level (local) service affinity is used, kubernetes.io/elb.pass-through is automatically set to onlyLocal to enable pass-through.
An example of a LoadBalancer Service with passthrough networking enabled:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/elb.pass-through: "true"
    kubernetes.io/elb.class: union
    kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"traffic","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}'
  labels:
    app: nginx
  name: nginx
spec:
  externalTrafficPolicy: Local
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```
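The recommended method, in-cluster access through a ClusterIP Service, can be sketched as follows. The Service and label names are illustrative; in-cluster clients then reach the workload through the Service name (for example, nginx.default.svc.cluster.local) instead of the ELB address.

```yaml
# Minimal ClusterIP Service sketch for in-cluster access.
# The name and selector below are placeholders for your workload.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP          # reachable only from inside the cluster
  selector:
    app: nginx
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
```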
Scenario 2: A Load Balancer in a Cluster That Uses IPVS Is Reused
Possible Cause
In a cluster that uses IPVS for service forwarding, a load balancer address may become inaccessible under certain conditions.
- If a LoadBalancer ingress and a Service share the same load balancer, when the cluster tries to access the ingress, the ipvs-0 bridge will intercept the traffic and redirect it to the Service instead, causing access failures.
- If a Service in the local cluster and a Service in another cluster share the same load balancer, when the local cluster tries to access the Service in another cluster, the ipvs-0 bridge will intercept the traffic and redirect it to the Service in the local cluster instead, causing access failures.
Solution
Avoid these scenarios whenever possible. If a load balancer must be shared, enable passthrough networking for all Services associated with that load balancer so that traffic bypasses kube-proxy's IPVS forwarding. For details, see Configuring Passthrough Networking for a LoadBalancer Service.
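If the load balancer must be shared, enabling pass-through on every Service bound to it might look like the sketch below. The Service names, ports, and load balancer ID are placeholders; the sketch assumes CCE's kubernetes.io/elb.id annotation for reusing an existing load balancer.

```yaml
# Two Services reusing one load balancer (kubernetes.io/elb.id), both with
# pass-through enabled so kube-proxy's IPVS forwarding is bypassed.
# In IPVS mode, the pass-through setting must match on every Service
# attached to the same load balancer.
apiVersion: v1
kind: Service
metadata:
  name: nginx-a                                  # placeholder name
  annotations:
    kubernetes.io/elb.id: <existing-elb-id>      # placeholder: the shared ELB's ID
    kubernetes.io/elb.class: union
    kubernetes.io/elb.pass-through: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-a
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-b                                  # placeholder name
  annotations:
    kubernetes.io/elb.id: <existing-elb-id>      # same ELB, same pass-through setting
    kubernetes.io/elb.class: union
    kubernetes.io/elb.pass-through: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-b
  ports:
  - name: service1
    port: 8080
    protocol: TCP
    targetPort: 8080
```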
Scenario 3: A LoadBalancer Service in a Cluster That Uses IPVS Listens on Ports
Possible Cause
A Service can use a load balancer to listen on a range of ports (for example, 30000–30005). However, inside the cluster, only the port defined in spec.ports (30000) is accessible. Requests to the other ports (30001–30005) fail because they are intercepted by the ipvs-0 bridge and redirected incorrectly.
Solution
If access to ports of a Service from within the cluster is needed, enable passthrough networking for the LoadBalancer Service that listens on ports so that traffic bypasses kube-proxy's IPVS forwarding. For details, see Configuring Passthrough Networking for a LoadBalancer Service.
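As a sketch, the Service below enables pass-through while defining only the port that appears in spec.ports (30000 in the example above); the additional listener ports (30001-30005) are configured on the load balancer itself rather than in the manifest. The name and load balancer ID are placeholders.

```yaml
# LoadBalancer Service with pass-through enabled. Only port 30000 is
# declared here; extra ELB listeners (30001-30005) live on the ELB side.
apiVersion: v1
kind: Service
metadata:
  name: nginx                                    # placeholder name
  annotations:
    kubernetes.io/elb.id: <existing-elb-id>      # placeholder: the ELB that listens on the port range
    kubernetes.io/elb.class: union
    kubernetes.io/elb.pass-through: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: service0
    port: 30000
    protocol: TCP
    targetPort: 80
```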