Why Can't the ELB Address Be Used to Access Workloads in a Cluster?
Symptom
In a cluster (on a node or in a container), workloads cannot be accessed through the ELB address. Errors such as the following may be returned:

```
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```

or

```
curl: (7) Failed to connect to 192.168.10.36 port 900: Connection refused
```

Possible Cause
It is common that a load balancer cannot be accessed from within its cluster. The reason is as follows: when Kubernetes creates a Service, kube-proxy adds the load balancer's access address to iptables or IPVS as an external IP address of the Service (EXTERNAL-IP, as shown in the command output below). When a client inside the cluster sends a request to the load balancer address, the address is treated as the Service's external IP, and kube-proxy forwards the request directly to the backend pods without passing through the load balancer outside the cluster.
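For example, a LoadBalancer Service with the ELB address registered as its external IP might look like this (illustrative output; the Service name, IP addresses, ports, and age are placeholders):

```
$ kubectl get svc nginx
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.247.5.36   192.168.10.36   80:31540/TCP   5m
```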
- For a workload with multiple pods, ensure that all pods are accessible; otherwise, access to the workload may fail.
- The table lists only the scenarios in which access may fail. Scenarios not listed in the table work normally.
| Service Type Released on the Server | Access Type | Request Initiation Location on the Client | Tunnel Network Cluster (IPVS) | VPC Network Cluster (IPVS) | Tunnel Network Cluster (iptables) | VPC Network Cluster (iptables) |
|---|---|---|---|---|---|---|
| NodePort Service | Public/Private network | Same node as the service pod | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. |
| | | Different nodes from the service pod | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | Successful. | Successful. |
| | | Other containers on the same node as the service pod | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | Failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | Failed. |
| | | Other containers on different nodes from the service pod | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. | IP + NodePort of the node running the pod: successful. IP + NodePort of other nodes: failed. |
| LoadBalancer Service using a shared load balancer | Private network | Same node as the service pod | Failed. | Failed. | Failed. | Failed. |
| | | Other containers on the same node as the service pod | Failed. | Failed. | Failed. | Failed. |
| DNAT gateway Service | Public network | Same node as the service pod | Failed. | Failed. | Failed. | Failed. |
| | | Different nodes from the service pod | Failed. | Failed. | Failed. | Failed. |
| | | Other containers on the same node as the service pod | Failed. | Failed. | Failed. | Failed. |
| | | Other containers on different nodes from the service pod | Failed. | Failed. | Failed. | Failed. |
| LoadBalancer Service using a dedicated load balancer (Local) for interconnection with NGINX Ingress Controller | Private network | Same node as the cceaddon-nginx-ingress-controller pod | Failed. | Failed. | Failed. | Failed. |
| | | Other containers on the same node as the cceaddon-nginx-ingress-controller pod | Failed. | Failed. | Failed. | Failed. |
Solution
The following methods can be used to solve this problem:
- (Recommended) In the cluster, access the workload using the ClusterIP Service or the Service domain name (see the example after the YAML below).
- Set externalTrafficPolicy of the Service to Cluster, which enables cluster-level service affinity, as in the following example. Note that with this setting, the client source IP address is not preserved.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/elb.class: union
    kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"traffic","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}'
  labels:
    app: nginx
  name: nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```
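For the recommended option, a minimal sketch of in-cluster access through the Service's cluster-internal domain name (the Service name `nginx` and namespace `default` are assumptions for illustration):

```
# From a pod or node in the cluster, access the Service directly,
# bypassing the external load balancer entirely.
curl http://nginx.default.svc.cluster.local:80
```

For the second option, an existing Service can also be switched to cluster-level affinity without recreating it, for example (again assuming the Service is named `nginx`):

```
kubectl patch svc nginx -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```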