Service

Direct Access to a Pod

How can I access a workload after it is created? Accessing a workload means accessing its pods. However, the following problems may occur if you access a pod directly:

  • A pod can be deleted and recreated at any time by a controller such as a Deployment, so the result of accessing it directly is unpredictable.
  • A pod's IP address is allocated only after the pod starts. Before that, the IP address is unknown.
  • An application usually consists of multiple pods that run the same image, and accessing them one by one is inefficient.

For example, an application uses Deployments to create the frontend and backend. The frontend calls the backend for computing, as shown in Figure 1. Three pods are running in the backend, which are independent and replaceable. When a backend pod is recreated, the new pod is assigned a new IP address, and the frontend pod is unaware of the change.

Figure 1 Inter-workload access

How Services Work

Kubernetes Services solve the preceding pod access problems. A Service has a fixed IP address and forwards traffic to pods based on their labels. In addition, the Service load balances traffic across these pods.

In the preceding example, two Services are added for accessing the frontend and backend pods. In this way, the frontend pod is unaware of changes to the backend pods, as shown in Figure 2.

Figure 2 Accessing pods through a Service
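For reference, a backend workload that such a Service could select might look like the following Deployment sketch. The workload name, image, and replica count here are illustrative assumptions; only the app: nginx label matters for Service selection.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                 # Workload name (assumed for illustration)
spec:
  replicas: 3                 # Three replaceable backend pods, as in Figure 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx            # Label that a Service selector can match
    spec:
      containers:
      - name: container-0
        image: nginx:alpine   # Illustrative image
        ports:
        - containerPort: 80   # Port that a Service's targetPort points to
```

Any pod carrying the app: nginx label is selected by a Service whose selector specifies that label, regardless of the pod's IP address.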

Creating a Service

The following example creates a Service named nginx and uses a selector to select pods with the label app: nginx. The target pods listen on port 80, while the Service exposes port 8080.

The Service can be accessed in the format Service name:Exposed port, which is nginx:8080 in this example. Other workloads can then access the pods associated with the nginx Service using nginx:8080.

apiVersion: v1
kind: Service
metadata:
  name: nginx        #Service name
spec:
  selector:          #Label selector, which selects pods with the label of app=nginx
    app: nginx
  ports:
  - name: service0
    targetPort: 80   #Pod port
    port: 8080       #Service external port
    protocol: TCP    #Forwarding protocol type. The value can be TCP or UDP.
  type: ClusterIP    #Service type

NodePort Services are supported in native Kubernetes but are not supported in CCI.

Save the Service definition to nginx-svc.yaml and use kubectl to create the Service.

# kubectl create -f nginx-svc.yaml -n $namespace_name
service/nginx created

# kubectl get svc -n $namespace_name
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.247.9.190     <none>        53/UDP,53/TCP   7m
nginx      ClusterIP   10.247.148.137   <none>        8080/TCP        1h

You can see that the Service has a ClusterIP, which remains fixed until the Service is deleted. You can use this ClusterIP to access the Service from within the cluster.

kube-dns is a Service reserved for domain name resolution. It is automatically created in CCI. For details about domain name resolution, see Using ServiceName to Access a Service.

Using ServiceName to Access a Service

In CCI, you can use the coredns add-on to resolve the domain name for a Service and use ServiceName:Port to access the Service. This is the most common mode in Kubernetes. For details about how to install coredns, see Add-on Management.

After it is installed, coredns acts as a DNS server. When a Service is created, coredns records the Service name and IP address, so a pod can obtain the Service's IP address by querying coredns for the Service name.

nginx.<namespace>.svc.cluster.local can be used to access the Service, where nginx is the Service name, <namespace> is the namespace, and svc.cluster.local is the domain name suffix. Within the same namespace, you can omit <namespace>.svc.cluster.local and use the Service name alone.

For example, if a Service named nginx is created, you can access it through nginx:8080, and the traffic is forwarded to the backend pods.

An advantage of using ServiceName is that you can write the Service name into the application during development. In this way, you do not need to know the IP address of a specific Service.
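As a sketch of this pattern, a client pod can reference the Service by name and let coredns resolve it. The pod name, busybox image, and wget options below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client              # Illustrative client pod name
spec:
  containers:
  - name: client
    image: busybox:1.36     # Illustrative image with wget built in
    # Access the nginx Service by name; coredns resolves "nginx"
    # to the Service's ClusterIP within the same namespace.
    command: ["wget", "-qO-", "http://nginx:8080"]
  restartPolicy: Never
```

Because the name is resolved at request time, the client keeps working even if the Service's backend pods are recreated with new IP addresses.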

The coredns add-on occupies computing resources. It runs two pods, with each pod occupying 0.5 CPU cores and 1 GB of memory. You need to pay for these resources.

LoadBalancer Services

You have seen that a ClusterIP Service provides a fixed IP address for accessing its backend pods.

CCI also supports LoadBalancer Services. You can bind an enhanced load balancer to a Service so that traffic sent to the load balancer is forwarded to the Service.

Enhanced load balancers are classified as private network or public network load balancers, the difference being that a public network load balancer has a public IP address bound to it. Select the type as required. You can create an enhanced load balancer using the API or on the ELB console.

The enhanced load balancer must be in the same VPC as the Service. Otherwise, the enhanced load balancer cannot be bound.

Figure 3 LoadBalancer Service

The following is an example of creating a LoadBalancer Service. After the enhanced load balancer is bound, you can access the backend pods through the load balancer's IP:port.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kubernetes.io/elb.id: 77e6246c-a091-xxxx-xxxx-789baa571280  #ID of the enhanced load balancer
spec:
  selector:
    app: nginx
  ports:
  - name: service0
    targetPort: 80
    port: 8080         #Enhanced load balancer's access port
    protocol: TCP
  type: LoadBalancer   #Service type
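After this Service is created, the bound load balancer's address appears in the EXTERNAL-IP column of kubectl get svc. A verification sketch might look like the following, where <elb-ip> is a placeholder for the address shown in that column:

```shell
# kubectl get svc nginx -n $namespace_name
# curl http://<elb-ip>:8080
```

The curl request goes to the load balancer, which forwards it to the Service and on to the backend pods.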