
Services

Direct Access to a Pod

After a pod is created, the following problems may occur if you directly access the pod:

  • The pod can be deleted and recreated at any time by a controller such as a Deployment, and the result of accessing the pod becomes unpredictable.
  • The IP address of the pod is allocated only after the pod is started. Before the pod is started, the IP address of the pod is unknown.
  • An application is usually composed of multiple pods that run the same image. Accessing pods one by one is not efficient.

For example, an application uses Deployments to create the frontend and backend. The frontend calls the backend for computing, as shown in Figure 1. Three pods are running in the backend, which are independent and replaceable. When a backend pod is re-created, the new pod is assigned a new IP address, of which the frontend pod is unaware.

Figure 1 Inter-pod access

Using Services for Pod Access

Kubernetes Services are used to solve the preceding pod access problems. A Service has a fixed IP address. (When a CCE cluster is created, a Service CIDR block is set, from which IP addresses are allocated to Services.) A Service forwards requests to pods based on labels and, at the same time, performs load balancing for these pods.

In the preceding example, a Service is added for the frontend pod to access the backend pods. In this way, the frontend pod does not need to be aware of any changes to the backend pods, as shown in Figure 2.

Figure 2 Accessing pods through a Service

Creating Backend Pods

Create a Deployment with three replicas, that is, three pods with the label app: nginx.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx              # Deployment name
spec:
  replicas: 3              # Number of pod replicas
  selector:                # Label selector, which must match the labels in the pod template
    matchLabels:
      app: nginx
  template:                # Pod template
    metadata:
      labels:
        app: nginx         # Label attached to the pods; a Service selects pods by this label
    spec:
      containers:
      - image: nginx:latest
        name: container-0
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
      imagePullSecrets:
      - name: default-secret
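
Save the definition to a file and create the Deployment with kubectl, then check the pods. The file name nginx-deployment.yaml below is only an example.

$ kubectl create -f nginx-deployment.yaml
deployment.apps/nginx created

$ kubectl get po -l app=nginx

Three pods with the label app: nginx should be listed in the Running state.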

Creating a Service

In the following example, we create a Service named nginx and use a selector to select pods with the label app: nginx. The port of the target pods is port 80, while the exposed port of the Service is port 8080.

The Service can be accessed using ServiceName:ExposedPort, which is nginx:8080 in this example. Other pods can then access the pods associated with nginx through nginx:8080.

apiVersion: v1
kind: Service
metadata:
  name: nginx        #Service name
spec:
  selector:          #Label selector, which selects pods with the label of app=nginx
    app: nginx
  ports:
  - name: service0
    targetPort: 80   #Pod port
    port: 8080       #Service external port
    protocol: TCP    #Forwarding protocol type. The value can be TCP or UDP.
  type: ClusterIP    #Service type

Save the Service definition to nginx-svc.yaml and use kubectl to create the Service.

$ kubectl create -f nginx-svc.yaml
service/nginx created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.247.0.1       <none>        443/TCP    7h19m
nginx        ClusterIP   10.247.124.252   <none>        8080/TCP   5h48m

You can see that the Service has a ClusterIP, which remains fixed until the Service is deleted. You can use this ClusterIP to access the Service inside the cluster.

Create a pod and, from it, use the ClusterIP to access the backend pods. Information similar to the following is returned.

$ kubectl run -i --tty --image nginx:alpine test --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # curl 10.247.124.252:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Using ServiceName to Access a Service

After DNS resolution, you can use ServiceName:Port to access a Service, which is the most common practice in Kubernetes. When a CCE cluster is created, the CoreDNS add-on is installed by default. You can view the CoreDNS pods in the kube-system namespace.

$ kubectl get po --namespace=kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-7689f8bdf-295rk                   1/1     Running   0          9m11s
coredns-7689f8bdf-h7n68                   1/1     Running   0          11m

After CoreDNS is installed, it serves as the cluster's DNS server. When a Service is created, CoreDNS records the Service name and IP address. In this way, a pod can obtain the IP address of a Service by querying the Service name from CoreDNS.

The Service can be accessed through nginx.<namespace>.svc.cluster.local, where nginx is the Service name, <namespace> is the namespace, and svc.cluster.local is the domain name suffix. Within the same namespace, you can omit .<namespace>.svc.cluster.local and use the Service name directly.
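
The suffix can be omitted because Kubernetes configures DNS search domains in each pod's /etc/resolv.conf. The following is a sketch of what this typically looks like for a pod in the default namespace; the nameserver address matches the CoreDNS Service IP shown in the nslookup output later in this section.

/ # cat /etc/resolv.conf
nameserver 10.247.3.10
search default.svc.cluster.local svc.cluster.local cluster.local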

For example, if the Service named nginx is created, you can access the Service through nginx:8080 and then access backend pods.
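
If the client pod is in a different namespace, include the namespace (or the full suffix) in the domain name. For example, assuming the nginx Service is in the default namespace:

/ # curl nginx.default:8080
/ # curl nginx.default.svc.cluster.local:8080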

An advantage of using ServiceName is that you can write the Service name into your application when developing it. In this way, you do not need to know the IP address of a specific Service.
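
For example, a frontend Deployment could pass the backend address to its containers through an environment variable instead of a hard-coded IP address. The following is a minimal sketch; the variable name BACKEND_URL and the frontend image are illustrative and not part of the preceding example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: container-0
        image: frontend:latest       # Illustrative image name
        env:
        - name: BACKEND_URL          # The application reads this variable
          value: http://nginx:8080   # Service name instead of a pod IP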

Now, create a pod for testing. In the pod, query the domain name of the nginx Service: the resolved IP address is 10.247.124.252, which is the ClusterIP of the Service. Then access the Service by its domain name. Information similar to the following is returned.

$ kubectl run -i --tty --image tutum/dnsutils dnsutils --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server:		10.247.3.10
Address:	10.247.3.10#53

Name:	nginx.default.svc.cluster.local
Address: 10.247.124.252

/ # curl nginx:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Using Services for Service Discovery

After a Service is deployed, it can always discover its backend pods, no matter how those pods change.

If you run the kubectl describe command to query the Service, information similar to the following is displayed:

$ kubectl describe svc nginx
Name:              nginx
......
Endpoints:         172.16.2.132:80,172.16.3.6:80,172.16.3.7:80
......

One Endpoints record is displayed. Endpoints is also a resource object in Kubernetes. Kubernetes tracks pod IP addresses through Endpoints so that a Service can discover its pods.

$ kubectl get endpoints
NAME         ENDPOINTS                                     AGE
kubernetes   192.168.0.127:5444                            7h19m
nginx        172.16.2.132:80,172.16.3.6:80,172.16.3.7:80   5h48m

In this example, 172.16.2.132:80 is the IP:port of a pod. You can run the following command to view the pod IP addresses, which match the endpoints above.

$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE         
nginx-869759589d-dnknn   1/1     Running   0          5h40m   172.16.3.7     192.168.0.212
nginx-869759589d-fcxhh   1/1     Running   0          5h40m   172.16.3.6     192.168.0.212
nginx-869759589d-r69kh   1/1     Running   0          5h40m   172.16.2.132   192.168.0.94

If a pod is deleted, the Deployment re-creates the pod and the IP address of the new pod changes.

$ kubectl delete po nginx-869759589d-dnknn
pod "nginx-869759589d-dnknn" deleted

$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE         
nginx-869759589d-fcxhh   1/1     Running   0          5h41m   172.16.3.6     192.168.0.212
nginx-869759589d-r69kh   1/1     Running   0          5h41m   172.16.2.132   192.168.0.94 
nginx-869759589d-w98wg   1/1     Running   0          7s      172.16.3.10    192.168.0.212

Check the endpoints again. You can see that the content under ENDPOINTS changes with the pod.

$ kubectl get endpoints
NAME         ENDPOINTS                                      AGE
kubernetes   192.168.0.127:5444                             7h20m
nginx        172.16.2.132:80,172.16.3.10:80,172.16.3.6:80   5h49m

Let's take a closer look at how this happens.

We have introduced kube-proxy on worker nodes in Figure 2. In fact, all Service-related forwarding is handled by kube-proxy. When a Service is created, Kubernetes allocates an IP address to it and, through the API server, notifies kube-proxy on all nodes of the Service creation. After receiving the notification, each kube-proxy records the mapping between the Service and its IP address/port pairs in iptables rules. In this way, the Service can be reached from every node.
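
You can inspect these rules directly on a worker node. The following sketch assumes kube-proxy runs in the default iptables mode and reuses the ClusterIP from this example; it lists the NAT rules that kube-proxy programmed for the nginx Service.

$ sudo iptables-save -t nat | grep 10.247.124.252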

The following figure shows how a Service is accessed. Pod X accesses the Service (10.247.124.252:8080). When pod X sends data packets, the destination IP:Port is replaced with the IP:Port of pod 1 based on the iptables rule. In this way, the real backend pod can be accessed through the Service.

In addition to recording the relationship between Services and IP address/port pairs, kube-proxy also monitors the changes of Services and endpoints to ensure that pods can be accessed through Services after pods are rebuilt.

Figure 3 Service access process

Service Types and Application Scenarios

Services of the ClusterIP, NodePort, LoadBalancer, and None types have different functions.

  • ClusterIP: used to make the Service only reachable from within a cluster.
  • NodePort: used for access from outside a cluster. A NodePort Service is accessed through the port on the node. For details, see NodePort Services.
  • LoadBalancer: used for access from outside a cluster. It is an extension of NodePort: a load balancer routes requests to the NodePort, so external systems only need to access the load balancer. For details, see LoadBalancer Services.
  • None: used for mutual discovery between pods. This type of Service is also called a headless Service. For details, see Headless Service.

NodePort Services

A NodePort Service reserves the same port on every node in a Kubernetes cluster. External systems access Node IP:Port, and the NodePort Service then forwards the requests to the backend pods of the Service.

Figure 4 NodePort Service

The following is an example of creating a NodePort Service. After the Service is created, you can access backend pods through IP:Port of the node.

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30120
  selector:
    app: nginx

Create and view the Service. The value of PORT(S) for the NodePort Service is 8080:30120/TCP, indicating that port 8080 of the Service is mapped to port 30120 of the node.

$ kubectl create -f nodeport.yaml 
service/nodeport-service created

$ kubectl get svc -o wide
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE    SELECTOR
kubernetes         ClusterIP   10.247.0.1       <none>        443/TCP          107m   <none>
nginx              ClusterIP   10.247.124.252   <none>        8080/TCP         16m    app=nginx
nodeport-service   NodePort    10.247.210.174   <none>        8080:30120/TCP   17s    app=nginx

Access the Service by using Node IP:Port to reach the backend pod.

$ kubectl run -i --tty --image nginx:alpine test --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # curl 192.168.0.212:30120
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......

LoadBalancer Services

A LoadBalancer Service is exposed externally through a load balancer, which forwards requests to the NodePort on the nodes.

The load balancer itself is not a Kubernetes component, and different cloud service providers offer different load balancers. For example, CCE interconnects with Elastic Load Balance (ELB) to provide load balancers. As a result, the way a LoadBalancer Service is created differs between providers.

Figure 5 LoadBalancer Service

The following is an example of creating a LoadBalancer Service. After the LoadBalancer Service is created, you can access backend pods through IP:Port of the load balancer.

apiVersion: v1 
kind: Service 
metadata: 
  annotations:   
    kubernetes.io/elb.id: 3c7caa5a-a641-4bff-801a-feace27424b6
  labels: 
    app: nginx 
  name: nginx 
spec: 
  loadBalancerIP: 10.78.42.242     # IP address of the load balancer
  ports: 
  - name: service0 
    port: 80
    protocol: TCP 
    targetPort: 80
    nodePort: 30120
  selector: 
    app: nginx 
  type: LoadBalancer    # Service type (LoadBalancer)

The parameters in annotations under metadata are required for CCE LoadBalancer Services. They specify the load balancer to which the Service is bound. CCE also allows you to create a load balancer when creating a LoadBalancer Service. For details, see LoadBalancer.
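
Create and view the Service in the same way as other Service types. The following is a sketch; the file name is illustrative, the CLUSTER-IP shown is a placeholder, and EXTERNAL-IP displays the load balancer address specified by loadBalancerIP.

$ kubectl create -f nginx-elb-svc.yaml
service/nginx created

$ kubectl get svc nginx
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
nginx   LoadBalancer   10.247.x.x   10.78.42.242   80:30120/TCP   9s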

Headless Service

The preceding types of Services allow access to pods from inside and outside a cluster, but they do not support the following scenarios:

  • Accessing all pods at the same time
  • Pods in a Service accessing each other

This is where headless Services come in. A headless Service does not create a cluster IP address. Instead, a DNS query returns the records of all backend pods, so the IP addresses of all pods can be obtained. StatefulSets use headless Services to support mutual access between their pods.

apiVersion: v1
kind: Service       # Object type (Service)
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - name: nginx     # Name of the port for communication between pods
      port: 80        # Port number for communication between pods
  selector:
    app: nginx        # Select the pod whose label is app:nginx.
  clusterIP: None     # Set this parameter to None, indicating the headless Service.

Run the following command to create a headless Service:

# kubectl create -f headless.yaml 
service/nginx-headless created

After the Service is created, you can query the Service.

# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx-headless   ClusterIP   None         <none>        80/TCP    5s

Create a pod to query the DNS. You can view the DNS record of each pod, and in this way every pod can be accessed.

$ kubectl run -i --tty --image tutum/dnsutils dnsutils --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx-0.nginx
Server:         10.247.3.10
Address:        10.247.3.10#53
Name:   nginx-0.nginx.default.svc.cluster.local
Address: 172.16.0.31

/ # nslookup nginx-1.nginx
Server:         10.247.3.10
Address:        10.247.3.10#53
Name:   nginx-1.nginx.default.svc.cluster.local
Address: 172.16.0.18

/ # nslookup nginx-2.nginx
Server:         10.247.3.10
Address:        10.247.3.10#53
Name:   nginx-2.nginx.default.svc.cluster.local
Address: 172.16.0.19
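
You can also query the headless Service name itself. DNS then returns one record per backend pod, which is how all pods can be reached at the same time. The following is a sketch; the addresses reuse those from the per-pod queries above.

/ # nslookup nginx-headless
Server:         10.247.3.10
Address:        10.247.3.10#53
Name:   nginx-headless.default.svc.cluster.local
Address: 172.16.0.31
Name:   nginx-headless.default.svc.cluster.local
Address: 172.16.0.18
Name:   nginx-headless.default.svc.cluster.local
Address: 172.16.0.19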