Services

Direct Access to a Pod

After a pod is created, accessing it directly can result in certain problems:

  • The pod can be deleted and recreated at any time by a controller such as a Deployment. If the pod is recreated, access to it may fail.
  • An IP address is not assigned to a pod until the pod starts, so the pod's IP address cannot be known in advance.
  • Applications usually run on multiple pods that use the same image. Accessing pods one by one is not efficient.

For example, Deployments are used to deploy the frontend and backend of an application. The frontend calls the backend for computing, as shown in Figure 1. Three pods are running in the backend, and they are independent and replaceable. When a backend pod is recreated, the new pod is assigned a new IP address, but the frontend pod is unaware of this change.

Figure 1 Inter-pod access

Using Services for Pod Access

Kubernetes Services are used to solve the preceding pod access problems. A Service has a fixed IP address. (When you create a CCE cluster, you need to specify a Service CIDR block, which is used to allocate IP addresses to Services.) A Service distributes requests across pods based on labels and balances the loads for these pods.

In the preceding example, a Service is created for the frontend pod to access the backend pods. In this way, the frontend pod does not need to be aware of the changes on backend pods, as shown in Figure 2.

Figure 2 Accessing pods through a Service

Creating Backend Pods

Create a Deployment with three replicas (three pods), each labeled app: nginx.

apiVersion: apps/v1      
kind: Deployment         
metadata:
  name: nginx            
spec:
  replicas: 3                    
  selector:              
    matchLabels:
      app: nginx
  template:              
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: container-0
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
      imagePullSecrets:
      - name: default-secret
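
If you save this definition to a file, for example nginx-deployment.yaml (the file name is arbitrary), you can create the Deployment with kubectl and check that the three pods are running. The pod names in the output contain generated suffixes and will differ in your cluster:

$ kubectl create -f nginx-deployment.yaml
deployment.apps/nginx created

$ kubectl get pods -l app=nginx
NAME                     READY   STATUS    RESTARTS   AGE
nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          15s
nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          15s
nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          15s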

Creating a Service

In the following example, a Service named nginx is created. Its selector matches the pods with the label app: nginx. The pods listen on port 80, while the Service access port is 8080.

The Service can be accessed through <Service-name>:<Service-access-port>, which is nginx:8080 in this example. Other pods in the cluster can then reach the pods associated with the nginx Service at nginx:8080.

apiVersion: v1
kind: Service
metadata:
  name: nginx        # Service name
spec:
  selector:          # Label selector, which selects pods with the label app: nginx
    app: nginx
  ports:
  - name: service0
    targetPort: 80   # Pod port
    port: 8080       # Service access port
    protocol: TCP    # Forwarding protocol. The value can be TCP or UDP.
  type: ClusterIP    # Service type

Save the Service definition to nginx-svc.yaml and use kubectl to create the Service.

$ kubectl create -f nginx-svc.yaml
service/nginx created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.247.0.1       <none>        443/TCP    7h19m
nginx        ClusterIP   10.247.124.252   <none>        8080/TCP   5h48m

You can see that this is a ClusterIP Service. Its cluster IP address remains fixed for as long as the Service exists, and you can use this IP address to access the Service from within the cluster.

Create a pod and use the cluster IP address to access the Service. Information similar to the following is returned:

$ kubectl run -i --tty --image nginx:alpine test --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # curl 10.247.124.252:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Using the Service Name to Access a Service

With DNS, you can access a Service through <Service-name>:<port>, which is the most common practice in Kubernetes. When you create a CCE cluster, you are required to install the CoreDNS add-on. You can view the CoreDNS pods in the kube-system namespace.

$ kubectl get po --namespace=kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-7689f8bdf-295rk                   1/1     Running   0          9m11s
coredns-7689f8bdf-h7n68                   1/1     Running   0          11m

After the add-on is installed, CoreDNS serves as the cluster's DNS server. When a Service is created, CoreDNS records its name and IP address, so pods can obtain the Service IP address by querying CoreDNS for the Service name.
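
You can see how a pod reaches CoreDNS by checking the /etc/resolv.conf file inside a container. The output below is only indicative; the nameserver is the cluster DNS address, and the exact values depend on your cluster:

$ kubectl run -i --tty --image nginx:alpine test --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.247.3.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5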

In this example, nginx.<namespace>.svc.cluster.local is used to access the Service, where nginx is the Service name, <namespace> is the namespace, and svc.cluster.local is the domain name suffix. Within the same namespace, you can omit the .<namespace>.svc.cluster.local suffix and use only the Service name.

For example, you can access the Service named nginx through nginx:8080 and then access backend pods.
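
From a different namespace, the short name alone does not point to this Service, so the namespace-qualified name is used instead. A brief sketch, assuming a pod running in a hypothetical namespace named test:

/ # curl nginx.default:8080                      # <Service-name>.<namespace>
/ # curl nginx.default.svc.cluster.local:8080    # Fully qualified domain name

Both commands return the Nginx welcome page.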

An advantage of using the Service name is that you can write it directly into your application code during development, so the application does not need to know the Service IP address in advance.
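
For example, a frontend Deployment might pass the backend address to its application through an environment variable. The following is only a sketch; the frontend image and the variable name are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                    # Placeholder frontend Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: container-0
        image: frontend:latest      # Placeholder image
        env:
        - name: BACKEND_URL         # Hypothetical variable read by the application
          value: http://nginx:8080  # Service name and access port instead of a pod IP address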

Create a pod and enter the container. Then run the nslookup command to query the domain name resolution result. The command output shows that the domain name of the Service is nginx.default.svc.cluster.local, and the resolved IP address is 10.247.124.252. Run the curl nginx:8080 command to access the Service. If the page content is returned, the Service can be accessed.

$ kubectl run -i --tty --image tutum/dnsutils dnsutils --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server:		10.247.3.10
Address:	10.247.3.10#53

Name:	nginx.default.svc.cluster.local
Address: 10.247.124.252

/ # curl nginx:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Using Services for Service Discovery

After a Service is deployed, it continues to discover its backend pods no matter how those pods change.

If you run the kubectl describe command to query the Service, information similar to the following is displayed:

$ kubectl describe svc nginx
Name:              nginx
......
Endpoints:         172.16.2.132:80,172.16.3.6:80,172.16.3.7:80
......

An Endpoints record is displayed. Endpoints is a resource object in Kubernetes through which Kubernetes tracks pod IP addresses, so that a Service can discover its pods.

$ kubectl get endpoints
NAME         ENDPOINTS                                     AGE
nginx        172.16.2.132:80,172.16.3.6:80,172.16.3.7:80   5h48m

In this example, 172.16.2.132:80, 172.16.3.6:80, and 172.16.3.7:80 are the IP addresses and ports of pods. You can run the following command to view the IP addresses of the pods, which are the same as the preceding IP addresses:

$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE         
nginx-869759589d-dnknn   1/1     Running   0          5h40m   172.16.3.7     192.168.0.212
nginx-869759589d-fcxhh   1/1     Running   0          5h40m   172.16.3.6     192.168.0.212
nginx-869759589d-r69kh   1/1     Running   0          5h40m   172.16.2.132   192.168.0.94

If a pod is deleted, the Deployment recreates the pod, and a new IP address will be assigned to the new pod.

$ kubectl delete po nginx-869759589d-dnknn
pod "nginx-869759589d-dnknn" deleted

$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE         
nginx-869759589d-fcxhh   1/1     Running   0          5h41m   172.16.3.6     192.168.0.212
nginx-869759589d-r69kh   1/1     Running   0          5h41m   172.16.2.132   192.168.0.94 
nginx-869759589d-w98wg   1/1     Running   0          7s      172.16.3.10    192.168.0.212

Check the endpoints again. You can see that the ENDPOINTS column has been updated to reflect the new pod.

$ kubectl get endpoints
NAME         ENDPOINTS                                      AGE
kubernetes   192.168.0.127:5444                             7h20m
nginx        172.16.2.132:80,172.16.3.10:80,172.16.3.6:80   5h49m

Let's take a closer look at how this happens.

The section Kubernetes Cluster Architecture introduced kube-proxy, which runs on worker nodes and performs all Service-related forwarding operations. When a Service is created, Kubernetes allocates an IP address to it and notifies kube-proxy on every worker node through the API server. After receiving the notification, each kube-proxy records the Service IP address and port in iptables rules, so the Service can be reached from every node.

The figure below shows how a Service is accessed. When pod X accesses the Service (10.247.124.252:8080), the destination IP address and port are replaced with the IP address and port of pod 1 based on the iptables rule. In this way, the real backend pod can be accessed through the Service.

In addition to recording the IP address and port of a Service, kube-proxy watches for changes to Services and their endpoints to ensure that pods can still be accessed through the Service after they are rebuilt.

Figure 3 Service access process
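
If you want to see these rules yourself, you can inspect the NAT table on a worker node. This is only a sketch, assuming kube-proxy runs in iptables mode; the generated chain names (KUBE-SVC-..., KUBE-SEP-...) differ in every cluster, and in IPVS mode ipvsadm is used instead:

# On a worker node, list the NAT rules kube-proxy created for the Service IP address.
$ sudo iptables-save -t nat | grep 10.247.124.252

# In IPVS mode, list the virtual server and its backend pod addresses instead.
$ sudo ipvsadm -Ln | grep -A3 10.247.124.252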

Service Types and Application Scenarios

There are several types of Services: ClusterIP, NodePort, LoadBalancer, and Headless Service. Different types of Services offer different functions.

  • ClusterIP: used for access from within a cluster. The Service is reachable only from inside the cluster.
  • NodePort: used for access from outside a cluster. A NodePort Service is accessed through the port on the node. For details, see NodePort Services.
  • LoadBalancer: used for access from outside a cluster. It is an extension of NodePort, and an external load balancer is used for external systems to access the backend pods. For details, see LoadBalancer Services.
  • Headless Service: used by pods to discover each other. No separate cluster IP address will be allocated to this type of Service, and the cluster will not balance loads or perform routing for it. You can create a headless Service by setting spec.clusterIP to None. For details, see Headless Services.

NodePort Services

A NodePort Service opens the same port on every node in a Kubernetes cluster. External systems access the Service through <node-IP-address>:<node-port>, and the Service then forwards the requests to its associated pods.

Figure 4 A NodePort Service

Below is an example NodePort Service. After it is created, you can access the backend pods through <node-IP-address>:<node-port>.

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30120
  selector:
    app: nginx

Create and view the Service. The value of PORT(S) for the NodePort Service is 8080:30120/TCP, indicating that port 8080 of the Service is mapped to port 30120 of the node.

$ kubectl create -f nodeport.yaml 
service/nodeport-service created

$ kubectl get svc -o wide
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE    SELECTOR
kubernetes         ClusterIP   10.247.0.1       <none>        443/TCP          107m   <none>
nginx              ClusterIP   10.247.124.252   <none>        8080/TCP         16m    app=nginx
nodeport-service   NodePort    10.247.210.174   <none>        8080:30120/TCP   17s    app=nginx

You can now access the pods through <node-IP-address>:<node-port>.

$ kubectl run -i --tty --image nginx:alpine test --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # curl 192.168.0.212:30120
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......

LoadBalancer Services

A LoadBalancer Service is exposed externally using a load balancer that forwards requests to a port on the node.

Kubernetes itself does not provide a load balancing component. Instead, your cluster is interconnected with a cloud provider's load balancer. Because providers offer different load balancers (for example, CCE interconnects with Elastic Load Balance (ELB)), the implementation of LoadBalancer Services varies across providers.

Figure 5 A LoadBalancer Service

The following is an example LoadBalancer Service. After this Service is created, you can access the backend pods through <load-balancer-IP-address>:<load-balancer-listening-port>.

apiVersion: v1 
kind: Service 
metadata: 
  annotations:   
    kubernetes.io/elb.id: 3c7caa5a-a641-4bff-801a-feace27424b6
  labels: 
    app: nginx 
  name: nginx 
spec: 
  loadBalancerIP: 10.78.42.242     # IP address of the load balancer
  ports: 
  - name: service0 
    port: 80
    protocol: TCP 
    targetPort: 80
    nodePort: 30120
  selector: 
    app: nginx 
  type: LoadBalancer    # Service type. This is a LoadBalancer Service.

The parameters in annotations under metadata are required for CCE LoadBalancer Services. They specify the load balancer that a Service is associated with. When creating a LoadBalancer Service on the CCE console, you can also create a load balancer for the Service. For details, see LoadBalancer.
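
After the Service is created, the load balancer address appears in the EXTERNAL-IP column. The file name and the output below are only illustrative (note that this Service reuses the name nginx, so the earlier ClusterIP Service of the same name must be deleted first):

$ kubectl create -f loadbalancer.yaml
service/nginx created

$ kubectl get svc nginx
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
nginx   LoadBalancer   10.247.xxx.xxx   10.78.42.242   80:30120/TCP   12s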

Headless Services

A Service allows clients inside or outside the cluster to access its associated pods. However, the following problems remain:

  • Accessing all pods at the same time
  • Allowing pods associated with a Service to access each other

Kubernetes provides headless Services to solve these problems. When a client queries DNS for an ordinary Service, only the Service's cluster IP address is returned, and the cluster forwarding rules (IPVS or iptables) decide which pod actually receives the request. A headless Service is not allocated a separate cluster IP address; a DNS query instead returns the records of all its pods, so the IP address of each pod can be obtained. StatefulSets use headless Services for mutual access between pods.

apiVersion: v1
kind: Service       # Object type. This is a Service.
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - name: nginx     # Name of the port for communications between pods
      port: 80        # Port number for communications between pods
  selector:
    app: nginx        # Select the pods labeled with app: nginx.
  clusterIP: None     # Set this parameter to None, indicating that a headless Service will be created.

Run the following command to create a headless Service:

$ kubectl create -f headless.yaml
service/nginx-headless created

After the Service is created, you can query the Service.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx-headless   ClusterIP   None         <none>        80/TCP    5s

Create a pod and query the DNS records. The records of all pods are returned, so each pod can be accessed individually.

$ kubectl run -i --tty --image tutum/dnsutils dnsutils --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx-headless
Server:         10.247.3.10
Address:        10.247.3.10#53

Name:   nginx-headless.default.svc.cluster.local
Address: 172.16.0.31
Name:   nginx-headless.default.svc.cluster.local
Address: 172.16.0.18
Name:   nginx-headless.default.svc.cluster.local
Address: 172.16.0.19
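
As mentioned above, StatefulSets typically rely on a headless Service such as this one. The following is only a sketch (the StatefulSet name is a placeholder): by setting serviceName to nginx-headless, each pod gets a stable DNS name such as web-0.nginx-headless.default.svc.cluster.local that other pods can use to reach it directly.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                      # Placeholder StatefulSet name
spec:
  serviceName: nginx-headless    # Headless Service that provides per-pod DNS records
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:latest
      imagePullSecrets:
      - name: default-secret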