Implementing High Availability for Containers
Basic Rules
To achieve high availability for the containers in a CCE cluster, you can:
- Deploy three master nodes for the cluster.
- Select different AZs for the nodes and customize scheduling policies to maximize resource utilization.
- Create multiple node pools in different AZs and use them for node scaling.
- Set the number of pods to at least 2 when creating a workload.
- Configure pod affinity rules to distribute pods to different AZs and nodes.
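The last rule can also be expressed with topology spread constraints, which Kubernetes supports as of v1.19 (so the v1.21 cluster in this example qualifies). The following fragment is a minimal sketch of a pod spec addition, not part of the procedure below:

```yaml
      # Sketch: spread the workload's pods evenly across AZs.
      # maxSkew: 1 means the pod counts of any two AZs may differ by at most 1.
      # ScheduleAnyway makes this a soft rule, comparable to the preferred
      # anti-affinity rules used in the procedure below.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: nginx
```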
Procedure
Assume that a cluster has four nodes spread across three AZs: two in zone01 and one each in zone02 and zone03.
$ kubectl get node -L topology.kubernetes.io/zone,kubernetes.io/hostname
NAME            STATUS   ROLES    AGE   VERSION                      ZONE     HOSTNAME
192.168.5.112   Ready    <none>   42m   v1.21.7-r0-CCE21.11.1.B007   zone01   192.168.5.112
192.168.5.179   Ready    <none>   42m   v1.21.7-r0-CCE21.11.1.B007   zone01   192.168.5.179
192.168.5.252   Ready    <none>   37m   v1.21.7-r0-CCE21.11.1.B007   zone02   192.168.5.252
192.168.5.8     Ready    <none>   33h   v1.21.7-r0-CCE21.11.1.B007   zone03   192.168.5.8
Create a workload based on the following pod anti-affinity rules:
- Anti-affinity in an AZ. Configure the parameters as follows:
  - weight: A larger weight value indicates a higher scheduling priority. In this example, set it to 50.
  - topologyKey: specifies a default or custom node label that defines a topology domain for scheduling. In this example, set it to topology.kubernetes.io/zone, which identifies the AZ where a node is located.
  - labelSelector: selects the label of the pods that this rule applies anti-affinity to. In this example, select the workload's own pod label (app: nginx) so that its pods repel each other.
- Anti-affinity in the node topology domain. Configure the parameters as follows:
  - weight: Set it to 50.
  - topologyKey: Set it to kubernetes.io/hostname, which identifies the node where a pod is located.
  - labelSelector: Select the workload's own pod label, as described above.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-0
          image: nginx:alpine
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50
              podAffinityTerm:
                labelSelector:        # Select the workload's own pod label so that its pods repel each other.
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - nginx
                namespaces:
                  - default
                topologyKey: topology.kubernetes.io/zone   # The rule takes effect within an AZ.
            - weight: 50
              podAffinityTerm:
                labelSelector:        # Select the workload's own pod label so that its pods repel each other.
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - nginx
                namespaces:
                  - default
                topologyKey: kubernetes.io/hostname        # The rule takes effect on each node.
      imagePullSecrets:
        - name: default-secret

Create the workload and view the nodes where the workload pods are located.
$ kubectl get pod -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-6fffd8d664-dpwbk   1/1     Running   0          17s   10.0.0.132   192.168.5.112
nginx-6fffd8d664-qhclc   1/1     Running   0          17s   10.0.1.133   192.168.5.252
Increase the number of pods to 3. You can see that the newly added pod is scheduled to another node, and the three nodes where the workload pods run are in three different AZs.
$ kubectl scale --replicas=3 deploy/nginx
deployment.apps/nginx scaled
$ kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-6fffd8d664-8t7rv 1/1 Running 0 3s 10.0.0.9 192.168.5.8
nginx-6fffd8d664-dpwbk 1/1 Running 0 2m45s 10.0.0.132 192.168.5.112
nginx-6fffd8d664-qhclc   1/1     Running   0          2m45s   10.0.1.133   192.168.5.252

Increase the number of pods to 4. You can see that the newly added pod is scheduled to another node. With anti-affinity rules, pods can be evenly distributed to different AZs and nodes.
$ kubectl scale --replicas=4 deploy/nginx
deployment.apps/nginx scaled
$ kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-6fffd8d664-8t7rv 1/1 Running 0 2m30s 10.0.0.9 192.168.5.8
nginx-6fffd8d664-dpwbk 1/1 Running 0 5m12s 10.0.0.132 192.168.5.112
nginx-6fffd8d664-h796b 1/1 Running 0 78s 10.0.1.5 192.168.5.179
nginx-6fffd8d664-qhclc   1/1     Running   0          5m12s   10.0.1.133   192.168.5.252
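The rules in this example are preferred (soft): if no node can satisfy them, extra pods are still scheduled and may share an AZ or a node. If pods must never share a node, the hard variant, requiredDuringSchedulingIgnoredDuringExecution, could be used instead. A minimal sketch of the affinity section follows; note that with this hard rule, a fifth replica would stay Pending in a four-node cluster:

```yaml
      affinity:
        podAntiAffinity:
          # Hard rule: a pod is scheduled only onto a node that runs no
          # other pod labeled app=nginx. Required rules have no weight field.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              namespaces:
                - default
              topologyKey: kubernetes.io/hostname
```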

