Deploying Storage Volumes in Multiple AZs
Application Scenarios
- Deploying services in specific AZs within a cluster that has nodes running in multiple AZs
- Using multi-AZ deployment to prevent faults caused by insufficient resources in a single AZ
Deploying storage volumes across multiple AZs reduces application interruption during rollouts and keeps key systems and applications stable if a fault occurs.
Prerequisites
- You have created a cluster of v1.21 or later with the CCE Container Storage (Everest) add-on installed. If no cluster is available, create one by referring to Buying a CCE Standard/Turbo Cluster.
- The cluster must have nodes in at least three different AZs. If it does not, create nodes or node pools in the AZs that currently have none, as shown in the check below.
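To check how the current nodes are distributed across AZs, you can list the zone label of each node. This is a minimal check that assumes the nodes carry the standard topology.kubernetes.io/zone label (older clusters may expose the legacy failure-domain.beta.kubernetes.io/zone label instead).
kubectl get node -L topology.kubernetes.io/zone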
Procedure
- Use kubectl to access the cluster. For details, see Connecting to a Cluster Using kubectl.
- Create a StorageClass YAML file.
vi storageclass.yaml
Enter the following content in the storageclass.yaml file: (The following shows only a template for StorageClass configuration. You can modify it as required.)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-disk-topology-alltype
provisioner: everest-csi-provisioner
parameters:
  csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io
  csi.storage.k8s.io/fstype: ext4
  everest.io/disk-volume-type: SAS    # A high I/O EVS disk
  everest.io/passthrough: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Table 1 StorageClass parameters
provisioner
Specifies the storage resource provider, which is the Everest add-on for CCE. Set this parameter to everest-csi-provisioner.
parameters
Specifies the storage parameters, which vary with storage types.
NOTICE: everest.io/disk-volume-type indicates the cloud disk type, which can be any of the following:
- SAS: high I/O
- SSD: ultra-high I/O
- GPSSD: general purpose SSD
- ESSD: extreme SSD
- GPSSD2: general purpose SSD v2, which is supported when the Everest version is 2.4.4 or later and the everest.io/disk-iops and everest.io/disk-throughput annotations are configured (a GPSSD2 sketch follows this table).
- ESSD2: extreme SSD v2, which is supported when the Everest version is 2.4.4 or later and the everest.io/disk-iops annotation is configured.
reclaimPolicy
Specifies the value of persistentVolumeReclaimPolicy for creating a PV. The value can be Delete or Retain. If reclaimPolicy is not specified when a StorageClass object is created, the value defaults to Delete.
- Delete: indicates that a dynamically provisioned PV will be automatically deleted when the PVC is deleted.
- Retain: indicates that a dynamically provisioned PV will be retained when the PVC is deleted.
allowVolumeExpansion
Specifies whether PVs of this StorageClass support dynamic capacity expansion. The default value is false. This parameter only enables the feature; the expansion itself is performed by the underlying storage add-on.
volumeBindingMode
Specifies when a PV is dynamically provisioned. The value can be Immediate or WaitForFirstConsumer.
- Immediate: The PV is dynamically provisioned when a PVC is created.
- WaitForFirstConsumer: The PV is dynamically provisioned only after a workload that uses the PVC is created. This allows the volume to be provisioned in the AZ where the pod is scheduled.
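For reference, the following sketch shows one way a GPSSD2 volume might be requested, based on the annotations listed in Table 1. The StorageClass name test-disk-topology-gpssd2 is hypothetical (a copy of the template above with everest.io/disk-volume-type set to GPSSD2), and the IOPS and throughput values are placeholders. Confirm the supported value ranges and whether your Everest version expects these settings as PVC annotations or StorageClass parameters before using it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-gpssd2-demo
  annotations:
    everest.io/disk-iops: "3000"          # Placeholder IOPS value for a GPSSD2 disk
    everest.io/disk-throughput: "125"     # Placeholder throughput value for a GPSSD2 disk
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: test-disk-topology-gpssd2    # Hypothetical StorageClass with everest.io/disk-volume-type: GPSSD2
  resources:
    requests:
      storage: 40Gi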
- Create the StorageClass.
kubectl create -f storageclass.yaml
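You can check that the StorageClass exists and that its VOLUMEBINDINGMODE is WaitForFirstConsumer:
kubectl get sc test-disk-topology-alltype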
- Create a StatefulSet YAML file.
vi statefulset.yaml
Enter the following content in the statefulset.yaml file: (The following shows only a template for the standard StatefulSet configuration. You can customize it as required.)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  replicas: 3
  serviceName: "nginx"
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
      - labelSelector:    # Used to search for matched pods. The pods that match this label selector are counted to determine the number of pods in the corresponding topology domain.
          matchLabels:
            app: nginx
        maxSkew: 1    # Maximum difference between the numbers of matched pods in any two topology domains in a given topology type.
        topologyKey: topology.kubernetes.io/zone    # Key of a node label
        whenUnsatisfiable: DoNotSchedule    # How the scheduler processes pods when the pods do not meet the spread constraints
      containers:
      - image: nginx:latest
        name: nginx
        env:
        - name: NGINX_ROOT_PASSWORD
          value: "nginx"
        volumeMounts:
        - name: disk-csi
          mountPath: /var/lib/nginx
      imagePullSecrets:
      - name: default-secret
      tolerations:
      - key: "app"
        operator: "Exists"
        effect: "NoSchedule"
  volumeClaimTemplates:    # EVS disks are automatically created based on the specified number of replicas for quick expansion.
  - metadata:
      name: disk-csi
    spec:
      accessModes: [ "ReadWriteOnce" ]    # EVS disks can be mounted to and accessed by only a single node in read/write mode (ReadWriteOnce).
      storageClassName: test-disk-topology-alltype
      resources:
        requests:
          storage: 40Gi
Table 2 StatefulSet parameters
topologySpreadConstraints
Specifies the topology spread constraints, which are used to control how pods are spread across a cluster among topology domains, such as regions, AZs, nodes, and other custom topology domains. For details, see Pod Topology Spread Constraints.
topologySpreadConstraints.labelSelector
Used to search for matched pods. The number of pods that match this label selector is counted to determine the number of pods in the corresponding topology domain.
topologySpreadConstraints.maxSkew
Specifies the maximum difference between the numbers of matched pods in any two topology domains in a given topology type. The value must be greater than 0 and is used to indicate how much uneven distribution of pods is allowed.
topologySpreadConstraints.topologyKey
Specifies the key of a node label. If two nodes use this key and have the same label value, the scheduler treats them as being in the same topology domain and tries to schedule an equal number of pods to each domain.
topologySpreadConstraints.whenUnsatisfiable
Specifies how the scheduler processes pods when the pods do not meet the spread constraints. The value can be:
- DoNotSchedule (default): If a pod does not meet the spread constraints, it will not be scheduled.
- ScheduleAnyway: If a pod does not meet the spread constraints, the scheduler still schedules it, preferring nodes that minimize the skew. (A fragment using ScheduleAnyway is sketched after this table.)
volumeClaimTemplates
Specifies that EVS disks are automatically created based on the specified number of replicas for quick expansion.
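The fragment below sketches the relaxed variant referenced in Table 2, where whenUnsatisfiable is set to ScheduleAnyway so that spreading is best-effort rather than a hard scheduling requirement. Only the topologySpreadConstraints part changes; the rest of the StatefulSet stays the same as in the template above.
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: nginx
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway    # Best-effort spreading: pods are still scheduled even if the constraint cannot be met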
- Create the StatefulSet.
kubectl create -f statefulset.yaml
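Check that all three replicas are created and running before moving on to verification:
kubectl get pod -l app=nginx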
Verification
The following shows how to verify that the dynamically created PVs, like their pods, are distributed across different AZs.
- View the new PVs.
kubectl get pv
The command output is as follows: (The first three PVs are dynamically created with the pods.)
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS                 VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-699eda75   40Gi       RWO            Delete           Bound    default/disk-csi-nginx-0   test-disk-topology-alltype   <unset>                          132m
pvc-6c68f5a7   40Gi       RWO            Delete           Bound    default/disk-csi-nginx-1   test-disk-topology-alltype   <unset>                          131m
pvc-8f74ce3a   40Gi       RWO            Delete           Bound    default/disk-csi-nginx-2   test-disk-topology-alltype   <unset>                          131m
pvc-f738f8aa   10Gi       RWO            Delete           Bound    default/pvc                csi-disk                     <unset>                          6d4h
- Check the AZs where the PVs are located based on the PV names.
kubectl describe pv pvc-699eda75 pvc-6c68f5a7 pvc-8f74ce3a | grep zone
The command output is as follows: (The three PVs are in different AZs to enable multi-AZ deployment of storage volumes.)
Labels:    failure-domain.beta.kubernetes.io/zone=cn-east-3d
    Term 0:    failure-domain.beta.kubernetes.io/zone in [cn-east-3d]
Labels:    failure-domain.beta.kubernetes.io/zone=cn-east-3b
    Term 0:    failure-domain.beta.kubernetes.io/zone in [cn-east-3b]
Labels:    failure-domain.beta.kubernetes.io/zone=cn-east-3c
    Term 0:    failure-domain.beta.kubernetes.io/zone in [cn-east-3c]
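As an optional cross-check, you can compare the node each pod is scheduled to (NODE column) with that node's AZ label to confirm that every replica runs in the same AZ as its EVS disk:
kubectl get pod -l app=nginx -o wide
kubectl get node -L topology.kubernetes.io/zone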