Deploying Storage Volumes in Multiple AZs

Updated on 2024-12-28 GMT+08:00

Application Scenarios

  • Deploying services in specific AZs in a cluster whose nodes run in multiple AZs
  • Using multi-AZ deployment to prevent faults caused by insufficient resources in a single AZ

Deploying storage volumes in multiple AZs reduces application interruptions during rollout and ensures the stability of key systems and applications in case of any faults.

Prerequisites

  • You have created a cluster of v1.21 or later with the CCE Container Storage (Everest) add-on installed. If no cluster is available, create one by referring to Buying a CCE Standard/Turbo Cluster.
  • The cluster has nodes in at least three different AZs. If it does not, create nodes or node pools in the AZs that are not yet covered. You can check the AZ of each node with the command shown below.
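
The following command lists the nodes together with their zone label (this assumes the nodes carry the standard topology.kubernetes.io/zone label, which nodes deployed in AZs typically do):

    kubectl get nodes -L topology.kubernetes.io/zone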

Procedure

  1. Use kubectl to access the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Create a StorageClass YAML file.

    vi storageclass.yaml

    Enter the following content in the storageclass.yaml file. (This is only a template StorageClass configuration. Modify it as required.)

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: test-disk-topology-alltype
    provisioner: everest-csi-provisioner
    parameters:
      csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io
      csi.storage.k8s.io/fstype: ext4
      everest.io/disk-volume-type: SAS    # High I/O EVS disk
      everest.io/passthrough: "true"
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    Table 1 StorageClass parameters

    provisioner
      Specifies the storage resource provider, which is the Everest add-on for CCE. Set this parameter to everest-csi-provisioner.

    parameters
      Specifies the storage parameters, which vary with the storage type.
      NOTICE: everest.io/disk-volume-type indicates the EVS disk type, which can be any of the following:
      • SAS: high I/O
      • SSD: ultra-high I/O
      • GPSSD: general purpose SSD
      • ESSD: extreme SSD
      • GPSSD2: general purpose SSD v2, supported when the Everest version is 2.4.4 or later and the everest.io/disk-iops and everest.io/disk-throughput annotations are configured
      • ESSD2: extreme SSD v2, supported when the Everest version is 2.4.4 or later and the everest.io/disk-iops annotation is configured

    reclaimPolicy
      Specifies the persistentVolumeReclaimPolicy used for the PVs created from this StorageClass. The value can be Delete or Retain. If reclaimPolicy is not specified when the StorageClass is created, it defaults to Delete.
      • Delete: A dynamically provisioned PV is automatically deleted when its PVC is deleted.
      • Retain: A dynamically provisioned PV is retained when its PVC is deleted.

    allowVolumeExpansion
      Specifies whether PVs of this StorageClass support dynamic capacity expansion. The default value is false. This parameter is only a switch; expansion itself is implemented by the underlying storage add-on.

    volumeBindingMode
      Specifies when a PV is dynamically provisioned. The value can be Immediate or WaitForFirstConsumer.
      • Immediate: The PV is provisioned and bound as soon as the PVC is created.
      • WaitForFirstConsumer: Provisioning and binding are delayed until a pod that uses the PVC is scheduled, so the EVS disk is created in the AZ of the node that runs the pod. This is what allows the disks to follow the pods' topology spread. (See the example PVC after this table.)
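
    For reference, a standalone PVC that uses this StorageClass might look like the following (a minimal sketch; the PVC name and size are illustrative). Because volumeBindingMode is WaitForFirstConsumer, the EVS disk is not created when this PVC is created, but only after a pod that mounts it is scheduled, so the disk ends up in the AZ of that pod's node.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc    # Hypothetical name for illustration
    spec:
      accessModes: [ "ReadWriteOnce" ]    # EVS disks support only single-node read/write access
      storageClassName: test-disk-topology-alltype
      resources:
        requests:
          storage: 10Gi    # Illustrative size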

  3. Create the StorageClass.

    kubectl create -f storageclass.yaml
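
    (Optional) Confirm that the StorageClass is available:

    kubectl get sc test-disk-topology-alltype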

  4. Create a StatefulSet YAML file.

    vi statefulset.yaml

    Enter the following content in the statefulset.yaml file. (This is only a standard StatefulSet template. Customize it as required.)

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nginx 
    spec:
      replicas: 3
      serviceName: "nginx"
      selector:
        matchLabels: 
          app: nginx
      template:
        metadata:
          labels: 
            app: nginx
        spec:
          topologySpreadConstraints:
          - labelSelector:   # Used to search for matched pods and count the pods that match the label selector to determine the number of pods in the corresponding topology domain.
              matchLabels:
                app: nginx
            maxSkew: 1       # Maximum difference between the numbers of matched pods in any two topology domains in a given topology type.
            topologyKey: topology.kubernetes.io/zone    # Key of a node label
            whenUnsatisfiable: DoNotSchedule    # How the scheduler processes pods when the pods do not meet the spread constraints
          containers:
          - image: nginx:latest
            name: nginx
            env:
            - name: NGINX_ROOT_PASSWORD
              value: "nginx"
            volumeMounts: 
            - name: disk-csi
              mountPath: /var/lib/nginx
          imagePullSecrets:
          - name: default-secret
          tolerations:
          - key: "app"
            operator: "Exists"
            effect: "NoSchedule"
      volumeClaimTemplates:    # EVS disks are automatically created based on the specified number of replicas for quick expansion.
      - metadata:
          name: disk-csi
        spec:
          accessModes: [ "ReadWriteOnce" ]   # EVS disks can only be mounted to and accessed by a single node in read/write mode, that is, ReadWriteOnce.
          storageClassName: test-disk-topology-alltype
          resources:
            requests:
              storage: 40Gi
    Table 2 StatefulSet parameters

    topologySpreadConstraints
      Specifies the topology spread constraints, which control how pods are spread across topology domains in a cluster, such as regions, AZs, nodes, and other custom topology domains. For details, see Pod Topology Spread Constraints.

    topologySpreadConstraints.labelSelector
      Used to find matched pods. The pods that match this label selector are counted to determine the number of pods in the corresponding topology domain.

    topologySpreadConstraints.maxSkew
      Specifies the maximum difference between the numbers of matched pods in any two topology domains of a given topology type. The value must be greater than 0 and indicates how uneven the pod distribution is allowed to be.

    topologySpreadConstraints.topologyKey
      Specifies the key of a node label. Nodes that carry this label with the same value are considered to be in the same topology domain, and the scheduler tries to place a balanced number of pods in each domain.

    topologySpreadConstraints.whenUnsatisfiable
      Specifies how the scheduler handles a pod that does not meet the spread constraints. The value can be:
      • DoNotSchedule (default): The pod is not scheduled if it does not meet the spread constraints.
      • ScheduleAnyway: The pod is still scheduled, preferentially to the node with the minimum skew.

    volumeClaimTemplates
      Specifies that an EVS disk is automatically created for each replica, which allows quick scale-out.

  5. Create the StatefulSet.

    kubectl create -f statefulset.yaml
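
    After the pods are running, you can optionally check how they are spread across nodes (the node-to-AZ mapping can then be confirmed with the zone labels, as in the prerequisite check):

    kubectl get pods -l app=nginx -o wide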

Verification

The following steps verify that the dynamically provisioned PVs, like the pods, are distributed across different AZs.

  1. View the new PVs.

    kubectl get pv

    The command output is as follows: (The first three PVs are dynamically created with the pods.)

    NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS                 VOLUMEATTRIBUTESCLASS   REASON   AGE
    pvc-699eda75   40Gi       RWO            Delete           Bound    default/disk-csi-nginx-0   test-disk-topology-alltype   <unset>                          132m
    pvc-6c68f5a7   40Gi       RWO            Delete           Bound    default/disk-csi-nginx-1   test-disk-topology-alltype   <unset>                          131m
    pvc-8f74ce3a   40Gi       RWO            Delete           Bound    default/disk-csi-nginx-2   test-disk-topology-alltype   <unset>                          131m
    pvc-f738f8aa   10Gi       RWO            Delete           Bound    default/pvc                csi-disk                     <unset>                          6d4h

  2. Check the AZs where the PVs are located based on the PV names.

    kubectl describe pv pvc-699eda75 pvc-6c68f5a7 pvc-8f74ce3a | grep zone

    The command output is as follows. (The three PVs are in different AZs, confirming the multi-AZ deployment of the storage volumes.)

    Labels:            failure-domain.beta.kubernetes.io/zone=cn-east-3d
        Term 0:        failure-domain.beta.kubernetes.io/zone in [cn-east-3d]
    Labels:            failure-domain.beta.kubernetes.io/zone=cn-east-3b
        Term 0:        failure-domain.beta.kubernetes.io/zone in [cn-east-3b]
    Labels:            failure-domain.beta.kubernetes.io/zone=cn-east-3c
        Term 0:        failure-domain.beta.kubernetes.io/zone in [cn-east-3c]
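
    Alternatively, you can display the zone label as a column directly when listing the PVs (assuming the PVs carry the failure-domain.beta.kubernetes.io/zone label shown above):

    kubectl get pv -L failure-domain.beta.kubernetes.io/zone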
