
(kubectl) Creating a PV from an Existing EVS Disk

Scenario

CCE allows you to create a PersistentVolume (PV) using an existing EVS disk. After the PV is created, you can create a PersistentVolumeClaim (PVC) and bind it to the PV.

Notes and Constraints

The following configuration example applies to clusters of Kubernetes 1.15 or later.

Procedure

  1. Log in to the EVS console, create an EVS disk, and record the volume ID, capacity, and disk type of the EVS disk.
  2. Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
  3. Create two YAML files for creating the PersistentVolume (PV) and PersistentVolumeClaim (PVC). Assume that the file names are pv-evs-example.yaml and pvc-evs-example.yaml.

    touch pv-evs-example.yaml pvc-evs-example.yaml

    • vi pv-evs-example.yaml
      Example YAML file for the PV:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        labels:
          failure-domain.beta.kubernetes.io/region: cn-north-4
          failure-domain.beta.kubernetes.io/zone: cn-north-4b
        annotations:
          pv.kubernetes.io/provisioned-by: everest-csi-provisioner
        name: pv-evs-example
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: pvc-evs-example
          namespace: default
        csi:
          driver: disk.csi.everest.io
          fsType: ext4
          volumeAttributes:
            everest.io/disk-mode: SCSI
            everest.io/disk-volume-type: SSD
            storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
          volumeHandle: 0992dbda-6340-470e-a74e-4f0db288ed82
        persistentVolumeReclaimPolicy: Delete
        storageClassName: csi-disk
      Table 1 Key parameters

      everest.io/disk-volume-type: EVS disk type, in uppercase. Supported values: High I/O (SAS) and Ultra-high I/O (SSD).

      failure-domain.beta.kubernetes.io/region: Region where the cluster is located. For details about the value of region, see Regions and Endpoints.

      failure-domain.beta.kubernetes.io/zone: AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. For details about the value of zone, see Regions and Endpoints.

      storage: EVS volume capacity, in Gi.

      storageClassName: Name of the Kubernetes storage class associated with the volume. For EVS disks, the value must be csi-disk.

      accessModes: Read/write mode of the volume. Clusters of v1.15 support only non-shared volumes, so set this parameter to ReadWriteOnce.

      driver: Storage driver used to mount the volume. For EVS disks, set this parameter to disk.csi.everest.io.

      volumeHandle: Volume ID of the EVS disk. To obtain it, log in to the Cloud Server Console, choose Elastic Volume Service > Disks in the navigation pane, click the name of the target EVS disk, and copy the ID from the Summary tab of the disk details page.

      persistentVolumeReclaimPolicy: Reclaim policy. Delete and Retain are supported. Delete: when the PVC is deleted, the PV and the EVS disk are also deleted. Retain: when the PVC is deleted, the PV and the underlying storage are kept and must be deleted manually; the PV then enters the Released state and cannot be bound to a PVC again. If high data security is required, select Retain to prevent data from being deleted by mistake.

      everest.io/disk-mode: Device type of the EVS disk. The value can be SCSI.

      spec.claimRef.apiVersion: The value is fixed at v1.

      spec.claimRef.kind: The value is fixed at PersistentVolumeClaim.

      spec.claimRef.name: PVC name. It must be the same as the name of the PVC created in the next step.

      spec.claimRef.namespace: Namespace of the PVC. It must be the same as the namespace of the PVC created in the next step.

    • vi pvc-evs-example.yaml
      Example YAML file for the PVC:
      apiVersion: v1  
      kind: PersistentVolumeClaim
      metadata:
        labels:
          failure-domain.beta.kubernetes.io/region: cn-north-4
          failure-domain.beta.kubernetes.io/zone: cn-north-4b
        annotations:
          everest.io/disk-volume-type: SSD
          volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner
        name: pvc-evs-example
        namespace: default
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        volumeName: pv-evs-example
        storageClassName: csi-disk
      Table 2 Key parameters

      everest.io/disk-volume-type: EVS disk type, in uppercase. Supported values: High I/O (SAS) and Ultra-high I/O (SSD).

      failure-domain.beta.kubernetes.io/region: Region where the cluster is located. For details about the value of region, see Regions and Endpoints.

      failure-domain.beta.kubernetes.io/zone: AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. For details about the value of zone, see Regions and Endpoints.

      storage: Requested PVC capacity, in Gi. The value must be the same as the storage size of the existing PV.

      volumeName: Name of the PV to bind.
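Before applying the two files, the pairing rules above (matching claimRef, capacity, volumeName, and storage class) can be checked mechanically. The following is an illustrative, stdlib-only Python sketch, not part of the CCE tooling; the manifests are inlined as dicts rather than loaded from the YAML files.

```python
# Pre-flight check (illustrative sketch): verify that a statically created
# PV and its PVC satisfy the binding rules described in Table 1 and Table 2.
# In practice you would parse pv-evs-example.yaml and pvc-evs-example.yaml
# (e.g. with PyYAML, omitted here to keep the sketch stdlib-only).

pv = {
    "metadata": {"name": "pv-evs-example"},
    "spec": {
        "capacity": {"storage": "10Gi"},
        "claimRef": {"name": "pvc-evs-example", "namespace": "default"},
        "storageClassName": "csi-disk",
        "csi": {"driver": "disk.csi.everest.io"},
    },
}
pvc = {
    "metadata": {"name": "pvc-evs-example", "namespace": "default"},
    "spec": {
        "resources": {"requests": {"storage": "10Gi"}},
        "volumeName": "pv-evs-example",
        "storageClassName": "csi-disk",
    },
}

def check_binding(pv: dict, pvc: dict) -> list[str]:
    """Return a list of mismatches between a static PV and its PVC."""
    errors = []
    spec, vspec = pv["spec"], pvc["spec"]
    # claimRef on the PV must point at the PVC's name and namespace.
    if spec["claimRef"]["name"] != pvc["metadata"]["name"]:
        errors.append("claimRef.name does not match PVC name")
    if spec["claimRef"]["namespace"] != pvc["metadata"]["namespace"]:
        errors.append("claimRef.namespace does not match PVC namespace")
    # The PVC must request exactly the PV's capacity.
    if spec["capacity"]["storage"] != vspec["resources"]["requests"]["storage"]:
        errors.append("requested storage differs from PV capacity")
    # volumeName on the PVC must name the PV, and storage classes must agree.
    if vspec["volumeName"] != pv["metadata"]["name"]:
        errors.append("volumeName does not match PV name")
    if spec["storageClassName"] != vspec["storageClassName"]:
        errors.append("storageClassName differs between PV and PVC")
    return errors

print(check_binding(pv, pvc))  # an empty list means the pair is consistent
```

An empty result means the PV and PVC satisfy the static-binding requirements in Table 1 and Table 2; any listed mismatch would leave the PVC Pending.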

  4. Create a PV.

    kubectl create -f pv-evs-example.yaml

  5. Create a PVC.

    kubectl create -f pvc-evs-example.yaml

    After the operation is successful, choose Resource Management > Storage on the CCE console to view the created PVC. You can also view the EVS disk by name on the EVS console.

  6. (Optional) Add the metadata associated with the cluster to ensure that the EVS disks associated with the mounted static PV are not deleted when the node or cluster is deleted.

    If you skip this step here or when creating other static PVs or PVCs, ensure that the EVS disk associated with the static PV has been detached from the node before you delete that node.

    1. Obtain the tenant token. For details, see Obtaining a User Token.
    2. Obtain the EVS access address EVS_ENDPOINT. For details, see Regions and Endpoints.

    3. Add the metadata associated with the cluster to the EVS disk associated with the EVS static PV. For details about the API, see Adding Metadata of an EVS Disk.
      curl -X POST ${EVS_ENDPOINT}/v2/${project_id}/volumes/${volume_id}/metadata --insecure \
          -d '{"metadata":{"cluster_id": "${cluster_id}", "namespace": "${pvc_namespace}"}}' \
          -H 'Accept:application/json' -H 'Content-Type:application/json;charset=utf8' \
          -H 'X-Auth-Token:${TOKEN}'
      Table 3 Key parameters

      EVS_ENDPOINT: EVS access address. Set this parameter to the value obtained in 6.b.

      project_id: Project ID. To obtain it, click the login user in the upper right corner of the console, choose My Credentials from the drop-down list, and view the project ID on the Projects tab page.

      volume_id: ID of the associated EVS disk. Set this parameter to the volume ID of the static PV to be created. You can also log in to the EVS console, click the name of the EVS disk to be imported, and obtain the ID from the Summary area of the disk details page, as shown in Figure 1.

      cluster_id: ID of the cluster where the EVS PV is to be created. On the CCE console, choose Resource Management > Clusters, click the name of the cluster to be associated, and obtain the cluster ID on the cluster details page, as shown in Figure 2.

      pvc_namespace: Namespace to which the PVC is to be bound.

      TOKEN: User token. Set this parameter to the value obtained in 6.a.

      Figure 1 Obtaining the disk ID
      Figure 2 Obtaining the cluster ID

      For example, run the following command:

      curl -X POST https://evs.cn-north-4.myhuaweicloud.com:443/v2/060576866680d5762f52c0150e726aa7/volumes/69c9619d-174c-4c41-837e-31b892604e14/metadata --insecure \
          -d '{"metadata":{"cluster_id": "71e8277e-80c7-11ea-925c-0255ac100442", "namespace": "default"}}' \
          -H 'Accept:application/json' -H 'Content-Type:application/json;charset=utf8' \
          -H 'X-Auth-Token:MIIPe******IsIm1ldG'

      After the request is executed, run the following command to check whether the EVS disk has been associated with the metadata of the cluster:

      curl -X GET ${EVS_ENDPOINT}/v2/${project_id}/volumes/${volume_id}/metadata --insecure \
          -H 'X-Auth-Token:${TOKEN}'

      For example, run the following command:

      curl -X GET https://evs.cn-north-4.myhuaweicloud.com/v2/060576866680d5762f52c0150e726aa7/volumes/69c9619d-174c-4c41-837e-31b892604e14/metadata --insecure \
          -H 'X-Auth-Token:MIIPeAYJ***9t1c31ASaQ=='

      The command output displays the current metadata of the EVS disk.

      {
          "metadata": {
              "namespace": "default",
              "cluster_id": "71e8277e-80c7-11ea-925c-0255ac100442",
              "hw:passthrough": "true"
          }
      }
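If you prefer scripting over raw curl, the same metadata request can be assembled with Python's standard library. This is a hedged sketch: the endpoint and all IDs below are placeholders you must replace with your own values, and the request is only constructed here, not sent; calling urllib.request.urlopen(req) would issue it.

```python
import json
import urllib.request

# Placeholder values (assumptions): replace with your own EVS endpoint,
# project ID, volume ID, cluster ID, PVC namespace, and token.
EVS_ENDPOINT = "https://evs.cn-north-4.myhuaweicloud.com"
PROJECT_ID = "<project_id>"
VOLUME_ID = "<volume_id>"
CLUSTER_ID = "<cluster_id>"
PVC_NAMESPACE = "default"
TOKEN = "<token>"

url = f"{EVS_ENDPOINT}/v2/{PROJECT_ID}/volumes/{VOLUME_ID}/metadata"
body = json.dumps(
    {"metadata": {"cluster_id": CLUSTER_ID, "namespace": PVC_NAMESPACE}}
).encode("utf-8")

# Build the same POST request as the curl example above; the request is
# constructed but not sent. urllib.request.urlopen(req) would send it.
req = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Accept": "application/json",
        "Content-Type": "application/json;charset=utf8",
        "X-Auth-Token": TOKEN,
    },
)

print(req.get_method(), req.full_url)
```

The subsequent GET check differs only in method and the absence of a body, so the same pattern applies with method="GET" and data=None.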