Updated on 2022-12-08 GMT+08:00

Formatting a Disk

Procedure

  1. Log in to an SAP HANA node.

    Use PuTTY to log in to the NAT server through the elastic IP address bound to it. Authenticate as user root using the key file (.ppk file). Then, use SSH to switch to the SAP HANA nodes.

  2. Check the disks that have not been formatted.

    Run the following command to query the disks to be formatted:

    fdisk -l

    Determine which disks serve as the /usr/sap volume, data volume, log volume, shared volume, and swap volume according to the disk capacities.
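    Matching disks to volumes by capacity can also be scripted. The snippet below is a minimal sketch that filters the disk-summary lines out of saved fdisk -l output; the sample text and sizes are placeholders, not real output from this environment.

    ```shell
    # Sketch: extract the "Disk /dev/...: <size>" summary lines from fdisk -l
    # output so each disk can be matched to its planned volume by capacity.
    # The sample text below stands in for real output; sizes are assumptions.
    fdisk_out='Disk /dev/vdb: 100 GiB, 107374182400 bytes
    Disk /dev/vdc: 100 GiB, 107374182400 bytes
    Disk /dev/vdd: 164 GiB, 176093659136 bytes'
    printf '%s\n' "$fdisk_out" | awk -F'[:,]' '/Disk \//{print $1 ":" $2}'
    ```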

  3. Run the following command to create a disk directory:

    mkdir -p /hana/log /hana/data /hana/shared /hana/backup /usr/sap

  4. Create and enable the swap partition. /dev/vdb is used as an example.

    mkswap /dev/vdb

    swapon /dev/vdb
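    After running swapon, it is worth confirming that the kernel has registered the swap space. A quick check, assuming a standard Linux environment:

    ```shell
    # Verify the swap partition is active (/dev/vdb is the example device above).
    swapon --show                  # should list /dev/vdb as an active swap device
    grep SwapTotal /proc/meminfo   # total swap should be non-zero after swapon
    ```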

  5. Use LVM to create a logical data volume from two EVS disks. The following uses EVS disks /dev/vdb and /dev/vdc as an example.

    1. Run the following command to create physical volumes:

      pvcreate /dev/vdb /dev/vdc

    2. Run the following command to create volume groups:

      vgcreate vghana /dev/vdb /dev/vdc

    3. Run the following command to query the available capacity of the volume group:

      vgdisplay vghana

    4. Create a logical volume. The following command creates a logical volume striped across the two EVS disks.

      lvcreate -n lvhanadata -i 2 -I 256 -L 348G vghana

      The parameters are described as follows:

      • -n: Name of the logical volume
      • -i: Number of stripes, which equals the number of EVS disks (two in this example)
      • -I: Stripe size, in KB
      • -L: Logical volume size
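      The relationship between these parameters can be checked with simple arithmetic: with -i 2, the 348 GiB volume is striped evenly across both physical volumes, so each EVS disk must provide roughly half that capacity in vghana. This is a sketch; the exact figure also depends on LVM metadata overhead.

      ```shell
      # Stripe arithmetic for: lvcreate -n lvhanadata -i 2 -I 256 -L 348G vghana
      LV_SIZE_GIB=348   # -L: total logical volume size
      STRIPES=2         # -i: one stripe per EVS disk
      PER_DISK_GIB=$((LV_SIZE_GIB / STRIPES))
      echo "Each physical volume must provide at least ${PER_DISK_GIB} GiB"
      ```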

  6. Format disks and logical volumes. /dev/vdd, /dev/vde, and /dev/vdf are used as examples.

    mkfs.xfs /dev/vdd

    mkfs.xfs /dev/vde

    mkfs.xfs /dev/vdf

    mkfs.xfs /dev/mapper/vghana-lvhanadata

  7. Write the disk mounting information to the /etc/fstab file so that the disks are automatically mounted when the server starts.

    1. Run the following command to check the UUID of the disk:

      blkid

    2. Obtain the shared path recorded in 2.g in section Creating an SFS File System. PublicCloudAddress:/share-d6c6d9e2 is used as an example.
    3. Write the mount entries for the disk UUIDs and the shared path to the /etc/fstab file. The UUIDs below are examples; use the values returned by blkid.

      echo "UUID=ba1172ee-39b2-4d28-89b8-282ebabfe8f4 /hana/data xfs defaults 0 0" >>/etc/fstab

      echo "UUID=d21734c9-44c0-45f7-a37d-02232e97fd3b /hana/log xfs defaults 0 0" >>/etc/fstab

      echo "UUID=191b5369-9544-432f-9873-1beb2bd01de5 /hana/shared xfs defaults 0 0" >>/etc/fstab

      echo "UUID=191b5369-9544-432f-9873-1beb2bd01de5 /usr/sap xfs defaults 0 0" >>/etc/fstab

      echo "UUID=1b569544-1225-44c0-4d28-2e97fdeb2bd swap swap defaults 0 0" >> /etc/fstab

      echo "PublicCloudAddress:/share-d6c6d9e2 /hana/backup nfs noatime,nodiratime,rdirplus,vers=3,wsize=1048576,rsize=1048576,noacl,nocto,proto=tcp,async 0 0" >>/etc/fstab

  8. Run the following command to mount all disks:

    mount -a

  9. Check the disk mounting status. The following is an example:

    # df -h 
    Filesystem                            Size  Used  Avail Use% Mounted on 
    devtmpfs                              126G     0  126G    0% /dev
    tmpfs                                 197G   80K  197G    0% /dev/shm
    tmpfs                                 126G   17M  126G    1% /run 
    tmpfs                                 126G     0  126G    0% /sys/fs/cgroup 
    /dev/xvda                              50G  4.4G   43G   10% / 
    /dev/sdd                              254G   93G  162G   37% /hana/shared 
    /dev/mapper/vghana-lvhanadata         254G   67G  188G   27% /hana/data 
    /dev/sde                              164G  6.3G  158G    4% /hana/log 
    /dev/sdf                               50G  267M   50G    1% /usr/sap
    /dev/xvdb                              10G    5G    5G   50% /swap
    PublicCloudAddress:/share-d6c6d9e2    384G     0  384G    0% /hana/backup
    tmpfs                                  26G     0   26G    0% /run/user/1002 
    tmpfs                                  26G     0   26G    0% /run/user/480 
    tmpfs                                  26G   16K   26G    1% /run/user/0