Updated on 2024-01-17 GMT+08:00

ALM-12033 Slow Disk Fault (For MRS 2.x or Earlier)

Description

For MRS 2.x or earlier:

  • For HDDs, the alarm is triggered when either of the following conditions is met:
    • The system runs the iostat command every 3 seconds and detects that the svctm value exceeds 1000 ms in 10 consecutive periods within 30 seconds.
    • The system runs the iostat command every 3 seconds and detects that more than 60% of I/O operations take longer than 150 ms within 300 seconds.
  • For SSDs, the alarm is triggered when either of the following conditions is met:
    • The system runs the iostat command every 3 seconds and detects that the svctm value exceeds 1000 ms in 10 consecutive periods within 30 seconds.
    • The system runs the iostat command every 3 seconds and detects that more than 60% of I/O operations take longer than 20 ms within 300 seconds.

This alarm is automatically cleared when the preceding conditions have not been met for 15 minutes.

For MRS 1.9.3.10 or later:

  • For HDDs, the alarm is triggered when either of the following conditions is met:
    • By default, the system collects data every 3 seconds. Within a 30-second window, the svctm latency reaches 1000 ms in at least seven collection periods.
    • By default, the system collects data every 3 seconds. Within a 300-second window, at least 50% of the collected svctm values are no less than 150 ms.
  • For SSDs, the alarm is triggered when either of the following conditions is met:
    • By default, the system collects data every 3 seconds. Within a 30-second window, the svctm latency reaches 1000 ms in at least seven collection periods.
    • By default, the system collects data every 3 seconds. Within a 300-second window, at least 50% of the collected svctm values are no less than 20 ms.

The collection period is 3 seconds, and the detection period is 30 or 300 seconds. This alarm is automatically cleared when none of the preceding conditions are met for three consecutive detection periods (30 or 300 seconds).
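The 30-second window rule for MRS 1.9.3.10 or later can be sketched as a small shell check. The latency samples below are invented for illustration; the thresholds (1000 ms, at least 7 of 10 periods) follow the rule stated above, not the product's actual implementation.

```shell
# Hypothetical svctm samples (ms), one per 3-second collection period;
# 10 periods cover the 30-second detection window.
samples="1200 1500 980 1100 1300 1050 1250 1400 900 1600"

# Count how many samples reach the 1000 ms threshold.
count=$(echo "$samples" | tr ' ' '\n' | awk '$1 >= 1000 { n++ } END { print n }')

# Alarm condition: at least 7 of the 10 periods hit the threshold.
if [ "$count" -ge 7 ]; then
  echo "slow disk: $count of 10 samples >= 1000 ms"
else
  echo "disk ok: only $count of 10 samples >= 1000 ms"
fi
```

The 300-second rule works the same way, except that 100 samples are collected and the condition is that at least 50% of them reach the per-disk-type latency threshold.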

For details about how to obtain the related parameters, see Related Information.

Attribute

Alarm ID: 12033

Alarm Severity:

  • Minor: MRS 1.9.3.10 and later patch versions
  • Major: MRS 2.x and earlier versions

Auto Clear: Yes

Parameters

Source: Specifies the cluster or system for which the alarm is generated.

ServiceName: Specifies the service for which the alarm is generated.

RoleName: Specifies the role for which the alarm is generated.

Host Name: Specifies the host for which the alarm is generated.

DiskName: Specifies the disk for which the alarm is generated.

Impact on the System

Service performance deteriorates, the service processing capability degrades, and services may become unavailable.

Possible Causes

The disk is aged or has bad sectors.

Procedure

Check the disk status.

  1. On the MRS cluster details page, click the alarm from the real-time alarm list. In the Alarm Details area, obtain information about the host for which the alarm is generated and information about the faulty disk.
  2. Check whether the node for which the alarm is generated is in a virtualization environment.

    • If yes, go to 3.
    • If no, go to 6.

  3. Check whether the storage performance provided by the virtualization environment meets the hardware requirements. Then, go to 4.
  4. Log in to the alarm node as user root, run the df -h command, and check whether the command output contains the value of the DiskName field.

    • If yes, go to 6.
    • If no, go to 5.

  5. Run the lsblk command to check whether the mapping between the value of DiskName and the disk has been created.

    • If yes, go to 6.
    • If no, go to 21.

  6. Log in to the alarm node as user root, run the lsscsi | grep "/dev/sd[x]" command to view the disk information, and check whether RAID has been set up.

    In the command, /dev/sd[x] indicates the disk name obtained in 1.

    Example:

    lsscsi | grep "/dev/sda"

    In the command output, if ATA, SATA, or SAS is displayed in the third column, the disk has not been organized into a RAID group. If other information is displayed, RAID has been set up.

    • If yes, go to 11.
    • If no, go to 7.

  7. Run the smartctl -i /dev/sd[x] command to check whether the hardware supports the SMART tool.

    Example:

    smartctl -i /dev/sda

    In the command output, if "SMART support is: Enabled" is displayed, the hardware supports SMART. If "Device does not support SMART" or other information is displayed, the hardware does not support SMART.

    • If yes, go to 8.
    • If no, go to 16.

  8. Run the smartctl -H --all /dev/sd[x] command to check basic SMART information and determine whether the disk is working properly.

    Example:

    smartctl -H --all /dev/sda

    Check the value of SMART overall-health self-assessment test result in the command output. If the value is FAILED, the disk is faulty and needs to be replaced. If the value is PASSED, check the value of Reallocated_Sector_Ct or Elements in grown defect list. If the value is greater than 100, the disk is faulty and needs to be replaced.

    • If yes, go to 9.
    • If no, go to 17.

  9. Run the smartctl -l error -H /dev/sd[x] command to check the Glist of the disk and determine whether the disk is normal.

    Example:

    smartctl -l error -H /dev/sda

    Check the Command/Feature_Name column in the command output. If READ SECTOR(S) or WRITE SECTOR(S) is displayed, the disk has bad sectors. If other errors occur, the disk circuit board is faulty. Both errors indicate that the disk is abnormal and needs to be replaced.

    If "No Errors Logged" is displayed, no error log exists. You can go to 10 to trigger the disk SMART self-check.

    • If yes, go to 10.
    • If no, go to 17.

  10. Run the smartctl -t long /dev/sd[x] command to trigger the disk SMART self-check. After the command is executed, the time when the self-check is to be completed is displayed. After the self-check is completed, repeat 8 and 9 to check whether the disk is working properly.

    Example:

    smartctl -t long /dev/sda

    • If yes, go to 16.
    • If no, go to 17.

  11. Run the smartctl -d [sat|scsi]+megaraid,[DID] -H --all /dev/sd[x] command to check whether the hardware supports SMART.

    • In the command, [sat|scsi] indicates the disk type. Try both types.
    • [DID] indicates the slot information. Try slots 0 to 15.

    For example, run the following commands in sequence:

    smartctl -d sat+megaraid,0 -H --all /dev/sda

    smartctl -d sat+megaraid,1 -H --all /dev/sda

    smartctl -d sat+megaraid,2 -H --all /dev/sda

    ...

    Try the command combinations of different disk types and slot information. If "SMART support is: Enabled" is displayed in the command output, the disk supports SMART. Record the parameters of the disk type and slot information when a command is successfully executed. If "SMART support is: Enabled" is not displayed in the command output, the disk does not support SMART.

    • If yes, go to 12.
    • If no, go to 15.

  12. Run the smartctl -d [sat|scsi]+megaraid,[DID] -H --all /dev/sd[x] command recorded in 11 to check basic SMART information and determine whether the disk is normal.

    Example:

    smartctl -d sat+megaraid,2 -H --all /dev/sda

    Check the value of SMART overall-health self-assessment test result in the command output. If the value is FAILED, the disk is faulty and needs to be replaced. If the value is PASSED, check the value of Reallocated_Sector_Ct or Elements in grown defect list. If the value is greater than 100, the disk is faulty and needs to be replaced.

    • If yes, go to 13.
    • If no, go to 17.

  13. Run the smartctl -d [sat|scsi]+megaraid,[DID] -l error -H /dev/sd[x] command to check the Glist of the disk and determine whether the hard disk is working properly.

    Example:

    smartctl -d sat+megaraid,2 -l error -H /dev/sda

    Check the Command/Feature_Name column in the command output. If READ SECTOR(S) or WRITE SECTOR(S) is displayed, the disk has bad sectors. If other errors occur, the disk circuit board is faulty. Both errors indicate that the disk is abnormal and needs to be replaced.

    If "No Errors Logged" is displayed, no error log exists. You can go to 14 to trigger the disk SMART self-check.

    • If yes, go to 14.
    • If no, go to 17.

  14. Run the smartctl -d [sat|scsi]+megaraid,[DID] -t long /dev/sd[x] command to trigger the disk SMART self-check. After the command is executed, the time when the self-check is to be completed is displayed. After the self-check is completed, repeat 12 and 13 to check whether the disk is working properly.

    Example:

    smartctl -d sat+megaraid,2 -t long /dev/sda

    • If yes, go to 16.
    • If no, go to 17.

  15. If the RAID controller card does not support SMART, the disk cannot be checked with SMART. In this case, use the check tool provided by the RAID controller card vendor to rectify the fault. Then go to 16.

    For example, use the MegaCLI tool for LSI RAID controller cards.

  16. On the alarm details page, click Clear Alarm. Check whether the alarm is reported on the same disk again.

    If the alarm is reported for more than three times, replace the disk.

    • If yes, go to 17.
    • If no, no further action is required.
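Because the working disk type and slot combination in 11 must be found by trial, the probing can be scripted. Below is a minimal dry-run sketch: it only prints the candidate commands rather than executing them, since smartctl needs real hardware, and /dev/sda is a placeholder for the disk name obtained in 1.

```shell
# Print every probe combination from step 11: both disk types (sat, scsi)
# and slots 0-15, against a placeholder device /dev/sda.
# In practice, run each printed command and keep the first combination
# whose output contains "SMART support is: Enabled".
for type in sat scsi; do
  for did in $(seq 0 15); do
    echo "smartctl -d ${type}+megaraid,${did} -H --all /dev/sda"
  done
done > probe_commands.txt

# 2 types x 16 slots = 32 candidate commands.
wc -l < probe_commands.txt
```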

Replace the disk.

  17. On MRS Manager, choose Alarms.
  18. View the detailed information about the alarm. Check the values of HostName and DiskName in the location information to obtain information about the faulty disk for which the alarm is reported.
  19. Replace the disk.
  20. Check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 21.

Collect the fault information.

  21. On MRS Manager, choose System > Export Log.
  22. Contact the O&M engineers and send the collected logs.

Alarm Clearing

This alarm is automatically cleared after the fault is rectified.

Related Information

To obtain the related parameters, perform the following steps:

  • For MRS 2.x or earlier versions:

    Perform the following operations to detect slow disk faults:

    On the Linux platform, run the iostat -x -t 1 command to check whether the I/O is faulty. Specifically, check the svctm column in the command output.

    svctm indicates the I/O service time of the disk.

  • For MRS 1.9.3.10 or later patch versions:

    The svctm value can be obtained through the following expression:

    svctm = (tot_ticks_new - tot_ticks_old) / (rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old)

    When the detection period is 30 seconds, if rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old = 0, then svctm = 0.

    When the detection period is 300 seconds and rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old = 0, if tot_ticks_new - tot_ticks_old = 0, then svctm = 0; otherwise, the value of svctm is infinite.

The parameters in the preceding expression can be obtained as follows:

Obtain the parameter values from the data collected via the cat /proc/diskstats command run by the system every 3 seconds. The following shows an example.

In the data collected for the first time, the number in the fourth column is the value of rd_ios_old, the number in the eighth column is the value of wr_ios_old, and the number in the thirteenth column is the value of tot_ticks_old.

In the data collected for the second time, the number in the fourth column is the value of rd_ios_new, the number in the eighth column is the value of wr_ios_new, and the number in the thirteenth column is the value of tot_ticks_new.

In this case, the value of svctm is as follows:

(19571460 - 19569526) / (1101553 + 28747977 - 1101553 - 28744856) = 0.6197
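The worked example above can be reproduced with a short shell snippet; the values are the ones quoted in this document from the two /proc/diskstats samples.

```shell
# Values from the two diskstats samples quoted in this document:
# column 4 = reads completed, column 8 = writes completed,
# column 13 = total time spent on I/O (ms).
rd_ios_old=1101553;  wr_ios_old=28744856;  tot_ticks_old=19569526
rd_ios_new=1101553;  wr_ios_new=28747977;  tot_ticks_new=19571460

# svctm = (tot_ticks_new - tot_ticks_old) /
#         (rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old)
svctm=$(awk -v t1="$tot_ticks_old" -v t2="$tot_ticks_new" \
            -v r1="$rd_ios_old"   -v r2="$rd_ios_new" \
            -v w1="$wr_ios_old"   -v w2="$wr_ios_new" \
        'BEGIN { printf "%.4f", (t2 - t1) / (r2 + w2 - r1 - w1) }')
echo "svctm = ${svctm} ms"
```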