
ALM-12033 Slow Disk Fault

Updated on 2024-11-29 GMT+08:00

Alarm Description

  • For HDDs, the alarm is triggered when either of the following conditions is met:
    • By default, the system collects data every 3 seconds. Within a 30-second detection period, the svctm latency reaches 1000 ms in at least seven collection periods.
    • By default, the system collects data every 3 seconds. Within a 300-second detection period, at least 50% of the collected svctm values are 150 ms or higher.
  • For SSDs, the alarm is triggered when either of the following conditions is met:
    • By default, the system collects data every 3 seconds. Within a 30-second detection period, the svctm latency reaches 1000 ms in at least seven collection periods.
    • By default, the system collects data every 3 seconds. Within a 300-second detection period, at least 50% of the collected svctm values are 20 ms or higher.

The collection period is 3 seconds, and the detection period is 30 or 300 seconds. This alarm is automatically cleared when none of the preceding conditions is met for three consecutive detection periods (of 30 or 300 seconds each).
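
As an illustration only, the following Python sketch restates the two trigger conditions for one detection window. It assumes svctm samples in milliseconds collected every 3 seconds (10 samples per 30-second window, 100 per 300-second window); the function and its inputs are hypothetical, not part of the product.

    def slow_disk_alarm(samples_30s, samples_300s, disk_type="HDD"):
        """Illustrative only: True if either trigger condition holds.

        samples_30s:  svctm samples (ms) of one 30-second detection period
        samples_300s: svctm samples (ms) of one 300-second detection period
        """
        threshold = 150 if disk_type == "HDD" else 20   # ms, per disk type

        # Condition 1: svctm reaches 1000 ms in at least seven collection
        # periods within 30 seconds.
        cond1 = sum(1 for s in samples_30s if s >= 1000) >= 7

        # Condition 2: at least 50% of the svctm values within 300 seconds
        # are at or above the type-specific threshold.
        cond2 = sum(1 for s in samples_300s if s >= threshold) >= 0.5 * len(samples_300s)

        return cond1 or cond2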

NOTE:

The svctm value can be obtained as follows:

svctm = (tot_ticks_new - tot_ticks_old) / (rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old)

When the detection period is 30 seconds, if rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old = 0, then svctm = 0.

When the detection period is 300 seconds and rd_ios_new + wr_ios_new - rd_ios_old - wr_ios_old = 0, if tot_ticks_new - tot_ticks_old = 0, then svctm = 0; otherwise, the value of svctm is infinite.

The parameters can be obtained as follows:

The system runs the cat /proc/diskstats command every 3 seconds to collect data.

In the data collected for the first time, the number in the fourth column is the rd_ios_old value, the number in the eighth column is the wr_ios_old value, and the number in the thirteenth column is the tot_ticks_old value.

In the data collected for the second time, the number in the fourth column is the rd_ios_new value, the number in the eighth column is the wr_ios_new value, and the number in the thirteenth column is the tot_ticks_new value.

In this case, the value of svctm is as follows:

(19571460 - 19569526) / (1101553 + 28747977 - 1101553 - 28744856) = 0.6197
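
As a worked illustration (not the product's collector), the following Python sketch reads /proc/diskstats twice, 3 seconds apart, and applies the formula and the zero-denominator rules above. The function names are hypothetical, and the device is passed as it appears in /proc/diskstats (for example, "sda").

    import time

    def read_diskstats(device):
        """Return (rd_ios, wr_ios, tot_ticks) for one block device, taken
        from the 4th, 8th, and 13th columns as described above."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:              # column 3 is the device name
                    return int(fields[3]), int(fields[7]), int(fields[12])
        raise ValueError("device not found: " + device)

    def svctm(device, interval=3):
        rd_old, wr_old, ticks_old = read_diskstats(device)
        time.sleep(interval)                         # one collection period
        rd_new, wr_new, ticks_new = read_diskstats(device)

        ios = rd_new + wr_new - rd_old - wr_old
        if ios == 0:
            # Zero-I/O rules from the note: 0 in the 30-second detection
            # period; in the 300-second period, 0 only if tot_ticks did not
            # change, infinite otherwise.
            return 0.0 if ticks_new - ticks_old == 0 else float("inf")
        return (ticks_new - ticks_old) / ios         # average service time (ms)

    # With the sample values above:
    # (19571460 - 19569526) / (1101553 + 28747977 - 1101553 - 28744856) = 0.6197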

Alarm Attributes

Alarm ID: 12033
Alarm Severity: Minor
Alarm Type: Physical resource
Service Type: FusionInsight Manager
Auto Cleared: Yes

Alarm Parameters

Location Information
  • Source: Specifies the cluster or system for which the alarm was generated.
  • ServiceName: Specifies the service for which the alarm was generated.
  • RoleName: Specifies the role for which the alarm was generated.
  • HostName: Specifies the host for which the alarm was generated.
  • DiskName: Specifies the disk for which the alarm was generated.

Additional Information
  • Disk ESN: Specifies the serial number of the disk for which the alarm was generated.

Impact on the System

  • System I/O performance deteriorates, causing slow responses and low throughput. For example, job submission is slow, pages respond slowly, interface responses time out, and the system may report errors or even crash.
  • System fault: Customer services may be interrupted, the system may break down, and key information stored on the faulty disk may be lost.

Possible Causes

The disk is aged or has bad sectors.

Handling Procedure

Check the disk status.

  1. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Alarm > Alarms.
  2. View the detailed information about the alarm. Check the values of HostName and DiskName in the location information to obtain the information about the faulty disk for which the alarm is generated.
  3. Check whether the node for which the alarm is generated is in a virtualization environment.

    • If yes, go to 4.
    • If no, go to 7.

  4. Check whether the storage performance provided by the virtualization environment meets the hardware requirements. Then, go to 5.
  5. Log in to the alarm node as user root, run the df -h command, and check whether the command output contains the value of the DiskName field.

    • If yes, go to 7.
    • If no, go to 6.

  6. Run the lsblk command to check whether the mapping between the value of DiskName and the disk has been created.

    • If yes, go to 7.
    • If no, go to 25.

  7. Log in to the alarm node as user root, run the lsscsi | grep "/dev/sd[x]" command to view the disk information, and check whether RAID has been set up.

    NOTE:

    In the command, /dev/sd[x] indicates the disk name obtained in 2.

    Example:

    lsscsi | grep "/dev/sda"

    In the command output, if ATA, SATA, or SAS is displayed in the third column, the disk has not been organized into a RAID group. If other information is displayed, RAID has been set up. (A minimal sketch of this check follows this step.)

    • If yes, go to 12.
    • If no, go to 8.
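
    For illustration, the following Python sketch applies the rule above, assuming the standard lsscsi output layout (vendor in the third column, device node in the last); the function name is hypothetical.

    import subprocess

    def behind_raid(device="/dev/sda"):
        # Each lsscsi line looks like: [0:0:0:0]  disk  ATA  <model>  <rev>  /dev/sda
        out = subprocess.run(["lsscsi"], capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[-1] == device:                 # last column: device node
                return fields[2] not in ("ATA", "SATA", "SAS")  # third column: vendor
        return None                                             # device not listed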

  8. Run the smartctl -i /dev/sd[x] command to check whether the hardware supports the SMART tool.

    Example:

    smartctl -i /dev/sda

    In the command output, if "SMART support is: Enabled" is displayed, the hardware supports SMART. If "Device does not support SMART" or other information is displayed, the hardware does not support SMART.

    • If yes, go to 9.
    • If no, go to 16.

  9. Run the smartctl -H --all /dev/sd[x] command to check basic SMART information and determine whether the disk is working properly.

    Example:

    smartctl -H --all /dev/sda

    Check the value of SMART overall-health self-assessment test result in the command output. If the value is FAILED, the disk is faulty and needs to be replaced. If the value is PASSED, check the value of Reallocated_Sector_Ct or Elements in grown defect list. If that value is greater than 100, the disk is faulty and needs to be replaced. (A minimal sketch of this check follows this step.)

    • If yes, go to 10.
    • If no, go to 18.
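
    For illustration, the following Python sketch performs this health check, assuming the common smartctl text layout (attribute raw value in the last column for ATA disks; an "Elements in grown defect list" line for SAS disks). The function name is hypothetical, and the same parsing applies to the check in 13.

    import subprocess

    def disk_looks_faulty(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-H", "--all", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            # Overall health verdict, e.g. "... test result: PASSED" / "FAILED!"
            if "overall-health self-assessment test result" in line and "FAILED" in line:
                return True                    # replace the disk
            # ATA attribute table: the raw value is the last column.
            if line.lstrip().startswith("Reallocated_Sector_Ct") and int(line.split()[-1]) > 100:
                return True
            # SAS/SCSI disks report a grown defect list instead.
            if line.startswith("Elements in grown defect list") and int(line.split()[-1]) > 100:
                return True
        return False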

  10. Run the smartctl -l error -H /dev/sd[x] command to check the Glist of the disk and determine whether the disk is normal.

    Example:

    smartctl -l error -H /dev/sda

    Check the Command/Feature_name column in the command output. If READ SECTOR(S) or WRITE SECTOR(S) is displayed, the disk has bad sectors. If other errors are logged, the disk circuit board is faulty. In either case, the disk is abnormal and needs to be replaced. (A minimal sketch of this check follows this step.)

    If "No Errors Logged" is displayed, no error log exists. You can perform step 9 to trigger the disk SMART self-check.

    • If yes, go to 11.
    • If no, go to 18.
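
    For illustration, the following Python sketch performs the error-log check under the same assumptions about smartctl's text output; the return values are illustrative labels only, and the same parsing applies to the check in 14.

    import subprocess

    def check_error_log(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-l", "error", "-H", device],
                             capture_output=True, text=True).stdout
        if "No Errors Logged" in out:
            return None                      # no error log; trigger the self-check
        if "READ SECTOR" in out or "WRITE SECTOR" in out:
            return "bad sectors"             # replace the disk
        return "circuit board fault"         # replace the disk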

  11. Run the smartctl -t long /dev/sd[x] command to trigger the disk SMART self-check. After the command is executed, the time when the self-check is to be completed is displayed. After the self-check is completed, repeat 9 and 10 to check whether the disk is working properly.

    Example:

    smartctl -t long /dev/sda

    • If yes, go to 17.
    • If no, go to 18.

  12. Run the smartctl -d [sat|scsi]+megaraid,[DID] -H --all /dev/sd[x] command to check whether the hardware supports SMART.

    NOTE:
    • In the command, [sat|scsi] indicates the disk type. Try both types.
    • [DID] indicates the slot number. Try slots 0 to 15.

    For example, run the following commands in sequence:

    smartctl -d sat+megaraid,0 -H --all /dev/sda

    smartctl -d sat+megaraid,1 -H --all /dev/sda

    smartctl -d sat+megaraid,2 -H --all /dev/sda

    ...

    Try the command combinations of different disk types and slot numbers. If "SMART support is: Enabled" is displayed in the command output, the disk supports SMART; record the disk type and slot number of the command that succeeded for use in the following steps. If no combination displays "SMART support is: Enabled", the disk does not support SMART. (A minimal sketch of this scan follows this step.)

    • If yes, go to 13.
    • If no, go to 16.
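
    For illustration, the following Python sketch iterates over both disk types and slots 0 to 15 and records the first combination for which smartctl reports SMART support; the function name is hypothetical.

    import subprocess

    def find_megaraid_params(device="/dev/sda"):
        for disk_type in ("sat", "scsi"):
            for did in range(16):
                cmd = ["smartctl", "-d", "%s+megaraid,%d" % (disk_type, did),
                       "-H", "--all", device]
                out = subprocess.run(cmd, capture_output=True, text=True).stdout
                if "SMART support is: Enabled" in out:
                    return disk_type, did    # record for use in 13 to 15
        return None                          # the disk does not support SMART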

  13. Run the smartctl -d [sat|scsi]+megaraid,[DID] -H --all /dev/sd[x] command recorded in 12 to check basic SMART information and determine whether the disk is normal.

    Example:

    smartctl -d sat+megaraid,2 -H --all /dev/sda

    Check the value of SMART overall-health self-assessment test result in the command output. If the value is FAILED, the disk is faulty and needs to be replaced. If the value is PASSED, check the value of Reallocated_Sector_Ct or Elements in grown defect list. If the value is greater than 100, the disk is faulty and needs to be replaced.

    • If yes, go to 14.
    • If no, go to 18.

  14. Run the smartctl -d [sat|scsi]+megaraid,[DID] -l error -H /dev/sd[x] command to check the Glist of the disk and determine whether the hard disk is working properly.

    Example:

    smartctl -d sat+megaraid,2 -l error -H /dev/sda

    Check the Command/Feature_name column in the command output. If READ SECTOR(S) or WRITE SECTOR(S) is displayed, the disk has bad sectors. If other errors occur, the disk circuit board is faulty. Both errors indicate that the disk is abnormal and needs to be replaced.

    If "No Errors Logged" is displayed, no error log exists. You can trigger the disk SMART self-check.

    • If yes, go to 15.
    • If no, go to 18.

  15. Run the smartctl -d [sat|scsi]+megaraid,[DID] -t long /dev/sd[x] command to trigger the disk SMART self-check. After the command is executed, the time when the self-check is to be completed is displayed. After the self-check is completed, repeat 13 and 14 to check whether the disk is working properly.

    Example:

    smartctl -d sat+megaraid,2 -t long /dev/sda

    • If yes, go to 17.
    • If no, go to 18.

  16. If the configured RAID controller card does not support SMART, SMART information cannot be obtained for the disk. In this case, use the check tool provided by the RAID controller card vendor to rectify the fault, and then go to 17.

    For example, for LSI RAID controller cards, use the MegaCLI tool.

  17. On FusionInsight Manager, choose O&M > Alarm > Alarms, click Clear in the Operation column of the alarm, and check whether the alarm is reported on the same disk again.

    If the alarm is reported three times for the same disk, replace the disk.

    • If yes, go to 18.
    • If no, no further action is required.

Replace the disk.

  18. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Alarm > Alarms.
  19. View the detailed information about the alarm. Check the values of HostName and DiskName in the location information to obtain the information about the faulty disk for which the alarm is reported.
  20. Check whether the host for which the alarm is generated is the active OMS node or the active node of an instance in active/standby mode.

    • If yes, go to 21.
    • If no, go to 23.

  21. Log in to the node for which the alarm is generated as user root and run the following command to check the mount point of the faulty disk:

    df -h | grep "Name of the faulty disk"

    Check whether the mount point partition of the faulty disk is the cluster software installation directory (${BIGDATA_HOME}) or the data disk directory (${BIGDATA_DATA_HOME} by default).
    • If yes, go to 22.
    • If no, go to 23.

  22. Trigger an active/standby switchover to rectify the fault.

    • Active OMS node

      If O&M operations cannot be performed due to slow disk faults, such as system freezing, delayed page refreshing, or slow API response, and the alarm is generated for the active OMS node, perform the following operations to trigger an active/standby switchover to restore services:

      1. Log in to the active OMS node as user omm.
      2. Run the following command to perform an active/standby switchover:
        • For the IPv4 network: ${OMS_RUN_PATH}/workspace/ha/module/hacom/tools/ha_client_tool --ip=127.0.0.1 --port=20013 --switchover --name=product
        • For the IPv6 network: ${OMS_RUN_PATH}/workspace/ha/module/hacom/tools/ha_client_tool --ip=::1 --port=20013 --switchover --name=product
      3. After the active/standby switchover is successful, the system recovers. Perform 23 to replace the faulty disk.
    • Active node of an active/standby instance

      If the alarm is generated for the active node of an instance in active/standby mode and the slow disk fault affects the running of the instance, trigger an active/standby switchover on FusionInsight Manager to restore services.

      1. Log in to FusionInsight Manager and choose Cluster > Services > Name of the desired service.
      2. On the service details page, expand the More drop-down list and select Perform xxx Switchover.
      3. In the displayed dialog box, enter the password of the current login user and click OK.
      4. In the displayed dialog box, click OK to perform active/standby switchover.
      5. After the active/standby switchover is successful, the system recovers. Perform 23 to replace the faulty disk.

  23. Replace the disk.
  24. Check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 25.

Collect the fault information.

  25. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
  26. Select OMS for Service and click OK.
  27. Click the edit icon in the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively, and then click Download.
  28. Contact O&M engineers and provide the collected logs.

Alarm Clearance

This alarm is automatically cleared after the fault is rectified.

Related Information

None.
