
HBase Service Unavailable Due to Poor HDFS Performance

Symptom

The HBase component intermittently reports alarms indicating that the service is unavailable.

Cause Analysis

HDFS performance is poor, causing the HBase health check to time out, so the alarm is generated. Perform the following operations to confirm this:
  1. View the HMaster log (/var/log/Bigdata/hbase/hm/hbase-omm-xxx.log) and confirm that GC-related messages, such as system pause and jvm, are not printed frequently.
  2. Determine whether the fault is caused by poor HDFS performance using any of the following methods (command sketches for these checks follow this list):
    1. Run hbase shell to access the HBase shell, then run the list command and check whether listing all HBase tables takes a long time.
    2. Enable printing of HDFS debug logs, run the hadoop fs -ls /XXX/XXX command on a directory containing a large number of entries, and check whether the command takes a long time to return.
    3. Print the Java stack information of the HMaster process:

      su - omm

      jps

      jstack pid

  3. Check the jstack output. As shown in the following figure, the process is stuck in DFSClient.listPaths.
    Figure 1 Exception
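
The following command sketch corresponds to steps 1 to 2.2. The log file suffix (xxx) and the directory path (/XXX/XXX) are placeholders that must be replaced with the actual values in your cluster, and using HADOOP_ROOT_LOGGER=DEBUG,console is assumed to be an acceptable way to enable client debug output in your environment.

      # Step 1: count GC-related messages in the HMaster log (replace xxx with the actual file name suffix)
      grep -ciE "pause|jvm|gc" /var/log/Bigdata/hbase/hm/hbase-omm-xxx.log

      # Step 2.1: measure how long listing all HBase tables takes
      time echo "list" | hbase shell

      # Step 2.2: enable HDFS client debug logs and time the listing of a large directory
      export HADOOP_ROOT_LOGGER=DEBUG,console
      time hadoop fs -ls /XXX/XXX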
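
For steps 2.3 and 3, the following sketch locates the HMaster process and filters its stack for the HDFS listing call; <pid> stands for the HMaster process ID printed by jps.

      su - omm
      # Find the HMaster process ID
      jps | grep HMaster
      # Dump the stack and show context around the DFSClient.listPaths frame
      jstack <pid> | grep -B 5 -A 20 "DFSClient.listPaths"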

Solution

  1. If this alarm is caused by poor HDFS performance, check whether an earlier Impala version is in use or whether JournalNode was incorrectly deployed during the initial installation (that is, more than three JournalNodes were deployed). You can check the number of JournalNodes as sketched below.
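
As a quick check of the JournalNode deployment, the shared edits URI configured for the NameNode contains one host:port entry per JournalNode. This is a sketch that assumes the HDFS client configuration on the node where you run it points to this cluster.

      # The qjournal:// URI lists one host:port entry per JournalNode; count the entries
      hdfs getconf -confKey dfs.namenode.shared.edits.dir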