Updated on 2023-07-11 GMT+08:00

ALM-16004 Hive Service Unavailable

Description

This alarm is generated when the HiveServer service is unavailable. The system checks the HiveServer service status every 60 seconds.

This alarm is cleared when the HiveServer service is normal.

Attribute

Alarm ID    Alarm Severity    Automatically Cleared
16004       Critical          Yes

Parameters

Name          Meaning
Source        Specifies the cluster for which the alarm is generated.
ServiceName   Specifies the service for which the alarm is generated.
RoleName      Specifies the role for which the alarm is generated.
HostName      Specifies the host for which the alarm is generated.

Impact on the System

The system cannot provide data loading, query, and extraction services.

Possible Causes

  • Hive service unavailability may be related to faults of the Hive process itself as well as faults of the basic services that Hive depends on, such as ZooKeeper, Hadoop Distributed File System (HDFS), Yarn, and DBService.
    • The ZooKeeper service is abnormal.
    • The HDFS service is abnormal.
    • The Yarn service is abnormal.
    • The DBService service is abnormal.
    • The Hive service process is abnormal. If the alarm is caused by a Hive process fault, the alarm is reported with a delay of about 5 minutes.
  • The network communication between Hive and the basic services is interrupted.
  • The permission on the HDFS temporary directory of Hive is abnormal.
  • The local disk space of the Hive node is insufficient.

Procedure

Check the HiveServer/MetaStore process status.

  1. On the FusionInsight Manager portal, click Cluster > Name of the desired cluster > Services > Hive > Instance. In the Hive instance list, check whether the HiveServer or MetaStore instances are in the Unknown state.

    • If yes, go to 2.
    • If no, go to 4.

  2. In the Hive instance list, choose More > Restart Instance to restart the HiveServer/MetaStore process.
  3. In the alarm list, check whether Hive Service Unavailable is cleared.

    • If yes, no further action is required.
    • If no, go to 4.

Check the ZooKeeper service status.

  4. On the FusionInsight Manager, check whether the alarm list contains Process Fault.

    • If yes, go to 5.
    • If no, go to 8.

  5. In the Process Fault alarm, check whether ServiceName is ZooKeeper.

    • If yes, go to 6.
    • If no, go to 8.

  6. Rectify the fault by following the steps provided in "ALM-12007 Process Fault".
  7. In the alarm list, check whether Hive Service Unavailable is cleared.

    • If yes, no further action is required.
    • If no, go to 8.

Check the HDFS service status.

  8. On the FusionInsight Manager, check whether the alarm list contains HDFS Service Unavailable.

    • If yes, go to 9.
    • If no, go to 11.

  9. Rectify the fault by following the steps provided in "ALM-14000 HDFS Service Unavailable".
  10. In the alarm list, check whether Hive Service Unavailable is cleared.

    • If yes, no further action is required.
    • If no, go to 11.

Check the Yarn service status.

  11. In the FusionInsight Manager alarm list, check whether Yarn Service Unavailable is generated.

    • If yes, go to 12.
    • If no, go to 14.

  2. Rectify the fault. For details, see "ALM-18000 Yarn Service Unavailable".
  13. In the alarm list, check whether Hive Service Unavailable is cleared.

    • If yes, no further action is required.
    • If no, go to 14.

Check the DBService service status.

  14. In the FusionInsight Manager alarm list, check whether DBService Service Unavailable is generated.

    • If yes, go to 15.
    • If no, go to 17.

  2. Rectify the fault. For details, see "ALM-27001 DBService Service Unavailable".
  16. In the alarm list, check whether Hive Service Unavailable is cleared.

    • If yes, no further action is required.
    • If no, go to 17.

Check the network connection between Hive and the ZooKeeper, HDFS, Yarn, and DBService services.

  17. On the FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Hive.
  18. Click Instance.

    The HiveServer instance list is displayed.

  19. Click the Host Name in the row of the active HiveServer.

    The active HiveServer host status page is displayed.

  20. Record the IP address under Basic Information.
  21. Use the IP address obtained in 20 to log in to the host where the active HiveServer runs as user omm.
  22. Run the ping command to check whether communication between the host that runs the active HiveServer and the hosts that run the ZooKeeper, HDFS, Yarn, and DBService services is normal. (Obtain the IP addresses of these hosts in the same way as that of the active HiveServer. Sample commands are provided after this list.)

    • If yes, go to 25.
    • If no, go to 23.

  23. Contact the administrator to restore the network.
  24. In the alarm list, check whether Hive Service Unavailable is cleared.

    • If yes, no further action is required.
    • If no, go to 25.
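
The following is a minimal sketch of the connectivity check in 22, run on the active HiveServer node as user omm. The host names zk_host, hdfs_host, yarn_host, and dbservice_host are placeholders; replace them with the IP addresses obtained from FusionInsight Manager.

    # Replace the placeholders with the IP addresses of the hosts that run
    # the ZooKeeper, HDFS, Yarn, and DBService services.
    for host in zk_host hdfs_host yarn_host dbservice_host; do
        if ping -c 3 "$host" > /dev/null 2>&1; then
            echo "$host is reachable"
        else
            echo "$host is NOT reachable"
        fi
    done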

Check the permission on the HDFS temporary directory.

  25. Log in to the node where the HDFS client is located and run the following commands to go to the HDFS client installation directory and set up the client environment (a consolidated sketch is provided after this procedure):

    cd Client installation directory

    source bigdata_env

    kinit user with the supergroup permission (Skip this step for common clusters.)

  26. Run the following command to check whether the permission on the data warehouse directory is 770:

    hdfs dfs -ls /tmp | grep hive-scratch

    • If yes, go to 29.
    • If no, go to 27.

  27. Run the following command to restore the default data warehouse permission:

    hdfs dfs -chmod 770 /tmp/hive-scratch

  28. Wait for several minutes and check whether the Hive Service Unavailable alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 29.
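
The commands in 25 to 27 can be combined as in the following sketch. The client installation directory (/opt/client) and the authentication user (hdfs_admin) are assumptions; replace them with the actual client path and a user that has the supergroup permission.

    # Assumed client installation directory; adjust to the actual path.
    cd /opt/client
    source bigdata_env
    # Security clusters only: authenticate as a user with the supergroup permission.
    kinit hdfs_admin
    # The permission on /tmp/hive-scratch should be drwxrwx--- (770).
    hdfs dfs -ls /tmp | grep hive-scratch
    # Restore the default permission if it differs from 770.
    hdfs dfs -chmod 770 /tmp/hive-scratch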

Check whether the local disk space is normal.

  29. Run the df -h command and check whether the disk usage of the root directory and of the /srv, /var, and /opt directories exceeds 95% (a sample check is shown after this list).

    • If yes, go to 30.
    • If no, go to 31.

  30. Clear unnecessary files in the corresponding directories so that the disk usage drops below 80%. Wait for several minutes and check whether the Hive Service Unavailable alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 31.
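
The disk usage check in 29 can be scripted as shown in the following sketch; the 95% threshold and the directory list come from that step.

    # Show usage of the partitions that hold the root directory, /srv, /var, and /opt.
    df -h / /srv /var /opt
    # Print any of these mount points whose usage exceeds 95%.
    df -P / /srv /var /opt | awk 'NR > 1 && int($5) > 95 {print $6 " usage is " $5}'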

Collect fault information.

  31. On the FusionInsight Manager, choose O&M > Log > Download.
  32. Select the following services in the required cluster from Service:

    • ZooKeeper
    • HDFS
    • Yarn
    • DBService
    • Hive

  33. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
  34. Contact the O&M personnel and send the collected logs.

Alarm Clearing

After the fault is rectified, the system automatically clears this alarm.

Related Information

None