ALM-16004 Hive Service Unavailable
Alarm Description
This alarm is generated when the HiveServer service is unavailable. The system checks the HiveServer service status every 60 seconds.
This alarm is cleared when the HiveServer service is normal.
Alarm Attributes
| Alarm ID | Alarm Severity | Alarm Type | Service Type | Auto Cleared |
| --- | --- | --- | --- | --- |
| 16004 | Critical | Quality of service | Hive | Yes |
Alarm Parameters
| Type | Parameter | Description |
| --- | --- | --- |
| Location Information | Source | Specifies the cluster for which the alarm is generated. |
| | ServiceName | Specifies the service for which the alarm is generated. |
| | RoleName | Specifies the role for which the alarm is generated. |
| | HostName | Specifies the host for which the alarm is generated. |
Impact on the System
The system cannot provide data loading, query, and extraction services.
Possible Causes
Hive service unavailability may result from a fault in the Hive process itself or in the basic services that Hive depends on, such as ZooKeeper, Hadoop Distributed File System (HDFS), Yarn, and DBService. The specific causes are as follows:
- The ZooKeeper service is abnormal.
- The HDFS service is abnormal.
- The Yarn service is abnormal.
- The DBService service is abnormal.
- The Hive service process is abnormal. If the alarm is caused by a Hive process fault, the alarm is reported with a delay of about 5 minutes.
- The network communication between Hive and the basic services is interrupted.
- The permission on the HDFS temporary directory of Hive is abnormal.
- The local disk space of the Hive node is insufficient.
Handling Procedure
Check the HiveServer/MetaStore process status.

1. On the FusionInsight Manager portal, click Cluster > Name of the desired cluster > Services > Hive > Instance. In the Hive instance list, check whether the HiveServer or MetaStore instances are in the Unknown state. (A supplementary OS-level process check is sketched after this block.)
2. In the Hive instance list, choose More > Restart Instance to restart the HiveServer/MetaStore process.
3. In the alarm list, check whether Hive Service Unavailable is cleared.
   - If yes, no further action is required.
   - If no, go to 4.
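A minimal OS-level sketch of the check in 1, run as user omm on a node that hosts a HiveServer or MetaStore instance. The grep patterns are assumptions and may need adjusting for your cluster version; the authoritative status is the one shown on FusionInsight Manager.

ps -ef | grep -i '[h]iveserver'   # expect one running Java process per HiveServer instance on this node
ps -ef | grep -i '[m]etastore'    # expect one running Java process per MetaStore instance on this node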
Check the ZooKeeper service status.

4. On FusionInsight Manager, check whether the alarm list contains Process Fault.
5. In the Process Fault alarm, check whether ServiceName is ZooKeeper.
6. Rectify the fault by following the steps provided in "ALM-12007 Process Fault".
7. In the alarm list, check whether Hive Service Unavailable is cleared.
   - If yes, no further action is required.
   - If no, go to 8.
Check the HDFS service status.

8. On FusionInsight Manager, check whether the alarm list contains HDFS Service Unavailable.
9. Rectify the fault by following the steps provided in "ALM-14000 HDFS Service Unavailable".
10. In the alarm list, check whether Hive Service Unavailable is cleared.
    - If yes, no further action is required.
    - If no, go to 11.
Check the Yarn service status.

11. In the FusionInsight Manager alarm list, check whether Yarn Service Unavailable is generated.
12. Rectify the fault. For details, see "ALM-18000 Yarn Service Unavailable".
13. In the alarm list, check whether Hive Service Unavailable is cleared.
    - If yes, no further action is required.
    - If no, go to 14.
Check the DBService service status.

14. In the FusionInsight Manager alarm list, check whether DBService Service Unavailable is generated.
15. Rectify the fault. For details, see "ALM-27001 DBService Service Unavailable".
16. In the alarm list, check whether Hive Service Unavailable is cleared.
    - If yes, no further action is required.
    - If no, go to 17.
Check the network connection between Hive and the ZooKeeper, HDFS, Yarn, and DBService services.

17. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Hive.
18. Click Instance. The HiveServer instance list is displayed.
19. Click the Host Name in the row of the active HiveServer. The host status page of the active HiveServer is displayed.
20. Record the IP address under Basic Information.
21. Use the IP address obtained in 20 to log in to the host where the active HiveServer runs as user omm.
22. Run the ping command to check whether communication between the host that runs the active HiveServer and the hosts that run the ZooKeeper, HDFS, Yarn, and DBService services is normal. (Obtain the IP addresses of these hosts in the same way as the IP address of the active HiveServer.) A minimal example is sketched after this block.
23. Contact the administrator to restore the network.
24. In the alarm list, check whether Hive Service Unavailable is cleared.
    - If yes, no further action is required.
    - If no, go to 25.
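A minimal sketch of the connectivity check in 22, run as user omm on the active HiveServer host. The values in angle brackets are placeholders for the IP addresses recorded from FusionInsight Manager.

ping -c 3 <IP address of a ZooKeeper host>    # repeat for each host of each dependent service
ping -c 3 <IP address of an HDFS host>
ping -c 3 <IP address of a Yarn host>
ping -c 3 <IP address of a DBService host>
# 0% packet loss indicates normal communication; time-outs or "Destination Host Unreachable"
# indicate an interrupted connection to be restored in 23.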
Check the permission on the HDFS temporary directory.

25. Log in to the node where the HDFS client is installed and run the following commands to go to the HDFS client installation directory:
    cd Client installation directory
    source bigdata_env
    kinit User with the supergroup permission (skip this step for normal clusters)
26. Run the following command to check whether the permission on the data warehouse directory is 770 (an annotated example is sketched after this block):
    hdfs dfs -ls /tmp | grep hive-scratch
27. Run the following command to restore the default data warehouse permission:
    hdfs dfs -chmod 770 /tmp/hive-scratch
28. Wait several minutes and check whether the Hive Service Unavailable alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 29.
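An annotated sketch of 26 and 27. The sample output line is illustrative; the owner and group shown (hive and hadoop) are assumptions and may differ, and only the permission string matters.

hdfs dfs -ls /tmp | grep hive-scratch
# drwxrwx---   - hive hadoop          0 <date> /tmp/hive-scratch
# "drwxrwx---" corresponds to mode 770; if any other permission string is shown, restore the default:
hdfs dfs -chmod 770 /tmp/hive-scratch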
Check whether the local disk space is sufficient.

29. Run the df -h command to check the disk usage. Check whether the disk usage of /, /srv, /var, and the cluster installation directory (/opt by default) exceeds 95%. (A sketch for locating large directories is provided after this block.)
30. Clear unnecessary files in the corresponding directories to ensure that the available disk space is greater than 80%. Wait several minutes and check whether the Hive Service Unavailable alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 31.
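A minimal sketch for 29 and 30, assuming GNU coreutils on the node; it only locates the largest directories and does not decide which files are safe to delete.

df -h / /srv /var /opt                                         # check whether any partition exceeds 95% usage
du -xh --max-depth=2 /var 2>/dev/null | sort -rh | head -n 20  # for example, list the largest entries under /var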
Collect fault information.

31. On FusionInsight Manager, choose O&M > Log > Download.
32. Select the following services in the required cluster from Service:
    - ZooKeeper
    - HDFS
    - Yarn
    - DBService
    - Hive
33. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
34. Contact the O&M engineers and send the collected logs.
Alarm Clearance
After the fault is rectified, the system automatically clears this alarm.
Related Information
None.