ALM-43001 Spark2x Service Unavailable
Description
The system checks the Spark2x service status every 300 seconds. This alarm is generated when the Spark2x service is unavailable.
This alarm is cleared when the Spark2x service recovers.
Attribute
| Alarm ID | Alarm Severity | Auto Clear |
|---|---|---|
| 43001 | Critical | Yes |
Parameters
| Name | Meaning |
|---|---|
| Source | Specifies the cluster for which the alarm is generated. |
| ServiceName | Specifies the service for which the alarm is generated. |
| RoleName | Specifies the role for which the alarm is generated. |
| HostName | Specifies the host for which the alarm is generated. |
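If you handle ALM-43001 programmatically (for example, in a monitoring integration), the parameters above are usually delivered together as a location string. The following minimal sketch shows one way to split such a string into the named parameters; the comma-separated `key=value` layout and the sample values are assumptions for illustration, not a documented format, so check the actual alarm payload produced by your cluster.

```python
# Minimal sketch: split an alarm location string into its parameters.
# The "key=value,key=value" layout and the sample values below are ASSUMPTIONS
# for illustration; verify the real payload format on your FusionInsight Manager.

def parse_alarm_location(location: str) -> dict:
    params = {}
    for field in location.split(","):
        if "=" in field:
            key, value = field.split("=", 1)
            params[key.strip()] = value.strip()
    return params


if __name__ == "__main__":
    sample = "Source=my_cluster,ServiceName=Spark2x1,RoleName=JDBCServer2x,HostName=node-ana-1"
    parsed = parse_alarm_location(sample)
    # ServiceName identifies the Spark2x instance that raised the alarm, which in
    # turn determines the Hive instance to inspect (Spark2x1 -> Hive1, and so on).
    print(parsed["ServiceName"])
```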
Impact on the System
The tasks submitted by users fail to be executed.
Possible Causes
- The KrbServer service is abnormal.
- The LdapServer service is abnormal.
- The ZooKeeper service is abnormal.
- The HDFS service is abnormal.
- The Yarn service is abnormal.
- The corresponding Hive service is abnormal.
- The Spark2x assembly package is abnormal.
Procedure
If the alarm cause is an abnormal Spark2x assembly package, wait for about 10 minutes. The alarm is automatically cleared.
Check whether service unavailability alarms exist for the services that Spark2x depends on.
1. On FusionInsight Manager, choose O&M > Alarm > Alarms.
2. Check whether the following alarms exist in the alarm list:
   - ALM-25500 KrbServer Service Unavailable
   - ALM-25000 LdapServer Service Unavailable
   - ALM-13000 ZooKeeper Service Unavailable
   - ALM-14000 HDFS Service Unavailable
   - ALM-18000 Yarn Service Unavailable
   - ALM-16004 Hive Service Unavailable
   NOTE: If the multi-instance function is enabled in the cluster and multiple Spark2x service instances are installed, determine the Spark2x service instance for which the alarm is generated based on the value of ServiceName in Location, and then check whether the corresponding Hive service is faulty. Spark2x corresponds to Hive, Spark2x1 corresponds to Hive1, and so on.
3. Handle the service unavailability alarms based on the troubleshooting methods provided in the alarm help. (An optional client-side check of the dependent services is sketched after this step.)
   After all the service unavailability alarms are cleared, wait a few minutes and check whether this alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 4.
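In addition to checking the alarm list in the Manager UI, you can optionally verify the dependent services from a node where the cluster client is installed. The sketch below is not part of the official procedure; it assumes the client environment has been sourced, `kinit` has been run for a user with sufficient rights, a ZooKeeper quorum host and client port (2181 here) are known for your cluster, and the ZooKeeper `ruok` four-letter command has not been disabled.

```python
# Optional sanity checks for the services Spark2x depends on, run from a cluster
# client node. Assumptions: the client profile has been sourced, `kinit` has been
# run, ZooKeeper listens on <zk-host>:2181, and the "ruok" command is enabled.
import socket
import subprocess


def zookeeper_ok(host: str, port: int = 2181, timeout: float = 5.0) -> bool:
    """Send the 'ruok' four-letter word; a healthy server answers 'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")
            return sock.recv(16).startswith(b"imok")
    except OSError:
        return False


def run(cmd: list) -> bool:
    """Run a client command and report whether it exited successfully."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"{' '.join(cmd)}: exit code {result.returncode}")
    return result.returncode == 0


if __name__ == "__main__":
    print("ZooKeeper reachable:", zookeeper_ok("zk-host-1"))  # replace with a real quorum host
    run(["hdfs", "dfsadmin", "-report"])                       # HDFS overview
    run(["yarn", "node", "-list"])                             # Yarn NodeManager status
```

If any of these checks fail, troubleshoot the corresponding service first, as described in step 3.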
Collect fault information.
4. On FusionInsight Manager, choose O&M > Log > Download.
5. Select the following nodes of the required cluster from Service (Hive is the specific Hive service determined based on ServiceName in the alarm location information):
   - KrbServer
   - LdapServer
   - ZooKeeper
   - HDFS
   - Yarn
   - Hive
6. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively, and then click Download. (A small sketch for computing this window follows the procedure.)
7. Contact the O&M personnel and send the collected logs.
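As a small aid for step 6, the log-collection window is simply the alarm generation time minus and plus 10 minutes; a trivial sketch, assuming the alarm time (here a hypothetical value) is known:

```python
# Compute the log-collection window: 10 minutes before and after the alarm time.
from datetime import datetime, timedelta

alarm_time = datetime(2024, 5, 14, 10, 32)  # hypothetical alarm generation time
start_date = alarm_time - timedelta(minutes=10)
end_date = alarm_time + timedelta(minutes=10)
print(start_date.strftime("%Y-%m-%d %H:%M"), "->", end_date.strftime("%Y-%m-%d %H:%M"))
```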
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None