ALM-43001 Spark2x Service Unavailable
Alarm Description
The system checks the Spark2x service status every 300 seconds. This alarm is generated when the Spark2x service is unavailable.
This alarm is cleared when the Spark2x service recovers.
In MRS 3.3.0-LTS and later versions, the Spark2x component is renamed Spark, and the role names in the component are also changed. For example, JobHistory2x is changed to JobHistory. Refer to the descriptions and operations related to the component name and role names in the document based on your MRS version.
Alarm Attributes
Alarm ID | Alarm Severity | Auto Cleared
---|---|---
43001 | Critical | Yes
Alarm Parameters
Parameter | Description
---|---
Source | Specifies the cluster for which the alarm was generated.
ServiceName | Specifies the service for which the alarm was generated.
RoleName | Specifies the role for which the alarm was generated.
HostName | Specifies the host for which the alarm was generated.
Impact on the System
The Spark tasks submitted by users fail to be executed.
Possible Causes
- The KrbServer service is abnormal.
- The LdapServer service is abnormal.
- ZooKeeper is abnormal.
- HDFS is abnormal.
- Yarn is abnormal.
- The corresponding Hive service is abnormal.
- The Spark2x assembly package is abnormal.
- The NameNode memory is insufficient.
- The memory of the Spark process is insufficient.
Handling Procedure
If the alarm cause is an abnormal Spark2x assembly package, wait about 10 minutes. The alarm is automatically cleared.
Check whether service unavailability alarms exist in services that Spark2x depends on.

1. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Alarm > Alarms.
2. Check whether the following alarms exist in the alarm list:
   - ALM-25500 KrbServer Service Unavailable
   - ALM-25000 LdapServer Service Unavailable
   - ALM-13000 ZooKeeper Service Unavailable
   - ALM-14000 HDFS Service Unavailable
   - ALM-18000 Yarn Service Unavailable
   - ALM-16004 Hive Service Unavailable
3. Handle these alarms based on the troubleshooting methods provided in the alarm help. After they are cleared, wait a few minutes and check whether ALM-43001 Spark2x Service Unavailable is cleared.
   - If yes, no further action is required.
   - If no, go to 4.
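Before opening each dependency's alarm page, the services above can be probed quickly from a client node. The sketch below is illustrative only: the ZooKeeper host/port are placeholders, and it assumes the `hdfs` and `yarn` client tools are on the PATH (it skips any probe whose client is missing).

```shell
#!/usr/bin/env bash
# Quick health probes for the services that Spark2x depends on.
# ZK_HOST/ZK_PORT are placeholders; set them to your cluster's values.
ZK_HOST=${ZK_HOST:-localhost}; ZK_PORT=${ZK_PORT:-2181}

probe() {  # probe <name> <command...> -> prints "<name>: OK|FAILED|skipped"
  local name=$1; shift
  if command -v "$1" >/dev/null 2>&1; then
    if "$@" >/dev/null 2>&1; then echo "$name: OK"; else echo "$name: FAILED"; fi
  else
    echo "$name: client not found, skipped"
  fi
}

# ZooKeeper answers the 'ruok' four-letter command with 'imok' when healthy.
if command -v nc >/dev/null 2>&1; then
  resp=$(echo ruok | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null)
  [ "$resp" = "imok" ] && echo "ZooKeeper: OK" || echo "ZooKeeper: no imok response"
else
  echo "ZooKeeper: nc not found, skipped"
fi

probe "HDFS" hdfs dfsadmin -safemode get   # healthy: "Safe mode is OFF"
probe "Yarn" yarn node -list               # healthy: lists running NodeManagers
```

A failed probe only narrows down which dependency to inspect first; the corresponding service alarm help remains the authoritative recovery procedure.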
Check whether the NameNode memory is insufficient.

4. Check whether the NameNode memory is insufficient.
5. Restart the NameNode to release the memory. Then, check whether this alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 6.
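One hedged way to gauge NameNode heap pressure before restarting is to sample its old-generation usage with the JDK's `jstat` tool. The process-name pattern below is an assumption (role process names vary by deployment), and `jstat` must come from the same JDK that runs the NameNode:

```shell
#!/usr/bin/env bash
# Estimate NameNode old-gen heap usage. 'proc_namenode' is a hypothetical
# process-name pattern; adjust it to how the role appears in your ps output.
heap_pct() {  # heap_pct <used_kb> <capacity_kb> -> integer percent
  awk -v u="$1" -v c="$2" 'BEGIN { printf "%d\n", (u / c) * 100 }'
}

pid=$(pgrep -f 'proc_namenode' | head -n1)
if [ -n "$pid" ] && command -v jstat >/dev/null 2>&1; then
  # In 'jstat -gc' output, column 7 is old-gen capacity (OC) and
  # column 8 is old-gen usage (OU), both in KB.
  read -r oc ou < <(jstat -gc "$pid" | awk 'NR==2 {print $7, $8}')
  echo "NameNode old-gen usage: $(heap_pct "$ou" "$oc")%"
else
  echo "NameNode process or jstat not found; check heap usage on Manager instead."
fi
```

Persistently high old-gen usage after full GCs is the signal that the heap, not a transient load spike, is the bottleneck.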
Check whether the memory of the Spark process is insufficient.

6. Check whether the memory of the Spark process is insufficient due to memory-related configuration changes.
7. Ensure that the Spark process has sufficient memory, or expand the cluster capacity. Then, check whether this alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 8.
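To spot memory-related modifications, it can help to list the explicit memory settings in the Spark client configuration. A minimal sketch, assuming the standard `spark-defaults.conf` property names; the default path below is an assumption, so point it at your cluster's actual client directory:

```shell
#!/usr/bin/env bash
# Print explicit memory-related Spark settings so recent lowering of
# driver/executor memory is easy to spot.
show_mem_settings() {  # show_mem_settings <path-to-spark-defaults.conf>
  grep -E '^spark\.((driver|executor)\.memory|memory\.)' "$1" 2>/dev/null \
    || echo "no explicit memory settings found (defaults in effect)"
}

# /opt/client/Spark2x/spark/conf is a placeholder client path.
show_mem_settings "${SPARK_CONF_DIR:-/opt/client/Spark2x/spark/conf}/spark-defaults.conf"
```

Compare the printed values against what the workload needs; settings that were recently reduced (for example, `spark.driver.memory`) are the usual culprits behind this cause.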
Collect fault information.

8. On FusionInsight Manager, choose O&M > Log > Download.
9. In the Service area, select the following services of the desired cluster (Hive is the specific Hive service determined by ServiceName in the alarm location information):
   - KrbServer
   - LdapServer
   - ZooKeeper
   - HDFS
   - Yarn
   - Hive
10. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
11. Contact O&M personnel and provide the collected logs.
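When filling in the collection window, the start and end times can be computed from the alarm generation time. A small sketch, assuming GNU `date` (the `-d` option) is available on the node where you prepare the request:

```shell
#!/usr/bin/env bash
# Compute the log-collection window: 10 minutes before and after the
# alarm generation time. Requires GNU date for the -d relative syntax.
log_window() {  # log_window "YYYY-MM-DD HH:MM:SS" -> prints start, then end
  date -d "$1 10 minutes ago" '+%Y-%m-%d %H:%M:%S'
  date -d "$1 10 minutes"     '+%Y-%m-%d %H:%M:%S'
}

log_window "2024-01-01 12:00:00"
# first line:  start of the window (alarm time minus 10 minutes)
# second line: end of the window (alarm time plus 10 minutes)
```

Enter the first printed timestamp as Start Date and the second as End Date in the download dialog.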
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None