ALM-13000 ZooKeeper Service Unavailable
Description
The system checks the ZooKeeper service status every 60 seconds. This alarm is generated when the ZooKeeper service is unavailable.
This alarm is cleared when the ZooKeeper service recovers.
Attribute

Alarm ID | Alarm Severity | Auto Clear
---|---|---
13000 | Critical | Yes
Parameters

Name | Meaning
---|---
Source | Specifies the cluster for which the alarm is generated.
ServiceName | Specifies the service for which the alarm is generated.
RoleName | Specifies the role for which the alarm is generated.
HostName | Specifies the host for which the alarm is generated.
Impact on the System
ZooKeeper cannot provide coordination services for upper-layer components, and components that depend on ZooKeeper may not run properly.
Possible Causes
- The DNS is installed on the ZooKeeper node.
- The network is faulty.
- The KrbServer service is abnormal.
- The ZooKeeper instance is abnormal.
- The disk capacity is insufficient.
Procedure
Check the DNS.

1. Check whether a DNS is installed on the node where the ZooKeeper instance is located: on that Linux node, run the cat /etc/resolv.conf command and check whether the file is empty.
2. Run the service named status command to check whether the DNS service is started.
3. Run the service named stop command to stop the DNS service. If "Shutting down name server BIND waiting for named to shut down (28s)" is displayed, the DNS service is stopped successfully. Then comment out any content in /etc/resolv.conf.
4. On the O&M > Alarm > Alarms tab, check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 5.
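The DNS checks above can be sketched as a short shell script. This is a hedged sketch: it assumes the BIND service is named `named`, as in this guide, and tries `systemctl` first with the legacy `service` command as a fallback; adapt the service name to your distribution.

```shell
#!/bin/sh
# Sketch of the DNS checks above (assumes the DNS service is "named").

# Check whether /etc/resolv.conf exists and has uncommented entries.
if [ -s /etc/resolv.conf ] && grep -q '^[^#]' /etc/resolv.conf; then
  resolv_status="configured"
else
  resolv_status="empty"
fi

# Check whether the named (DNS) service is running.
if systemctl is-active named >/dev/null 2>&1 \
   || service named status >/dev/null 2>&1; then
  dns_status="running"
else
  dns_status="not-running"
fi

echo "resolv.conf: ${resolv_status}; named: ${dns_status}"
```

If the script reports `configured` or `running`, proceed with steps 2 and 3 above to stop the DNS and comment out the resolv.conf entries.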
Check the network status.

5. On the Linux node where the ZooKeeper instance is located, run the ping command to check whether the host names of the other nodes where ZooKeeper instances are located can be pinged successfully.
6. If they cannot, modify the IP addresses in /etc/hosts and add the missing host name and IP address mappings.
7. Run the ping command again to check whether the host names of the other nodes can now be pinged successfully.
8. On the O&M > Alarm > Alarms tab, check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 9.
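The ping check can be run across the whole quorum with a loop like the one below. `ZK_HOSTS` is a placeholder for your actual ZooKeeper host names; it defaults to localhost here only so the sketch is runnable as-is.

```shell
#!/bin/sh
# Ping every ZooKeeper host once; hosts that fail likely need an
# /etc/hosts (or DNS) mapping. ZK_HOSTS is a placeholder list.
ZK_HOSTS="${ZK_HOSTS:-localhost}"

unreachable=0
for host in $ZK_HOSTS; do
  if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
    echo "$host: reachable"
  else
    echo "$host: UNREACHABLE - check its /etc/hosts entry"
    unreachable=$((unreachable + 1))
  fi
done
echo "unreachable hosts: $unreachable"
```

Any host reported UNREACHABLE is a candidate for the /etc/hosts fix in step 6.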
Check the KrbServer service status. (Skip this step if the cluster is in normal mode.)

9. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services.
10. Check whether the KrbServer service is normal.
11. If it is not, rectify the fault by following "ALM-25500 KrbServer Service Unavailable", then check whether the KrbServer service has recovered.
12. On the O&M > Alarm > Alarms tab, check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 13.
Check the ZooKeeper service instance status.

13. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > ZooKeeper > quorumpeer.
14. Check whether the ZooKeeper instances are normal.
15. Select the instances whose status is not good and choose More > Restart Instance.
16. Check whether the instance status is good after the restart.
17. On the O&M > Alarm > Alarms tab, check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 18.
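Besides the Manager UI, a standard ZooKeeper liveness probe is the `ruok` four-letter command on the client port (2181 by default): a healthy instance replies `imok`. This is a hedged sketch, not part of this guide's procedure: it assumes `nc` is available on the node and, on ZooKeeper 3.5+, that `ruok` is listed in the server's `4lw.commands.whitelist`.

```shell
#!/bin/sh
# Probe a ZooKeeper instance with the 'ruok' four-letter command.
# ZK_HOST/ZK_PORT are placeholders; 2181 is ZooKeeper's default client port.
ZK_HOST="${ZK_HOST:-localhost}"
ZK_PORT="${ZK_PORT:-2181}"

if command -v nc >/dev/null 2>&1; then
  reply=$(printf 'ruok' | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null)
  if [ "$reply" = "imok" ]; then
    zk_state="serving"
  else
    zk_state="not-serving"
  fi
else
  zk_state="unknown"   # nc is not installed on this host
fi
echo "ZooKeeper at ${ZK_HOST}:${ZK_PORT}: ${zk_state}"
```

An instance that reports `not-serving` is a candidate for the restart in step 15.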
Check the disk status.

18. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > ZooKeeper > quorumpeer, and note the hosts where the ZooKeeper instances are located.
19. On FusionInsight Manager, click Host.
20. In the Disk column, check whether the disk space of each node where a ZooKeeper instance is located is insufficient (disk usage exceeds 80%).
21. If it is, expand the disk capacity. For details, see "ALM-12017 Insufficient Disk Capacity".
22. On the O&M > Alarm > Alarms tab, check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 23.
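On the nodes themselves, the 80% threshold used by this alarm can be checked with `df`. A minimal sketch (the value of THRESHOLD mirrors the 80% figure above):

```shell
#!/bin/sh
# List filesystems whose usage exceeds the threshold used by this alarm (80%).
THRESHOLD=80
over=$(df -P | awk -v t="$THRESHOLD" \
  'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 > t) print $6 " at " use "%" }')
if [ -n "$over" ]; then
  echo "Filesystems over ${THRESHOLD}% usage:"
  echo "$over"
else
  echo "No filesystem exceeds ${THRESHOLD}% usage"
fi
```

Any filesystem listed here that backs a ZooKeeper data or log directory needs the capacity expansion described in "ALM-12017 Insufficient Disk Capacity".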
Collect fault information.

23. On FusionInsight Manager, choose O&M > Log > Download.
24. In the Service area, select the following services of the required cluster (KrbServer logs do not need to be downloaded in normal mode):
    - ZooKeeper
    - KrbServer
25. Click the icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then click Download.
26. Contact O&M personnel and send the collected logs.
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None