ALM-43200 Elasticsearch Service Unavailable
Alarm Description
The system checks the Elasticsearch service availability every 60 seconds. This alarm is generated when the system detects that the Elasticsearch service is unavailable. This alarm is cleared when the Elasticsearch service recovers.
Alarm Attributes
Alarm ID | Alarm Severity | Alarm Type | Service Type | Auto Cleared
---|---|---|---|---
43200 | Critical | Quality of service | Elasticsearch | Yes
Alarm Parameters
Type | Parameter | Description
---|---|---
Location Information | Source | Specifies the cluster for which the alarm is generated.
&nbsp; | ServiceName | Specifies the service for which the alarm is generated.
&nbsp; | RoleName | Specifies the role for which the alarm is generated.
&nbsp; | HostName | Specifies the host for which the alarm is generated.
Impact on the System
The Elasticsearch service is unavailable, and index data cannot be read or written.
Possible Causes
- The network connection is abnormal.
- A service that Elasticsearch depends on is unavailable.
- The EsMaster instance is abnormal.
Handling Procedure
Check whether the network is normal.

1. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Elasticsearch > Instance to view the service plane IP addresses of the EsMaster instances.
2. Log in to the server where any EsMaster instance resides as user root.
3. Run the ping IP address of another EsMaster instance command to check whether the servers of the other EsMaster instances are reachable.
   - If yes, go to 6.
   - If no, go to 4.
4. Contact the system administrator to rectify the network fault.
5. Check whether the alarm is cleared from the alarm list.
   - If yes, no further action is required.
   - If no, go to 6.
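The reachability check above can be scripted. This is a minimal sketch; the function only wraps ping, and the IP addresses you pass in must be the EsMaster service plane addresses obtained from FusionInsight Manager (none are hard-coded here).

```shell
#!/bin/bash
# Report whether a given EsMaster server answers ICMP echo requests.
check_reachable() {
  local ip="$1"
  # -c 2: send two probes; -W 2: wait at most 2 seconds for each reply
  if ping -c 2 -W 2 "$ip" > /dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip UNREACHABLE"
  fi
}

# Usage: run against every other EsMaster service plane IP, e.g.
#   for ip in <EsMaster IP list>; do check_reachable "$ip"; done
```

Any address reported as UNREACHABLE points to the network fault to be rectified in step 4.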
Check whether the services that Elasticsearch depends on are normal.

6. Choose Cluster > Name of the desired cluster > Services > ZooKeeper to check whether the ZooKeeper service is healthy and whether it can be connected to. For details, see the ZooKeeper service documentation. If the cluster is in security mode, also check whether the KrbServer service is running properly.
   - If yes, go to 8.
   - If no, go to 7.
7. Restore the faulty dependent service, and then check whether the alarm is cleared from the alarm list.
   - If yes, no further action is required.
   - If no, go to 8.
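Basic ZooKeeper connectivity can also be probed from the shell. A sketch using ZooKeeper's standard "ruok" four-letter command (the host and port below are assumptions; 2181 is only ZooKeeper's stock client port, and on ZooKeeper 3.5+ the command must be enabled via the 4lw.commands.whitelist property):

```shell
#!/bin/bash
# Assumed connection details -- replace with your deployment's values.
ZK_HOST="${ZK_HOST:-127.0.0.1}"
ZK_PORT="${ZK_PORT:-2181}"

# Send ZooKeeper's "ruok" four-letter command; a healthy, reachable
# server answers "imok". Returns success only on that exact reply.
zk_alive() {
  local reply
  reply=$(printf 'ruok' | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null)
  [ "$reply" = "imok" ]
}

# Usage (requires a reachable ZooKeeper server):
#   zk_alive && echo "ZooKeeper OK" || echo "ZooKeeper NOT responding"
```

This only confirms TCP connectivity and a live server process; service health in a secured cluster should still be verified on FusionInsight Manager as described in step 6.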
Check whether the EsMaster instances are running properly.

8. Choose Cluster > Name of the desired cluster > Services > Elasticsearch > Instance to check whether the EsMaster instances are healthy.
   - If yes, no further action is required.
   - If no, go to 9.
9. Locate the EsMaster instance whose Running Status is not Normal and choose More > Restart Instance to restart it.
   Note: You need to enter the FusionInsight Manager administrator password to restart an instance.
10. Check whether the alarm is cleared from the alarm list.
    - If yes, no further action is required.
    - If no, go to 11.
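After a restart, cluster recovery can also be confirmed from the command line via Elasticsearch's standard _cluster/health API. A sketch under stated assumptions: the URL below uses the stock Elasticsearch HTTP port 9200 and plain HTTP, whereas a real FusionInsight deployment typically requires authentication and may use a different port, so adjust the curl invocation accordingly.

```shell
#!/bin/bash
# Assumed endpoint -- replace with your cluster's address and auth options.
ES_URL="${ES_URL:-http://127.0.0.1:9200}"

# Extract the "status" field (green/yellow/red) from a
# _cluster/health JSON response, without any JSON tooling.
health_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Usage (requires a reachable cluster):
#   curl -s "$ES_URL/_cluster/health" | health_status
```

A "green" or "yellow" status means the cluster is serving requests again; no response at all is consistent with the alarm still being active.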
Collect fault information.

11. On FusionInsight Manager, choose O&M > Log > Download.
12. Select Elasticsearch in the required cluster from the Service list.
13. In the upper right corner, set Start Date and End Date for log collection to 1 hour before and after the alarm generation time, respectively. Then, click Download.
14. Contact O&M engineers and provide the collected logs.
Alarm Clearance
After the fault is rectified, the system automatically clears this alarm.
Related Information
None.