
ALM-43200 Elasticsearch Service Unavailable

Alarm Description

The system checks the Elasticsearch service availability every 60 seconds. This alarm is generated when the system detects that the Elasticsearch service is unavailable. This alarm is cleared when the Elasticsearch service recovers.
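
Before following the handling procedure, you can optionally confirm the symptom from the command line by querying the Elasticsearch REST API. The following is a minimal sketch, assuming a non-security-mode cluster reachable over plain HTTP on the default port 9200; the host name es-node-1 is a placeholder. In security mode, the request typically requires HTTPS and authentication, so adapt it to your environment.

    # Query overall cluster health (placeholder host, default port 9200, plain HTTP).
    # A healthy service returns JSON whose "status" is green or yellow;
    # no response, a timeout, or "red" is consistent with this alarm.
    curl -s "http://es-node-1:9200/_cluster/health?pretty"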

Alarm Attributes

Alarm ID: 43200
Alarm Severity: Critical
Alarm Type: Quality of service
Service Type: Elasticsearch
Auto Cleared: Yes

Alarm Parameters

Location Information

  Source: Specifies the cluster for which the alarm is generated.
  ServiceName: Specifies the service for which the alarm is generated.
  RoleName: Specifies the role for which the alarm is generated.
  HostName: Specifies the host for which the alarm is generated.

Impact on the System

The Elasticsearch service is unavailable, and index data cannot be read or written.
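
As an illustration of the impact, a basic write request such as the sketch below would typically fail or time out while the service is unavailable. The host name, port, and index name are placeholders and assume a non-security-mode cluster.

    # Attempt to index a document (placeholder host es-node-1, port 9200, index test_index).
    # While the Elasticsearch service is unavailable, this request fails or times out.
    curl -s -XPUT "http://es-node-1:9200/test_index/_doc/1" \
      -H 'Content-Type: application/json' \
      -d '{"field": "value"}'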

Possible Causes

  • The network connection is abnormal.
  • A service that Elasticsearch depends on is unavailable.
  • The EsMaster instance is abnormal.

Handling Procedure

Check whether the network is normal.

  1. Choose Cluster > Name of the desired cluster > Services > Elasticsearch > Instance on FusionInsight Manager to view the service plane IP addresses of the EsMaster instances.
  2. Log in to the server where any EsMaster instance resides as user root.
  3. Run the ping IP address of another EsMaster instance command to check whether the servers of the other EsMaster instances are reachable (a connectivity sketch is provided after this procedure).

    • If yes, go to 6.
    • If no, go to 4.

  4. Contact the system administrator to rectify the network fault.
  5. Check whether the alarm is cleared from the alarm list.

    • If yes, no further action is required.
    • If no, go to 6.
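
The following connectivity sketch corresponds to step 3. The IP addresses are placeholders; replace them with the service plane IP addresses of the other EsMaster instances obtained in step 1.

    # Ping each of the other EsMaster service plane IP addresses a few times.
    # 192.168.0.11 and 192.168.0.12 are placeholder addresses.
    for ip in 192.168.0.11 192.168.0.12; do
        ping -c 3 "$ip" >/dev/null && echo "$ip reachable" || echo "$ip NOT reachable"
    done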

Check whether the services that Elasticsearch depends on are normal.

  6. Choose Cluster > Name of the desired cluster > Services > ZooKeeper to check whether the ZooKeeper service is healthy and whether ZooKeeper can be connected (a connectivity sketch is provided after this procedure). For details, see the ZooKeeper operation documentation. If the cluster is in security mode, also check whether the KrbServer service is running properly.

    • If yes, go to 8.
    • If no, repair the faulty service until it runs properly.

  7. Check whether the alarm is cleared from the alarm list.

    • If yes, no further action is required.
    • If no, go to 8.
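
As a supplement to step 6, ZooKeeper connectivity can also be probed from the command line. This is a sketch only: the host name and client port 2181 are assumptions, the ruok four-letter command may be disabled by the ZooKeeper configuration, and in security mode the client must first complete Kerberos authentication.

    # Probe a ZooKeeper server directly (placeholder host zk-node-1, default client port 2181).
    # A reply of "imok" means the server is up and serving requests.
    echo ruok | nc -w 3 zk-node-1 2181

    # Alternatively, open an interactive session with the ZooKeeper command-line client
    # (the script location depends on your installation).
    zkCli.sh -server zk-node-1:2181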

Check whether the EsMaster instances are running properly.

  8. Choose Cluster > Name of the desired cluster > Services > Elasticsearch > Instance to check whether the EsMaster instances are healthy (a cluster status sketch is provided after this procedure).

    • If yes, no further action is required.
    • If no, go to 9.

  9. Locate the EsMaster instance whose Running Status is not Normal and choose More > Restart Instance to restart the instance.

    You need to enter the administrator password for FusionInsight Manager to restart an instance.

  10. Check whether the alarm is cleared from the alarm list.

    • If yes, no further action is required.
    • If no, go to 11.
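
If the cluster still responds over HTTP, the elected master and the joined nodes can be cross-checked against step 8 from the command line. The sketch assumes a non-security-mode cluster, the default port 9200, and a placeholder host name.

    # Show which node is currently the elected master; if no master has been
    # elected, the request fails or returns an error, which matches this alarm.
    curl -s "http://es-node-1:9200/_cat/master?v"

    # List the nodes that have joined the cluster.
    curl -s "http://es-node-1:9200/_cat/nodes?v"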

Collect fault information.

  11. On FusionInsight Manager, choose O&M > Log > Download.
  12. Select Elasticsearch in the required cluster from the Service list.
  13. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 1 hour before and after the alarm generation time, respectively. Then, click Download.
  14. Contact O&M engineers and provide the collected logs.

Alarm Clearance

After the fault is rectified, the system automatically clears this alarm.

Related Information

None.