
ALM-12007 Process Fault

Description

This alarm is generated when the process health check module detects that the process connection status is Bad three consecutive times. The process health check module checks the process status every 5 seconds.

This alarm is cleared when the process can be connected.
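
As a mental model of this check-and-clear behavior, the following shell sketch probes a process every 5 seconds and raises the alarm after three consecutive failures. The PID file path and the probe method are illustrative assumptions, not the actual implementation of the health check module:

    #!/bin/bash
    # Sketch only: the real health check module is internal to FusionInsight Manager.
    PID_FILE="/var/run/example_role.pid"   # hypothetical PID file
    BAD_COUNT=0
    ALARM_RAISED=false
    while true; do
        if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
            BAD_COUNT=0                    # process reachable: reset the counter
            if [ "$ALARM_RAISED" = true ]; then
                echo "Clear ALM-12007: process can be connected"
                ALARM_RAISED=false
            fi
        else
            BAD_COUNT=$((BAD_COUNT + 1))   # one more consecutive Bad result
            if [ "$BAD_COUNT" -ge 3 ] && [ "$ALARM_RAISED" = false ]; then
                echo "Raise ALM-12007: connection status Bad 3 consecutive times"
                ALARM_RAISED=true
            fi
        fi
        sleep 5                            # the module checks every 5 seconds
    done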

Attribute

Alarm ID: 12007
Alarm Severity: Major
Auto Clear: Yes

Parameters

Source: Specifies the cluster or system for which the alarm is generated.
ServiceName: Specifies the service for which the alarm is generated.
RoleName: Specifies the role for which the alarm is generated.
HostName: Specifies the host for which the alarm is generated.

Impact on the System

The impact varies depending on the instance that is faulty.

For example, if an HDFS instance is faulty, the impacts are as follows (commands for checking the status of these instances are sketched after this list):

  • If a DataNode instance is faulty, read and write operations cannot be performed on data blocks stored on the DataNode, which may cause data loss or unavailability. However, data in HDFS is redundant. Therefore, the client can access data from other DataNodes.
  • If an HttpFS instance is faulty, the client cannot access files in HDFS over HTTP. However, the client can use other methods (such as shell commands) to access files in HDFS.
  • If a JournalNode instance is faulty, the namespace and edit logs cannot be written to disk on that node, which may cause data loss or unavailability. However, HDFS keeps copies on the other JournalNodes, so the faulty JournalNode can be recovered and its data resynchronized.
  • If a NameNode deployed in active/standby mode is faulty, an active/standby switchover occurs. If only one NameNode is deployed, the client cannot read or write any HDFS data. On MRS, NameNodes must be deployed in two-node mode.
  • If a Router instance is faulty, the client cannot access data on the router. However, the client can use other Routers or directly access data on the backend NameNode.
  • If a ZKFC instance is faulty, automatic NameNode failover is unavailable. As a result, if the active NameNode is faulty, the client cannot read data from or write data to HDFS. In this case, you need to restore automatic failover through the other available ZKFC instances to recover the HDFS cluster.
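
For the HDFS examples above, the standard HDFS command line can confirm the status of the instances involved. A minimal sketch, assuming an HA cluster whose NameService IDs are nn1 and nn2 (substitute the IDs used in your cluster):

    # Query the HA state (active or standby) of each NameNode.
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2

    # Report overall capacity and the list of live and dead DataNodes.
    hdfs dfsadmin -report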

Possible Causes

  • The instance process is abnormal.
  • The disk space is insufficient.

If a large number of process fault alarms are generated within a short period, files in the installation directory may have been deleted by mistake or the permissions on the directory may have been modified.
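
When process fault alarms arrive in bulk like this, scanning the component installation directories for unexpected ownership is a quick first check. A minimal sketch, assuming the directory layout under ${BIGDATA_HOME} shown in the procedure below:

    # List entries under the installation base whose owner is not omm
    # or whose group is not ficommon.
    find ${BIGDATA_HOME}/FusionInsight_Current -maxdepth 1 \
        \( ! -user omm -o ! -group ficommon \) -ls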

Procedure

Check whether the instance process is abnormal.

  1. In the FusionInsight Manager portal, click O&M > Alarm > Alarms, expand the row where the alarm is located, and record the host name and service name for which the alarm is generated.
  2. On the Alarms page, check whether ALM-12006 Node Fault is generated.

    • If yes, go to 3.
    • If no, go to 4.

  3. Handle the alarm according to ALM-12006 Node Fault.
  4. Log in to the host for which the alarm is generated as user root. Check whether the user, user group, and permission of the installation directory of the alarm role are correct: the user and user group must be omm:ficommon, and the permission must be 750.

    For example, the NameNode installation directory is ${BIGDATA_HOME}/FusionInsight_Current/1_8_NameNode/etc.

    • If yes, go to 6.
    • If no, go to 5.

  5. Run the following commands to set the permission to 750 and User:Group to omm:ficommon:

    chmod 750 <folder_name>

    chown omm:ficommon <folder_name>
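
    To verify the result of 4 and 5 in one line, you can print the owner, group, and permission mode of the directory (stat format syntax as in GNU coreutils; the expected output is omm:ficommon 750):

    stat -c '%U:%G %a' <folder_name>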

  6. Wait for 5 minutes. In the alarm list, check whether ALM-12007 Process Fault is cleared.

    • If yes, no further action is required.
    • If no, go to 7.

  7. Log in to the active OMS node as user root and run the following command to open the configurations.xml file. In the command, <Service name> is the service name recorded in 1.

    vi $BIGDATA_HOME/components/current/<Service name>/configurations.xml

    Search for the keyword healthMonitor.properties, find the health check configuration item corresponding to the instance that reports the alarm, and record the interface or script path specified by monitor.info.

    Check the logs recorded in the interface or script and rectify the fault.
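
    To locate the configuration item without paging through the file in vi, a grep along the following lines also works (<Service name> is the same placeholder as in the command above):

    grep -n -A 5 "healthMonitor.properties" $BIGDATA_HOME/components/current/<Service name>/configurations.xml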

  8. Wait for 5 minutes. In the alarm list, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 9.

Check whether disk space is sufficient.

  9. On FusionInsight Manager, check whether the alarm list contains ALM-12017 Insufficient Disk Capacity.

    • If yes, go to 10.
    • If no, go to 13.

  10. Rectify the fault by following the steps provided in ALM-12017 Insufficient Disk Capacity. A quick manual check of disk usage is sketched after this list.
  11. Wait for 5 minutes. In the alarm list, check whether ALM-12017 Insufficient Disk Capacity is cleared.

    • If yes, go to 12.
    • If no, go to 13.

  12. Wait for 5 minutes. In the alarm list, check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 13.
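
As a quick supplement to 9 to 12, disk usage can also be checked manually on the host for which the alarm is generated (plain df; investigate any partition holding the role's installation or data directories whose Use% is close to the ALM-12017 threshold configured for the cluster):

    df -h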

Collect fault information.

  13. On FusionInsight Manager, choose O&M > Log > Download.
  14. Based on the service name recorded in 1, select the component and NodeAgent under Service, and click OK.
  15. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
  16. Contact O&M personnel and send the collected log information.

Alarm Clearing

After the fault is rectified, the system automatically clears this alarm.

Related Information

None