ALM-12007 Process Fault
Alarm Description
The process health check module checks the process status every 5 seconds. This alarm is generated when the process health check module fails to connect to the process for three consecutive checks.
This alarm is cleared when the connection to the process is restored.
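The following is a minimal shell sketch of the detection rule described above, purely for illustration; it is not the actual health check implementation, and <process_name> is a placeholder:

failures=0
while true; do
  if pgrep -f "<process_name>" > /dev/null; then
    failures=0                        # check succeeded; reset the failure counter
  else
    failures=$((failures + 1))        # one more consecutive failed check
  fi
  if [ "$failures" -ge 3 ]; then      # three consecutive failures trigger the alarm condition
    echo "ALM-12007 condition met for <process_name>"
    failures=0
  fi
  sleep 5                             # 5-second check interval
done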
Alarm Attributes
| Alarm ID | Alarm Severity | Alarm Type | Service Type | Auto Cleared |
|---|---|---|---|---|
| 12007 | Major | Quality of service | FusionInsight Manager | Yes |
Alarm Parameters
| Type | Parameter | Description |
|---|---|---|
| Location Information | Source | Specifies the cluster or system for which the alarm is generated. |
| Location Information | ServiceName | Specifies the service for which the alarm is generated. |
| Location Information | RoleName | Specifies the role for which the alarm is generated. |
| Location Information | HostName | Specifies the host for which the alarm is generated. |
| Additional Information | Details | Specifies alarm details. |
Impact on the System
The impact varies depending on the instance that is faulty.
For example, if an HDFS instance is faulty, the impacts are as follows:
- If a DataNode instance is faulty, read and write operations cannot be performed on data blocks stored on the DataNode, which may cause data loss or unavailability. However, data in HDFS is redundant. Therefore, the client can access data from other DataNodes.
- If an HttpFS instance is faulty, the client cannot access files in HDFS over HTTP. However, the client can use other methods (such as shell commands) to access files in HDFS.
- If a JournalNode instance is faulty, namespace edits and data logs cannot be written to disk, which may cause data loss or unavailability. However, HDFS stores backups on other JournalNodes. Therefore, the faulty JournalNode can be recovered and data can be rebalanced.
- If a NameNode deployed in active/standby mode is faulty, an active/standby switchover occurs. If only one NameNode is deployed, the client cannot read or write any HDFS data. On MRS, NameNodes must be deployed in two-node mode.
- If a Router instance is faulty, the client cannot access data through that Router. However, the client can use other Routers or directly access data on the backend NameNode.
- If a ZKFC instance is faulty, the NameNode cannot automatically fail over. As a result, the client may be unable to read data from or write data to HDFS. In this case, enable automatic failover on other available ZKFC instances to restore the HDFS cluster.
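If it is unclear which HDFS instance is faulty, the following standard HDFS client commands can help check the instance status. This is a minimal sketch; run the commands from an HDFS client as a user with HDFS administrator rights, and note that the NameNode IDs nn1 and nn2 are placeholders that depend on the deployment:

hdfs dfsadmin -report               # lists live and dead DataNodes
hdfs haadmin -getServiceState nn1   # shows whether the NameNode with ID nn1 is active or standby
hdfs haadmin -getServiceState nn2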
Possible Causes
- The instance process is abnormal.
- The disk space is insufficient.
If a large number of process fault alarms are reported in the same period, files in the installation directory may have been deleted by mistake or their permissions may have been modified.
Handling Procedure
Check whether the instance process is abnormal.
1. Log in to FusionInsight Manager and choose O&M > Alarm > Alarms. Expand the row that contains the target alarm and record the service name in Location Information. Click the hostname to view the IP address of the host for which the alarm is generated.
2. On the Alarms page, check whether the "ALM-12006 Abnormal NodeAgent Process" alarm is reported.
  - If yes, go to 3.
  - If no, go to 4.
3. Handle the alarm by following the procedure provided in "ALM-12006 Abnormal NodeAgent Process".
4. Log in to the host for which the alarm is generated as user root. Check whether the user, user group, and permission of the installation directory where the alarm role is deployed are correct. The correct user, user group, and permission are omm, ficommon, and 750, respectively.
For example, the NameNode installation directory is ${BIGDATA_HOME}/FusionInsight_Current/1_8_NameNode/etc.
  - If they are correct, go to 7.
  - If they are incorrect, go to 5.
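To verify the current settings before changing anything, the owner, group, and permission of the directory can be checked as follows (the path reuses the NameNode example above; replace it with the installation directory of the faulty role):

ls -ld ${BIGDATA_HOME}/FusionInsight_Current/1_8_NameNode/etc
stat -c '%U:%G %a' ${BIGDATA_HOME}/FusionInsight_Current/1_8_NameNode/etc   # expected output: omm:ficommon 750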
5. Run the following commands to set the permission to 750 and User:Group to omm:ficommon:
chmod 750 <folder_name>
chown omm:ficommon <folder_name>
6. Wait 5 minutes and check whether this alarm is cleared.
  - If yes, no further action is required.
  - If no, go to 7.
7. Log in to the active OMS node as user root and run the following command to view the configurations.xml file. In the command, <service_name> indicates the service name queried in 1.
vi $BIGDATA_HOME/components/current/<service_name>/configurations.xml
Search for healthMonitor.properties, locate the health check configuration of the instance for which the alarm is generated, and record the interface or script path specified by monitor.info.
View the logs recorded by the interface or script and rectify the fault.
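As an alternative to searching within vi, the monitor.info entry can usually be located directly with grep (a sketch; replace <service_name> with the service name queried in 1, and note that the exact layout of configurations.xml may differ between versions):

grep -n -B 5 "monitor.info" $BIGDATA_HOME/components/current/<service_name>/configurations.xml   # print matches with 5 lines of leading context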
8. Wait 5 minutes and check whether the alarm is cleared.
  - If yes, no further action is required.
  - If no, go to 9.
Check whether the disk space is insufficient.
9. On FusionInsight Manager, check whether the alarm list contains "ALM-12017 Insufficient Disk Capacity".
  - If yes, go to 10.
  - If no, go to 13.
10. Rectify the fault by following the steps provided in "ALM-12017 Insufficient Disk Capacity".
11. Wait 5 minutes and check whether the "ALM-12017 Insufficient Disk Capacity" alarm is cleared.
12. Wait 5 minutes and check whether this alarm is cleared.
  - If yes, no further action is required.
  - If no, go to 13.
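To check the disk usage on the alarmed host directly, log in to the host recorded in 1 and run the following standard commands (a minimal sketch; ${BIGDATA_HOME} is the product installation directory used elsewhere in this procedure):

df -h                    # usage of every mounted file system
df -h ${BIGDATA_HOME}    # usage of the file system that holds the installation directory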
Collect fault information.
13. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
14. Based on the service name obtained in 1, select the corresponding component and NodeAgent in the service list, and click OK.
15. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
16. Contact O&M engineers and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None.