ALM-12006 Node Fault
Alarm Description
Controller checks the NodeAgent heartbeat every 30 seconds. If Controller does not receive heartbeat messages from a NodeAgent, it attempts to restart the NodeAgent process. This alarm is generated if the NodeAgent fails to be restarted three consecutive times.
This alarm is cleared when Controller can properly receive the status report of the NodeAgent.

In MRS 3.3.0 and later versions, the alarm name is changed to NodeAgent Process Is Abnormal.
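The detection policy described above (a 30-second heartbeat check and an alarm after three failed restart attempts) can be sketched as follows. This is an illustrative sketch only; `check_heartbeat` and `restart_agent` are stand-in functions, not real MRS commands, and here they are hard-coded to fail so the alarm path is shown.

```shell
#!/bin/sh
# Sketch of the Controller policy: heartbeat every 30 s, alarm after
# three consecutive failed restarts. The two functions below are
# stand-ins that simulate a node that never recovers.
check_heartbeat() { return 1; }   # simulate: no heartbeat received
restart_agent()   { return 1; }   # simulate: restart attempt fails

failures=0
while [ "$failures" -lt 3 ]; do
  if check_heartbeat; then
    failures=0                          # heartbeat received, reset counter
  else
    restart_agent || failures=$((failures + 1))
  fi
  # sleep 30   # the real Controller waits 30 s between checks
done
echo "ALM-12006: NodeAgent restart failed 3 times, raising alarm"
```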
Alarm Attributes
| Alarm ID | Alarm Severity | Auto Cleared |
|---|---|---|
| 12006 | Major | Yes |
Alarm Parameters
| Parameter | Description |
|---|---|
| Source | Specifies the cluster or system for which the alarm was generated. |
| ServiceName | Specifies the service for which the alarm was generated. |
| RoleName | Specifies the role for which the alarm was generated. |
| HostName | Specifies the host for which the alarm was generated. |
Impact on the System
The NodeAgent process is abnormal and cannot report heartbeat messages to the platform. If the fault is caused by a network fault, a hardware fault, or an SSH mutual-trust failure, component services may become abnormal.
Possible Causes
- The network is disconnected, the hardware is faulty, or the operating system runs slowly.
- The memory of the NodeAgent process is insufficient.
- The NodeAgent process is faulty.
Handling Procedure
Check whether the network is disconnected, whether the hardware is faulty, or whether the operating system runs commands slowly.
- On FusionInsight Manager, choose O&M > Alarm > Alarms. On the page that is displayed, expand the row containing the alarm, click the host name, and view the address of the host for which the alarm was generated.
- Log in to the active management node as user root.
If the faulty node is the active management node and cannot be logged in to, the network of the active management node may be faulty. In this case, go to Step 4.
- Run the following command to check whether the faulty node is reachable:
ping IP address of the faulty host
- Contact the network administrator to check whether the network is faulty.
- Rectify the network fault and check whether the alarm is cleared from the alarm list.
- If yes, no further action is required.
- If no, go to Step 6.
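The reachability check in the step above can be sketched as follows. `FAULTY_HOST_IP` is a placeholder; substitute the address recorded from the alarm details (the loopback address is used here only so the sketch is self-contained).

```shell
#!/bin/sh
# Sketch: check whether the faulty node is reachable from the active
# management node. FAULTY_HOST_IP is an example value, not a real node.
FAULTY_HOST_IP="127.0.0.1"

# -c 3: send three probes; -W 2: wait at most 2 s per reply
if ping -c 3 -W 2 "$FAULTY_HOST_IP" > /dev/null 2>&1; then
  result="reachable"
else
  result="unreachable"   # contact the network administrator
fi
echo "$FAULTY_HOST_IP is $result"
```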
- Contact the hardware administrator to check whether the hardware (CPU or memory) of the node is faulty.
- Repair or replace faulty components and restart the node. Check whether the alarm is cleared.
- If yes, no further action is required.
- If no, go to Step 8.
- If a large number of node faults are reported in the cluster, the floating IP addresses may be abnormal. As a result, Controller cannot detect the NodeAgent heartbeat.
Log in to any management node in the cluster as user root and view the /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log file to check whether the logs generated one to two minutes before and after the faults occur are complete.
cat /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log
For example, a complete log is in the following format:
2017-12-09 04:10:51,000 INFO (floatip) Read from ${BIGDATA_HOME}/om-server_*/om/etc/om/routeSetConf.ini,value is : yes
2017-12-09 04:10:51,000 INFO (floatip) check wsNetExport : eth0 is up.
2017-12-09 04:10:51,000 INFO (floatip) check omNetExport : eth0 is up.
2017-12-09 04:10:51,000 INFO (floatip) check wsInterface : eth0:oms, wsFloatIp: XXX.XXX.XXX.XXX.
2017-12-09 04:10:51,000 INFO (floatip) check omInterface : eth0:oms, omFloatIp: XXX.XXX.XXX.XXX.
2017-12-09 04:10:51,000 INFO (floatip) check wsFloatIp : XXX.XXX.XXX.XXX is reachable.
2017-12-09 04:10:52,000 INFO (floatip) check omFloatIp : XXX.XXX.XXX.XXX is reachable.
- Check whether the omNetExport entry is missing after the wsNetExport check, or whether the interval between the two log entries exceeds 10 seconds.
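The interval check above can be sketched as follows. The two sample lines stand in for real floatip.log entries; on a live node, point the script at /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log instead of the temporary file.

```shell
#!/bin/sh
# Sketch: measure the gap between the wsNetExport and omNetExport entries.
# A temporary sample log is used so the sketch is self-contained; the
# 14-second gap below deliberately exceeds the 10-second threshold.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2017-12-09 04:10:51,000 INFO (floatip) check wsNetExport : eth0 is up.
2017-12-09 04:11:05,000 INFO (floatip) check omNetExport : eth0 is up.
EOF

# Extract the HH:MM:SS timestamp of each entry and convert it to seconds.
ws=$(grep 'check wsNetExport' "$LOG" | head -1 | cut -d' ' -f2 | cut -d',' -f1)
om=$(grep 'check omNetExport' "$LOG" | head -1 | cut -d' ' -f2 | cut -d',' -f1)
ws_s=$(echo "$ws" | awk -F: '{print $1*3600 + $2*60 + $3}')
om_s=$(echo "$om" | awk -F: '{print $1*3600 + $2*60 + $3}')
gap=$((om_s - ws_s))
[ "$gap" -gt 10 ] && echo "WARN: omNetExport printed ${gap}s after wsNetExport"
rm -f "$LOG"
```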
- Check the /var/log/messages file of the OS to determine whether sssd restarted frequently or nscd exception information was displayed when the fault occurred.
cat /var/log/messages
Example of sssd restart information:
Feb 7 11:38:16 10-132-190-105 sssd[pam]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[be[default]]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[be[default]]: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[pam]: Starting up
Example of the nscd exception information:
Feb 11 11:44:42 10-120-205-33 nscd: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:43 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:44 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.92:21780: Can't contact LDAP server
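The log scan above can be sketched as follows. A temporary sample log is used so the sketch is self-contained; on a real node, point `MSG_LOG` at the OS log file instead.

```shell
#!/bin/sh
# Sketch: count sssd restarts and nscd/LDAP bind failures in the OS log.
# MSG_LOG points at a sample here, not at the real OS log file.
MSG_LOG=$(mktemp)
cat > "$MSG_LOG" <<'EOF'
Feb 7 11:38:16 10-132-190-105 sssd: Starting up
Feb 11 11:44:42 10-120-205-33 nscd: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
EOF

sssd_restarts=$(grep -c 'sssd: Starting up' "$MSG_LOG")
ldap_failures=$(grep -c 'nss_ldap: failed to bind' "$MSG_LOG")
echo "sssd restarts: $sssd_restarts, LDAP bind failures: $ldap_failures"
rm -f "$MSG_LOG"
```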
- Check whether the LdapServer node is faulty, for example, whether its service IP address is unreachable or the network latency is too high. If the fault occurs periodically, troubleshoot it when it occurs. Run the top command to check whether any software error exists.
To obtain information about the LdapServer node, log in to Manager (a cluster management tool), choose Clusters > Services > LdapServer, and click the Instances tab.
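A quick reachability probe of the LdapServer service port can be sketched as follows. The host and port are placeholder values taken from the example log above; substitute the real LdapServer address and port obtained from Manager.

```shell
#!/bin/sh
# Sketch: probe the LdapServer service IP and port. Both values below are
# examples from the sample log, not a real deployment.
LDAP_HOST="10.120.205.55"
LDAP_PORT="21780"

# -z: scan without sending data; -w 2: give up after 2 seconds
if nc -z -w 2 "$LDAP_HOST" "$LDAP_PORT" 2>/dev/null; then
  status="reachable"
else
  status="unreachable"   # check the network and LdapServer node state
fi
echo "LdapServer $LDAP_HOST:$LDAP_PORT is $status"
```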
Check whether the memory of the NodeAgent process is insufficient.
- Log in to the faulty node as user root and run the following command to view the NodeAgent process logs:
vi /var/log/Bigdata/nodeagent/scriptlog/agent_gc.log.*.current
- Check whether the log file contains an error indicating that the metaspace size or heap memory size is insufficient.
Generally, the keyword for insufficient metaspace is java.lang.OutOfMemoryError: Metaspace, and the keyword for insufficient heap memory is java.lang.OutOfMemoryError: Java heap space.
The following is an example log containing an error:
java.lang.OutOfMemoryError: Metaspace at java.lang.ClassLoader.defineClass1(Native Method) ...
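The keyword check above can be sketched as follows. A temporary sample log is used so the sketch is self-contained; on a real node, run the grep against /var/log/Bigdata/nodeagent/scriptlog/agent_gc.log.*.current instead.

```shell
#!/bin/sh
# Sketch: search the NodeAgent GC log for the two out-of-memory keywords
# named above. GC_LOG points at a sample, not at the real GC log.
GC_LOG=$(mktemp)
cat > "$GC_LOG" <<'EOF'
java.lang.OutOfMemoryError: Metaspace
        at java.lang.ClassLoader.defineClass1(Native Method)
EOF

found=no
if grep -qE 'OutOfMemoryError: (Metaspace|Java heap space)' "$GC_LOG"; then
  found=yes
  echo "NodeAgent memory insufficient: increase nodeagent.Xms/nodeagent.Xmx"
fi
rm -f "$GC_LOG"
```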
- Switch to user omm. Edit the corresponding file based on the cluster version, increase the values of nodeagent.Xms (NodeAgent initial heap memory) and nodeagent.Xmx (maximum heap memory), and save the modification.
su - omm
The path of the file containing the parameters is as follows:
- For MRS clusters earlier than version 3.2.1:
vi /opt/Bigdata/om-agent/nodeagent/bin/nodeagent_ctl.sh
- For MRS clusters of version 3.2.1 or later:
vi $NODE_AGENT_HOME/etc/agent/nodeagent.properties
For example, change the value of nodeagent.Xms from 1024m to 2048m.
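The edit above can be sketched as follows for the MRS 3.2.1+ properties layout (assumed here; earlier versions use nodeagent_ctl.sh instead). A temporary sample file stands in for the real nodeagent.properties, and both values are doubled as in the example.

```shell
#!/bin/sh
# Sketch: double the NodeAgent heap settings. PROPS is a sample file,
# not the real $NODE_AGENT_HOME/etc/agent/nodeagent.properties.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
nodeagent.Xms=1024m
nodeagent.Xmx=2048m
EOF

# Double both values (1024m -> 2048m for Xms, 2048m -> 4096m for Xmx).
sed -i 's/^nodeagent\.Xms=.*/nodeagent.Xms=2048m/' "$PROPS"
sed -i 's/^nodeagent\.Xmx=.*/nodeagent.Xmx=4096m/' "$PROPS"

new_xms=$(grep '^nodeagent\.Xms=' "$PROPS")
new_xmx=$(grep '^nodeagent\.Xmx=' "$PROPS")
echo "$new_xms"
echo "$new_xmx"
rm -f "$PROPS"
```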
- For MRS clusters earlier than version 3.2.1:
- Restart NodeAgent:
Command for stopping NodeAgent:
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/stop-agent.sh
Command for starting NodeAgent:
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/start-agent.sh
- Wait a moment and check whether this alarm is cleared.
- If yes, no further action is required.
- If no, go to Step 17.
Check whether the NodeAgent process is faulty.
- Log in to the faulty node as user root and run the following command:
Command for switching to user omm:
su - omm
Command for viewing the NodeAgent process:
ps -ef | grep "Dprocess.name=nodeagent" | grep -v grep
- Check whether the process query command output is empty.
- View the NodeAgent startup and run logs to locate the fault. After the fault is rectified, go to Step 20.
- NodeAgent run log:
/var/log/Bigdata/nodeagent/agentlog/agent.log
- NodeAgent startup log:
/var/log/Bigdata/nodeagent/scriptlog/nodeagent_ctl.log
- Restart the NodeAgent service:
Command for stopping NodeAgent:
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/stop-agent.sh
Command for starting NodeAgent:
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/start-agent.sh
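The process check in the steps above can be sketched as follows; on a real node, run it as user omm on the faulty host.

```shell
#!/bin/sh
# Sketch: decide the next step from the NodeAgent process query result.
proc=$(ps -ef | grep "Dprocess.name=nodeagent" | grep -v grep)
if [ -z "$proc" ]; then
  msg="NodeAgent is not running: check the startup and run logs"
else
  msg="NodeAgent is running: restart it with stop-agent.sh and start-agent.sh"
fi
echo "$msg"
```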
Collect fault information.
- On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
- Select the following nodes from Services and click OK.
- NodeAgent
- Controller
- OS
- Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
- Contact O&M personnel and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None