ALM-12006 Node Fault
Alarm Description
Controller checks the NodeAgent heartbeat every 30 seconds. If Controller does not receive heartbeat messages from a NodeAgent, it attempts to restart the NodeAgent process. This alarm is generated if the restart fails three consecutive times.
This alarm is cleared when Controller can properly receive the status report of the NodeAgent.
In MRS 3.3.0 and later versions, the alarm name is changed to NodeAgent Process Is Abnormal.
Alarm Attributes
Alarm ID | Alarm Severity | Auto Cleared
---|---|---
12006 | Major | Yes
Alarm Parameters
Parameter | Description
---|---
Source | Specifies the cluster or system for which the alarm was generated.
ServiceName | Specifies the service for which the alarm was generated.
RoleName | Specifies the role for which the alarm was generated.
HostName | Specifies the host for which the alarm was generated.
Impact on the System
When the NodeAgent process is abnormal, heartbeat messages cannot be reported to the platform. If the problem is caused by a network fault, a hardware fault, or broken SSH mutual trust, component services cannot run normally.
Possible Causes
- The network is disconnected, the hardware is faulty, or the operating system runs slowly.
- The memory of the NodeAgent process is insufficient.
- The NodeAgent process is faulty.
Handling Procedure
Check whether the network is disconnected, whether the hardware is faulty, or whether the operating system runs commands slowly.
- On FusionInsight Manager, choose O&M > Alarm > Alarms. In the row containing the alarm, click the host name to view the IP address of the host for which the alarm is generated.
- Log in to the active management node as user root.
If the faulty node is the active management node and cannot be logged in to, the network of the active management node may be faulty. In this case, go to 4.
- Run the ping command with the IP address of the faulty host to check whether the faulty node is reachable.
- Contact the network administrator to check whether the network is faulty.
- Rectify the network fault and check whether the alarm is cleared from the alarm list.
- If yes, no further action is required.
- If no, go to 6.
- Contact the hardware administrator to check whether the hardware (CPU or memory) of the node is faulty.
- Repair or replace faulty components and restart the node. Check whether the alarm is cleared.
- If yes, no further action is required.
- If no, go to 8.
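The reachability check above can be scripted roughly as follows. This is a sketch, not official tooling: HOST_IP is a placeholder, and 127.0.0.1 stands in for the faulty host's actual address.

```shell
# Minimal reachability probe for the faulty node (sketch).
# HOST_IP is a placeholder; substitute the IP address found on the alarm page.
HOST_IP=127.0.0.1
if ping -c 1 -W 2 "$HOST_IP" > /dev/null 2>&1; then
  reachable=yes
else
  reachable=no
fi
echo "host $HOST_IP reachable: $reachable"
```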
- If a large number of node faults are reported in the cluster, the floating IP addresses may be abnormal. As a result, Controller cannot detect the NodeAgent heartbeat.
Log in to any management node and view the /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log log to check whether the logs generated one to two minutes before and after the faults occur are complete.
For example, a complete log is in the following format:

```
2017-12-09 04:10:51,000 INFO (floatip) Read from ${BIGDATA_HOME}/om-server_*/om/etc/om/routeSetConf.ini,value is : yes
2017-12-09 04:10:51,000 INFO (floatip) check wsNetExport : eth0 is up.
2017-12-09 04:10:51,000 INFO (floatip) check omNetExport : eth0 is up.
2017-12-09 04:10:51,000 INFO (floatip) check wsInterface : eth0:oms, wsFloatIp: XXX.XXX.XXX.XXX.
2017-12-09 04:10:51,000 INFO (floatip) check omInterface : eth0:oms, omFloatIp: XXX.XXX.XXX.XXX.
2017-12-09 04:10:51,000 INFO (floatip) check wsFloatIp : XXX.XXX.XXX.XXX is reachable.
2017-12-09 04:10:52,000 INFO (floatip) check omFloatIp : XXX.XXX.XXX.XXX is reachable.
```
- Check whether the omNetExport entry is printed after the wsNetExport check, and whether the interval between the two entries exceeds 10 seconds.
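The interval check can be automated roughly as sketched below. The two sample lines (with an artificial 14-second gap) stand in for real floatip.log entries, and GNU date is assumed for timestamp parsing.

```shell
# Compute the gap between the wsNetExport and omNetExport log entries.
# The sample lines stand in for /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log.
log='2017-12-09 04:10:51,000 INFO (floatip) check wsNetExport : eth0 is up.
2017-12-09 04:11:05,000 INFO (floatip) check omNetExport : eth0 is up.'

ws_ts=$(echo "$log" | grep wsNetExport | cut -d',' -f1)
om_ts=$(echo "$log" | grep omNetExport | cut -d',' -f1)

# GNU date converts "YYYY-MM-DD HH:MM:SS" to epoch seconds.
gap=$(( $(date -d "$om_ts" +%s) - $(date -d "$ws_ts" +%s) ))
echo "interval between wsNetExport and omNetExport: ${gap}s"
if [ "$gap" -gt 10 ]; then
  echo "SUSPECT: interval exceeds 10 seconds"
fi
```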
- View the /var/log/messages file of the OS to check whether sssd restarted frequently or nscd exception information was displayed when the fault occurred.
Example of sssd restarts:

```
Feb 7 11:38:16 10-132-190-105 sssd[pam]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[be[default]]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[be[default]]: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[pam]: Starting up
```
Example of nscd exception information:

```
Feb 11 11:44:42 10-120-205-33 nscd: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:43 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:44 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.92:21780: Can't contact LDAP server
```
- Check whether the LdapServer node is faulty, for example, whether its service IP address is unreachable or the network latency is too high. If the fault occurs periodically, locate and eliminate it, and run the top command to check whether abnormal software exists.
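A rough scan for the two symptoms above (sssd restarts and nss_ldap bind failures) is sketched below; the inline sample text stands in for the real /var/log/messages file.

```shell
# Count sssd restarts and LDAP bind failures in a messages excerpt.
# The sample text stands in for /var/log/messages on the faulty node.
sample="Feb 7 11:38:16 10-132-190-105 sssd: Starting up
Feb 11 11:44:42 10-120-205-33 nscd: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:44 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.92:21780: Can't contact LDAP server"

restarts=$(echo "$sample" | grep -c 'sssd: Starting up')
bind_failures=$(echo "$sample" | grep -c 'failed to bind to LDAP server')
echo "sssd restarts: $restarts, LDAP bind failures: $bind_failures"
```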
Check whether the memory of the NodeAgent process is insufficient.
- Log in to the faulty node as user root and run the following command to view the NodeAgent process logs:
vi /var/log/Bigdata/nodeagent/scriptlog/agent_gc.log.*.current
- Check whether the log file contains an error indicating that the metaspace size or heap memory size is insufficient.
- Run the su - omm command to switch to user omm, edit the corresponding file based on the cluster version, increase the values of nodeagent.Xms (initial heap memory) and nodeagent.Xmx (maximum heap memory), and save the modification.
The path of the file containing the parameters is as follows:
- Versions earlier than MRS 3.2.1: /opt/Bigdata/om-agent/nodeagent/bin/nodeagent_ctl.sh
- MRS 3.2.1 or later: $NODE_AGENT_HOME/etc/agent/nodeagent.properties
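The log check in the steps above can be sketched as follows; the inline excerpt (with hypothetical error lines) stands in for agent_gc.log.*.current.

```shell
# Look for the two out-of-memory signatures a JVM GC log may contain:
# metaspace exhaustion and heap exhaustion. The sample stands in for
# /var/log/Bigdata/nodeagent/scriptlog/agent_gc.log.*.current.
gc_log='[Full GC (Metadata GC Threshold)]
java.lang.OutOfMemoryError: Metaspace
[Full GC (Allocation Failure)]
java.lang.OutOfMemoryError: Java heap space'

oom_count=$(echo "$gc_log" | grep -c 'OutOfMemoryError')
echo "OutOfMemoryError entries found: $oom_count"
if [ "$oom_count" -gt 0 ]; then
  echo "Increase nodeagent.Xms and nodeagent.Xmx as described in this step."
fi
```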
- Run the following commands to restart the NodeAgent service:
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/stop-agent.sh
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/start-agent.sh
- Wait a moment and check whether this alarm is cleared.
- If yes, no further action is required.
- If no, go to 17.
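The restart-then-wait step can be sketched as a polling loop. This is an assumption-laden sketch: check_running is a stub that simulates the process coming up on the third poll; in practice it would wrap a real process check such as the ps pipeline used later in this procedure.

```shell
# Poll until the restarted NodeAgent process is detected, up to a limit.
# check_running is a stub that "succeeds" on the 3rd poll so the sketch
# is self-contained; replace it with a real process check.
attempts=0
max_attempts=5
check_running() { [ "$attempts" -ge 3 ]; }

until check_running; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge "$max_attempts" ]; then
    break
  fi
  # sleep 10   # real polling interval, skipped in the sketch
done
echo "polled $attempts times"
```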
Check whether the NodeAgent process is faulty.
- Log in to the faulty node as user omm and run the following command:
ps -ef | grep "Dprocess.name=nodeagent" | grep -v grep
- Check whether the command output is empty. If it is, the NodeAgent process is not running.
- View the NodeAgent startup and run logs to locate the fault. After the fault is rectified, go to 20.
- NodeAgent run logs: /var/log/Bigdata/nodeagent/agentlog/agent.log
- NodeAgent start and stop logs: /var/log/Bigdata/nodeagent/scriptlog/nodeagent_ctl.log
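The liveness check above can be wrapped in a small script. This sketch parses a captured sample line instead of live ps -ef output, so the result is deterministic.

```shell
# Decide whether NodeAgent is running from a ps -ef style listing.
# ps_output is a sample line standing in for live ps -ef output.
ps_output='omm   12345  1  0 10:00 ?  00:01:02 java -Dprocess.name=nodeagent'

if echo "$ps_output" | grep 'Dprocess.name=nodeagent' | grep -v grep > /dev/null; then
  state=running
else
  state=down
fi
echo "NodeAgent state: $state"
```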
- Run the following commands to restart the NodeAgent service:
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/stop-agent.sh
sh ${BIGDATA_HOME}/om-agent/nodeagent/bin/start-agent.sh
Collect fault information.
- On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
- Select the following services and click OK.
- NodeAgent
- Controller
- OS
- In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
- Contact O&M personnel and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None