ALM-12006 Node Fault
Description
Controller checks NodeAgent heartbeat messages every 30 seconds. If Controller does not receive heartbeat messages from a NodeAgent, it attempts to restart the NodeAgent process. This alarm is generated if the NodeAgent fails to restart three consecutive times.
This alarm is cleared when Controller can properly receive the status report of the NodeAgent.
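The restart-and-retry behavior described above can be sketched as a small shell loop. This is an illustration only, not Controller's actual implementation; `try_restart_agent` is a hypothetical placeholder for Controller's internal restart call, not a real FusionInsight command.

```shell
# Sketch of the alarm logic described above. Heartbeats are checked
# every 30 seconds; the alarm is raised only after three consecutive
# failed restart attempts. try_restart_agent is hypothetical.
restart_with_retries() {
    attempts=0
    while [ "$attempts" -lt 3 ]; do
        if try_restart_agent; then
            return 0            # NodeAgent recovered; no alarm
        fi
        attempts=$((attempts + 1))
    done
    echo "ALM-12006"            # three failures: raise the node fault alarm
    return 1
}
```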
Attribute
Alarm ID | Alarm Severity | Auto Clear
---|---|---
12006 | Major | Yes
Parameters
Name | Meaning
---|---
Source | Specifies the cluster or system for which the alarm is generated.
ServiceName | Specifies the service for which the alarm is generated.
RoleName | Specifies the role for which the alarm is generated.
HostName | Specifies the host for which the alarm is generated.
Impact on the System
Services on the node are unavailable.
Possible Causes
The network is disconnected, the hardware is faulty, or the operating system runs slowly.
Procedure
Check whether the network is disconnected, whether the hardware is faulty, or whether the operating system runs slowly.
- On the FusionInsight Manager portal, choose O&M > Alarm > Alarms. In the row containing the alarm, click the host name to view the address of the host for which the alarm is generated.
- Log in to the active management node as user root.
- Run the `ping <IP address of the faulty host>` command to check whether the faulty node is reachable.
- Contact the network administrator to check whether the network is faulty.
- Recover the network fault and check whether the alarm is cleared.
- If yes, no further action is required.
- If no, go to 6.
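The reachability check in the steps above can be wrapped in a small helper, sketched below. The host address is the one shown in the alarm details; `192.0.2.10` in the comment is a placeholder.

```shell
# Print whether a node answers ICMP ping (sketch; pass the faulty
# host's address from the alarm details).
check_node() {
    if ping -c 3 -W 2 "$1" > /dev/null 2>&1; then
        echo "reachable"
    else
        echo "unreachable"
    fi
}
# Example: check_node 192.0.2.10
```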
- Contact the system administrator to check whether the node hardware (CPU or memory) is faulty.
- Repair or replace the faulty components and restart the node. Then check whether the alarm is cleared.
- If yes, no further action is required.
- If no, go to 8.
- If a large number of node faults are reported in the cluster, the floating IP address resource may be abnormal. As a result, the controller cannot detect the agent heartbeat.
Log in to any management node and view the /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log file to check whether the logs generated one to two minutes before and after the fault occurs are complete.
For example, a complete log is in the following format:
2017-12-09 04:10:51,000 INFO (floatip) Read from ${BIGDATA_HOME}/om-server_8.1.0.1/om/etc/om/routeSetConf.ini,value is : yes
2017-12-09 04:10:51,000 INFO (floatip) check wsNetExport : eth0 is up.
2017-12-09 04:10:51,000 INFO (floatip) check omNetExport : eth0 is up.
2017-12-09 04:10:51,000 INFO (floatip) check wsInterface : eth0:oms, wsFloatIp: XXX.XXX.XXX.XXX.
2017-12-09 04:10:51,000 INFO (floatip) check omInterface : eth0:oms, omFloatIp: XXX.XXX.XXX.XXX.
2017-12-09 04:10:51,000 INFO (floatip) check wsFloatIp : XXX.XXX.XXX.XXX is reachable.
2017-12-09 04:10:52,000 INFO (floatip) check omFloatIp : XXX.XXX.XXX.XXX is reachable.
- Check whether the omNetExport log is missing after wsNetExport is detected, or whether the interval between the two log entries is 10 seconds or longer.
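The interval check above can be automated. The sketch below assumes GNU date and the timestamp format shown in the example log; `floatip_gap` is a hypothetical helper name.

```shell
# Print the gap in seconds between the first wsNetExport and omNetExport
# check lines of a floatip.log file (GNU date assumed; the log format is
# the one shown in the example above).
floatip_gap() {
    ws=$(grep 'check wsNetExport' "$1" | head -n 1 | cut -d',' -f1)
    om=$(grep 'check omNetExport' "$1" | head -n 1 | cut -d',' -f1)
    if [ -z "$ws" ] || [ -z "$om" ]; then
        echo "log incomplete"
        return 1
    fi
    echo $(( $(date -d "$om" +%s) - $(date -d "$ws" +%s) ))
}
```

Run it as `floatip_gap /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log`; a result of 10 or more suggests the floating IP address resource is abnormal.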
- View the /var/log/messages file of the operating system. For Red Hat, check whether sssd is frequently restarted; for SUSE, check whether an nscd exception exists.
For example, see whether there is the exception Can't contact LDAP server.
sssd restart example:
Feb 7 11:38:16 10-132-190-105 sssd[pam]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd[be[default]]: Shutting down
Feb 7 11:38:16 10-132-190-105 sssd: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[be[default]]: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[nss]: Starting up
Feb 7 11:38:16 10-132-190-105 sssd[pam]: Starting up
nscd exception example:
Feb 11 11:44:42 10-120-205-33 nscd: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:43 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
Feb 11 11:44:44 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.92:21780: Can't contact LDAP server
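Counting these events can be scripted; a sketch follows (the grep patterns match the example lines above, and the helper names are hypothetical):

```shell
# Count sssd restarts and LDAP-contact failures in a system log file
# (on the affected node this would be /var/log/messages).
sssd_restarts() {
    grep -c 'sssd: Starting up' "$1" || true
}
ldap_failures() {
    grep -c "Can't contact LDAP server" "$1" || true
}
```

Repeated restarts or a steady stream of LDAP failures points at the issue described in this step rather than a node hardware fault.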
- Check whether the LdapServer node is faulty, for example, whether its service IP address is unreachable or the network latency is too high. If the fault occurs periodically, locate and rectify it, and run the top command to check whether any abnormal software exists.
Collect fault information.
- On the FusionInsight Manager, choose O&M > Log > Download.
- In the Service area, select the following and click OK:
- NodeAgent
- Controller
- OS
- In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively. Then, click Download.
- Contact the O&M personnel and send the collected log information.
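The 10-minute collection window from the steps above can be computed with GNU date (a sketch; pass the alarm generation time, and `log_window` is a hypothetical helper name):

```shell
# Print the start and end of the log-collection window: 10 minutes
# before and after the given alarm generation time (GNU date assumed).
log_window() {
    t=$(date -d "$1" +%s)
    date -d "@$((t - 600))" '+%Y-%m-%d %H:%M:%S'
    date -d "@$((t + 600))" '+%Y-%m-%d %H:%M:%S'
}
# Example: log_window '2017-12-09 04:10:51'
```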
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None