
ALM-12006 Node Fault

Description

Controller checks NodeAgent heartbeat messages every 30 seconds. If Controller does not receive heartbeat messages from a NodeAgent, it attempts to restart the NodeAgent process. This alarm is generated if the NodeAgent fails to be restarted three consecutive times.

This alarm is cleared when Controller can properly receive the status report of the NodeAgent.
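The trigger and clearing logic amounts to the following minimal sketch. This is illustrative pseudo-shell, not Controller source code; check_heartbeat and restart_nodeagent are hypothetical placeholders for the real heartbeat and restart mechanisms.

    #!/bin/bash
    # Illustrative sketch of the ALM-12006 logic described above -- NOT
    # Controller source. check_heartbeat and restart_nodeagent are
    # hypothetical placeholders.
    failures=0
    while true; do
        if check_heartbeat; then
            failures=0                      # status report received: alarm clears
        elif restart_nodeagent; then
            failures=0                      # restart succeeded: reset the counter
        else
            failures=$((failures + 1))      # consecutive restart failure
            if [ "$failures" -ge 3 ]; then
                echo "Raise ALM-12006: NodeAgent failed to restart 3 times in a row"
            fi
        fi
        sleep 30                            # heartbeat is checked every 30 seconds
    done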

Attribute

Alarm ID: 12006
Alarm Severity: Major
Auto Clear: Yes

Parameters

Source: Specifies the cluster or system for which the alarm is generated.
ServiceName: Specifies the service for which the alarm is generated.
RoleName: Specifies the role for which the alarm is generated.
HostName: Specifies the host for which the alarm is generated.

Impact on the System

Services on the node are unavailable.

Possible Causes

The network is disconnected, the hardware is faulty, or the operating system runs slowly.

Procedure

Check whether the network is disconnected, whether the hardware is faulty, or whether the operating system runs slowly.

  1. On the FusionInsight Manager portal, choose O&M > Alarm > Alarms. In the row containing the alarm, click the host name to view the address of the host for which the alarm is generated.
  2. Log in to the active management node as user root.
  3. Run the ping command with the IP address of the faulty host to check whether the faulty node is reachable (see the example after this step).

    • If yes, go to 12.
    • If no, go to 4.
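    A minimal check, assuming 192.168.10.21 stands in for the faulty host's address (placeholder):

    # Send four probes to the faulty host; replace the placeholder address.
    ping -c 4 192.168.10.21
    # Exit status 0: reachable (go to 12). Non-zero: unreachable (go to 4).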

  4. Contact the network administrator to check whether the network is faulty.

    • If yes, go to 5.
    • If no, go to 6.

  5. Recover the network fault and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 6.

  6. Contact the system administrator to check whether the node hardware (CPU or memory) is faulty; a quick self-check sketch follows this step.

    • If yes, go to 7.
    • If no, go to 12.
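    A quick, non-authoritative self-check that can be run on the node before involving the administrator (the grep patterns are illustrative):

    # Look for machine-check (CPU) and memory errors in the kernel ring buffer.
    dmesg | grep -iE 'mce|machine check|memory error' | tail
    # Confirm the OS still recognizes the expected amount of memory.
    free -m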

  7. Repair or replace the faulty components and restart the node. Then check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 8.

  8. If a large number of node faults are reported in the cluster, the floating IP address resource may be abnormal; as a result, Controller cannot detect the NodeAgent heartbeat.

    Log in to any management node and view the /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log file to check whether the logs generated one to two minutes before and after the fault occurred are complete (a grep sketch follows this step).

    For example, a complete log is in the following format:

    2017-12-09 04:10:51,000 INFO (floatip) Read from ${BIGDATA_HOME}/om-server_8.1.0.1/om/etc/om/routeSetConf.ini,value is : yes
    2017-12-09 04:10:51,000 INFO (floatip) check wsNetExport : eth0 is up.
    2017-12-09 04:10:51,000 INFO (floatip) check omNetExport : eth0 is up.
    2017-12-09 04:10:51,000 INFO (floatip) check wsInterface : eth0:oms, wsFloatIp: XXX.XXX.XXX.XXX.
    2017-12-09 04:10:51,000 INFO (floatip) check omInterface : eth0:oms, omFloatIp: XXX.XXX.XXX.XXX.
    2017-12-09 04:10:51,000 INFO (floatip) check  wsFloatIp : XXX.XXX.XXX.XXX is reachable.
    2017-12-09 04:10:52,000 INFO (floatip) check  omFloatIp : XXX.XXX.XXX.XXX is reachable.
    • If yes, go to 12.
    • If no, go to 9.
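    For example, to pull the entries written around the fault time (the timestamp pattern below is a placeholder; adjust it to the actual fault time):

    # Show floatip log entries for the relevant minutes around the fault.
    grep '2017-12-09 04:1' /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log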

  9. Check whether the omNetExport log is printed only after the wsNetExport check completes, or whether the interval between the two log entries is 10 seconds or longer (a grep sketch follows this step).

    • If yes, go to 10.
    • If no, go to 12.
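    A rough way to compare the two timestamps, assuming the default log path:

    # Print the wsNetExport and omNetExport check lines; compare their
    # timestamps against the 10-second threshold.
    grep -E 'check (wsNetExport|omNetExport)' /var/log/Bigdata/omm/oms/ha/scriptlog/floatip.log | tail -n 4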

  10. View the /var/log/messages file of the operating system. For Red Hat, check whether sssd is frequently restarted; for SUSE, check whether an nscd exception exists. (A grep sketch follows the examples below.)

    For example, check whether the exception Can't contact LDAP server is reported.

    sssd restart example:

    Feb  7 11:38:16 10-132-190-105 sssd[pam]: Shutting down
    Feb  7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
    Feb  7 11:38:16 10-132-190-105 sssd[nss]: Shutting down
    Feb  7 11:38:16 10-132-190-105 sssd[be[default]]: Shutting down
    Feb  7 11:38:16 10-132-190-105 sssd: Starting up
    Feb  7 11:38:16 10-132-190-105 sssd[be[default]]: Starting up
    Feb  7 11:38:16 10-132-190-105 sssd[nss]: Starting up
    Feb  7 11:38:16 10-132-190-105 sssd[pam]: Starting up

    nscd exception example:

    Feb 11 11:44:42 10-120-205-33 nscd: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
    Feb 11 11:44:43 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.55:21780: Can't contact LDAP server
    Feb 11 11:44:44 10-120-205-33 ntpq: nss_ldap: failed to bind to LDAP server ldaps://10.120.205.92:21780: Can't contact LDAP server
    • If yes, go to 11.
    • If no, go to 12.
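    The checks above can be scripted roughly as follows (the patterns are illustrative):

    # Red Hat: count recent sssd restarts in the system log.
    grep -cE 'sssd.*(Shutting down|Starting up)' /var/log/messages
    # Either platform: look for LDAP connectivity failures.
    grep "Can't contact LDAP server" /var/log/messages | tail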

  11. Check whether the LdapServer node is faulty, for example, whether the service IP address is unreachable or the network latency is too high. If the fault occurs periodically, locate and rectify it, and run the top command to check whether abnormal software exists, as shown below.
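
    For example, assuming 192.168.10.30 stands in for the LdapServer service IP (placeholder):

    # Probe the LdapServer service IP for reachability and latency.
    ping -c 10 192.168.10.30
    # One-shot snapshot of the busiest processes to spot abnormal software.
    top -b -n 1 | head -n 20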

Collect fault information.

  12. On FusionInsight Manager, choose O&M > Log > Download.
  13. Select the following services from Service and click OK:

    • NodeAgent
    • Controller
    • OS

  14. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively, and click Download.
  15. Contact O&M personnel and send the collected log information.

Alarm Clearing

After the fault is rectified, the system automatically clears this alarm.

Related Information

None