
ALM-27004 Data Inconsistency Between Active and Standby DBServices (For MRS 2.x or Earlier)

Description

The system checks the data synchronization status between the active and standby DBServices every 10 seconds. This alarm is generated when the synchronization status cannot be queried for six consecutive times or when the synchronization status is abnormal.

This alarm is cleared when the synchronization status returns to normal.

Attribute

Alarm ID: 27004
Alarm Severity: Critical
Auto Clear: Yes

Parameters

ServiceName: Specifies the service for which the alarm is generated.
RoleName: Specifies the role for which the alarm is generated.
HostName: Specifies the host for which the alarm is generated.
Local DBService HA Name: Specifies the HA name of the local DBService.
Peer DBService HA Name: Specifies the HA name of the peer DBService.
SYNC_PERSENT: Specifies the synchronization percentage.

Impact on the System

If data is not synchronized between the active and standby DBServices, data may be lost or become abnormal when the active instance is faulty.

Possible Causes

  • The network between the active and standby nodes is unstable.
  • The standby DBService is abnormal.
  • The disk space of the standby node is full.

Procedure

  1. Check whether the network between the active and standby nodes is in normal state.

    1. Go to the cluster details page and choose Alarms.
    2. In the alarm list, locate the row that contains the alarm and view the IP address of the standby DBService node in the alarm details.
    3. Log in to the active DBService node.
    4. Run the ping command with the heartbeat IP address of the standby DBService to check whether the standby DBService node is reachable, as shown in the example after this step.
      • If yes, go to 2.a.
      • If no, go to 1.e.
    5. Contact the O&M personnel to check whether the network is faulty.
      • If yes, go to 1.f.
      • If no, go to 2.a.
    6. Rectify the network fault and check whether the alarm is cleared from the alarm list.
      • If yes, no further action is required.
      • If no, go to 2.a.
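
    For example, a minimal sketch of the connectivity check in 1.d, assuming 192.168.0.12 is the heartbeat IP address of the standby DBService taken from the alarm details:

      # Hypothetical heartbeat IP address; replace it with the value shown in the alarm details.
      ping -c 4 192.168.0.12

    If all packets are lost, the standby node is unreachable over the heartbeat network and the check in 1.e applies.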

  2. Check whether the standby DBService is in normal state.

    1. Log in to the standby DBService node.
    2. Run the following commands to switch the user:

      sudo su - root

      su - omm

    3. Go to the ${DBSERVER_HOME}/sbin directory and run the ./status-dbserver.sh command to check the GaussDB resource status of the standby DBService (the full command sequence is shown after this step). In the command output, check whether the following information is displayed in the row where ResName is gaussDB:

      Example:

      10_10_10_231 gaussDB Standby_normal Normal Active_standby
      • If yes, go to 3.a.
      • If no, go to 4.
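
    The commands in 2.b and 2.c can be run as the following sequence; if the standby DBService is normal, the gaussDB row of the output matches the example above:

      sudo su - root
      su - omm
      cd ${DBSERVER_HOME}/sbin
      ./status-dbserver.sh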

  3. Check whether the disk space of the standby node is insufficient.

    1. Log in to the standby DBService node.
    2. Run the following commands to switch the user:

      sudo su - root

      su - omm

    3. Go to the ${DBSERVER_HOME} directory, and run the following commands to obtain the DBService data directory:

      cd ${DBSERVER_HOME}

      source .dbservice_profile

      echo ${DBSERVICE_DATA_DIR}

    4. Run the df -h command to check the system disk partition usage.
    5. Check whether the partition containing the DBService data directory is full (see the combined example after this step).
      • If yes, go to 3.f.
      • If no, go to 4.
    6. Expand the disk capacity of the standby node.
    7. After capacity expansion, wait 2 minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 4.
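
    A combined sketch of the checks in 3.c to 3.e; passing the data directory to df -h prints the usage of the partition that holds it:

      cd ${DBSERVER_HOME}
      source .dbservice_profile
      echo ${DBSERVICE_DATA_DIR}
      # df with a path argument reports the partition containing that path; check the Use% column.
      df -h ${DBSERVICE_DATA_DIR}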

  4. Collect fault information.

    1. On MRS Manager, choose System > Export Log.
    2. Contact the O&M engineers and send the collected logs.

Reference

None