
ALM-27004 Data Inconsistency Between Active and Standby DBServices

Alarm Description

The system checks the data synchronization status between the active and standby DBServices every 10 seconds. This alarm is generated when the synchronization status cannot be queried six consecutive times or when the synchronization status is abnormal.

This alarm is cleared when the synchronization status becomes normal.

Alarm Attributes

  • Alarm ID: 27004
  • Alarm Severity: Critical
  • Alarm Type: Quality of service
  • Service Type: FusionInsight Manager
  • Auto Cleared: Yes

Alarm Parameters

Location Information

  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Additional Information

  • Local DBService HA: Specifies the local DBService HA.
  • Peer DBService HA: Specifies the peer DBService HA.
  • Synchronization of active and standby DBServices: Specifies the synchronization rate of the active and standby DBService nodes.

Impact on the System

If data is not synchronized between the active and standby DBServices and the active instance becomes abnormal, data may be lost or corrupted.

Possible Causes

  • The network between the active and standby nodes is unstable.
  • The standby DBService is abnormal.
  • The disk space of the standby node is full.
  • The CPU usage of the GaussDB process on the active DBService node is high. (You need to locate the fault based on logs.)

Handling Procedure

Check whether the network between the active and standby nodes is normal.

  1. On FusionInsight Manager, choose Cluster > Services > DBService > Instances to view the service IP address of the standby DBServer instance.
  2. Log in to the active DBService node as user root.
  3. Run the ping heartbeat IP address of the standby DBService command to check whether the standby DBService node is reachable (a sketch follows this procedure).

    • If yes, go to 6.
    • If no, go to 4.

  4. Contact the network administrator to check whether the network is faulty.

    • If yes, go to 5.
    • If no, go to 6.

  5. Rectify the network fault and check whether the alarm is cleared from the alarm list.

    • If yes, no further action is required.
    • If no, go to 6.
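The following is a minimal sketch of the connectivity check in step 3. The address 192.168.0.12 is a placeholder; substitute the actual heartbeat IP address of the standby DBService.

    # Send four probes to the standby heartbeat address.
    # 0% packet loss indicates that the standby node is reachable.
    ping -c 4 192.168.0.12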

Check whether the status of the standby DBService is normal.

  6. Log in to the standby DBService node as user root.
  7. Run the su - omm command to switch to user omm.
  8. Go to the ${DBSERVER_HOME}/sbin directory and run the ./status-dbserver.sh command. In the command output, check whether the GaussDB resource of the standby DBService is in the normal state. The following is an example of the row where ResName is gaussDB (see also the sketch after this procedure):

    10_10_10_231 gaussDB Standby_normal Normal Active_standby

    • If yes, go to 9.
    • If no, go to 16.
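The following is a minimal sketch of the status check in step 8, run as user omm on the standby node. It assumes that ${DBSERVER_HOME} is set in the omm user environment.

    cd ${DBSERVER_HOME}/sbin
    # Print only the GaussDB resource row; the status field should read Standby_normal.
    ./status-dbserver.sh | grep gaussDB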

Check whether the disk space of the standby node is full.

  9. Log in to the standby DBService node as user root.
  10. Run the su - omm command to switch to user omm.
  11. Go to the ${DBSERVER_HOME} directory and run the following commands to obtain the DBService data directory:

    cd ${DBSERVER_HOME}

    source .dbservice_profile

    echo ${DBSERVICE_DATA_DIR}

  12. Run the df -h command to check the system disk partition usage.
  13. Check whether the DBService data directory space is full (a combined sketch follows this procedure).

    • If yes, go to 14.
    • If no, go to 16.

  14. Expand the disk capacity of the node.
  15. After the disk capacity is expanded, wait 2 minutes and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 16.
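The following is a minimal sketch combining steps 11 to 13, checking the usage of the partition that holds the DBService data directory:

    cd ${DBSERVER_HOME}
    source .dbservice_profile
    # df -h on the data directory reports the usage of the file system that contains it.
    # A Use% close to 100% means the partition is full and must be expanded.
    df -h ${DBSERVICE_DATA_DIR}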

Collect fault information.

  16. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
  17. In the Service drop-down list, select DBService of the target cluster. In the OMS area, select OS, OS Statistics, and OS Performance, and click OK.
  18. Click the edit icon in the upper right corner and set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively. Then, click Download.
  19. Contact O&M engineers and provide the collected logs.

Alarm Clearance

This alarm is automatically cleared after the fault is rectified.

Related Information

None.