Updated on 2024-11-29 GMT+08:00

ALM-46004 Data Inconsistency Between Active and Standby MOTService Nodes

Alarm Description

The system checks the data synchronization status between the active and standby MOTService nodes every 10 seconds. This alarm is generated when the synchronization status cannot be queried six consecutive times or when the synchronization status is abnormal.

This alarm is cleared when the synchronization status becomes normal.
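The trigger condition above can be sketched as a consecutive-failure counter. The helper below is purely illustrative: the real check runs inside MOTService, and `alarm_fires` with its `normal`/`abnormal`/`unknown` result values is a hypothetical stand-in for the internal query.

```shell
# Illustrative sketch of the trigger condition; the real check runs inside
# MOTService every 10 seconds. "unknown" stands for a status query that
# could not be completed. alarm_fires is a hypothetical helper.
FAIL_LIMIT=6

alarm_fires() {
  # Arguments: one result per 10-second interval: normal | abnormal | unknown.
  fails=0
  for r in "$@"; do
    case "$r" in
      normal)   fails=0 ;;                               # counter resets on success
      abnormal) return 0 ;;                              # abnormal status: alarm
      unknown)  fails=$((fails + 1))
                [ "$fails" -ge "$FAIL_LIMIT" ] && return 0 ;;
    esac
  done
  return 1                                               # no alarm
}

alarm_fires unknown unknown unknown normal unknown && echo alarm || echo "no alarm"
```

Because a successful query resets the counter, the interleaved `normal` result in the sample call keeps the count below six, so it prints "no alarm".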

Alarm Attributes

Alarm ID: 46004
Alarm Severity: Critical
Alarm Type: Quality of service
Service Type: MOTService
Auto Cleared: Yes

Alarm Parameters

Type: Location Information

  Source: Specifies the cluster for which the alarm was generated.
  ServiceName: Specifies the service for which the alarm was generated.
  RoleName: Specifies the role for which the alarm was generated.
  HostName: Specifies the host for which the alarm was generated.

Type: Additional Information

  Local MOTService HA Name: Specifies the local MOTService HA.
  Peer MOTService HA Name: Specifies the peer MOTService HA.

Impact on the System

If the active instance becomes abnormal while data has not yet been synchronized to the standby instance, that data may be lost or become inconsistent.

Possible Causes

  • The network between the active and standby nodes is unstable.
  • The standby MOTService is abnormal.
  • The disk space of the standby node is full.
  • The CPU usage of the GaussDB process on the active MOTService node is high. (You need to locate the fault based on logs.)

Handling Procedure

Check whether the network between the active and standby nodes is normal.

  1. On FusionInsight Manager, choose Cluster > Services > MOTService > Instance. View and record the service IP addresses of MOTServer(Active) and MOTServer(Standby) instances.
  2. Log in to the MOTServer(Active) node as user omm.
  3. Run the following command to check whether the active and standby MOTService nodes are connected:

    ping Service IP address of the MOTServer(Standby) node
    • If yes, go to 6.
    • If no, go to 4.

  4. Contact the network administrator to check whether the network is faulty.

    • If yes, go to 5.
    • If no, go to 6.

  5. Rectify the network fault and check whether the alarm is cleared in the alarm list.

    • If yes, no further action is required.
    • If no, go to 6.
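The ping check above can be wrapped in a small helper for repeated use. The function below is an illustrative sketch; 127.0.0.1 is a placeholder for the MOTServer(Standby) service IP recorded from FusionInsight Manager.

```shell
# Hypothetical wrapper for the connectivity check; 127.0.0.1 is a placeholder
# for the MOTServer(Standby) service IP recorded from FusionInsight Manager.
check_standby_link() {
  # $1: service IP address of the MOTServer(Standby) node
  if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
    echo reachable       # continue with the standby MOTService check
  else
    echo unreachable     # involve the network administrator
  fi
}

check_standby_link "127.0.0.1"
```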

Check whether the standby MOTService is normal.

  6. Log in to the MOTServer(Standby) node as user omm.
  7. Run the following commands to check whether the GaussDB resource status of the standby MOTService is normal:

    cd ${MOTSERVER_HOME}/sbin

    ./status-motserver.sh

    For example, if the following information is displayed in the line where ResName is gaussDB, the service is normal:

    10_10_10_231 gaussDB Standby_normal Normal Active_standby
    • If yes, go to 8.
    • If no, go to 14.
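When status-motserver.sh reports many resource lines, the gaussDB line can be checked mechanically. The filter below is an assumption about scripting around the output; the field positions are inferred only from the sample line shown above.

```shell
# Hypothetical filter for status-motserver.sh output: succeeds only when the
# gaussDB resource line reports a normal standby state. Field positions are
# inferred from the sample line in this alarm reference.
gaussdb_standby_ok() {
  awk '$2 == "gaussDB" && $3 == "Standby_normal" && $4 == "Normal" { found = 1 }
       END { exit !found }'
}

# Example with the sample line from the procedure above:
echo "10_10_10_231 gaussDB Standby_normal Normal Active_standby" \
  | gaussdb_standby_ok && echo "standby normal" || echo "standby abnormal"
```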

Check whether the disk space of the standby node is full.

  8. Log in to the MOTServer(Standby) node as user omm.
  9. Go to the ${MOTSERVER_HOME} directory and run the following commands to obtain the MOTService data directory:

    cd ${MOTSERVER_HOME}

    source .motservice_profile

    echo ${MOTSERVICE_DATA_DIR}

  10. Run the df -h command to check the system disk partition usage.
  11. Check whether the space of the MOTService data directory is full.

    • If yes, go to 12.
    • If no, go to 14.

  12. Expand the disk capacity of the node.
  13. After the disk capacity is expanded, wait 2 minutes and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 14.
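The data-directory and df checks above can be combined into one step: print the usage of the filesystem that holds the data directory and compare it against full. The helper below is illustrative; pass it the value printed by echo ${MOTSERVICE_DATA_DIR}.

```shell
# Illustrative helper: print the usage percentage of the filesystem that
# holds a given directory (POSIX df -P output: row 2, column 5, e.g. "87%").
usage_pct() {
  df -P "$1" | awk 'NR == 2 { gsub("%", "", $5); print $5 }'
}

# Example: treat 100% usage of the data directory's filesystem as full.
DATA_DIR="/"   # placeholder; use the value of ${MOTSERVICE_DATA_DIR}
if [ "$(usage_pct "$DATA_DIR")" -ge 100 ]; then
  echo "data directory filesystem is full: expand the disk"
else
  echo "data directory filesystem has free space"
fi
```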

Collect fault information.

  14. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
  15. Expand the Service drop-down list, select MOTService for the target cluster, and click OK.
  16. Expand the Hosts drop-down list. In the Select Host dialog box that is displayed, select the hosts to which the role belongs.
  17. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively. Then, click Download.
  18. Contact O&M personnel/Technical support and provide the collected logs.

Alarm Clearance

This alarm is automatically cleared after the fault is rectified.

Related Information

None