ALM-19006 HBase Replication Sync Failed
Alarm Description
The alarm module checks the HBase disaster recovery (DR) data synchronization status every 30 seconds. This alarm is generated when DR data fails to be synchronized to the standby cluster.
When DR data synchronization succeeds, the alarm is cleared.
Alarm Attributes
Alarm ID | Alarm Severity | Alarm Type | Service Type | Auto Cleared
---|---|---|---|---
19006 | Critical | Error handling | HBase | Yes
Alarm Parameters
Type | Parameter | Description
---|---|---
Location Information | Source | Specifies the cluster for which the alarm is generated.
 | ServiceName | Specifies the service for which the alarm is generated.
 | RoleName | Specifies the role for which the alarm is generated.
 | HostName | Specifies the host for which the alarm is generated.
Additional Information | Trigger Condition | Specifies the threshold for triggering the alarm.
Impact on the System
HBase data in the active cluster cannot be synchronized to the standby cluster, and the data awaiting synchronization accumulates into a backlog, causing a large amount of active/standby data inconsistency. As a result, the latest data cannot be read from the standby cluster after an active/standby DR switchover or during dual-read. If the alarm persists, the backlog consumes storage space on the active cluster and its ZooKeeper nodes, eventually causing service faults in the active cluster.
Possible Causes
- The HBase service on the standby cluster is abnormal.
- A network exception occurs.
Handling Procedure
Observe whether the system automatically clears the alarm.
1. On FusionInsight Manager of the active cluster, choose O&M > Alarm > Alarms.
2. In the alarm list, click the alarm and obtain the alarm generation time from Generated. Check whether the alarm has existed for five minutes.
   - If yes, go to 4.
   - If no, go to 3.
3. Wait five minutes and check whether the system automatically clears the alarm.
   - If yes, no further action is required.
   - If no, go to 4.
Check the HBase service status of the standby cluster.
4. Log in to FusionInsight Manager of the active cluster and choose O&M > Alarm > Alarms.
5. In the alarm list, click the alarm and obtain HostName from Location.
6. Log in to the node where the HBase client of the active cluster resides as user omm.
If the cluster is in security mode, perform security authentication first and then access the hbase shell interface as user hbase.
cd /opt/client
source ./bigdata_env
kinit hbaseuser
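After authentication, start the HBase shell from the same client session; the commands in the following steps are run at its prompt. A minimal sketch (klist simply verifies that a Kerberos ticket was obtained; it applies to security mode only):
# Security mode only: confirm that kinit obtained a valid ticket.
klist
# Start the HBase shell used by the following steps.
hbase shell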
7. Run the status 'replication', 'source' command to check the DR synchronization status of the faulty node.
The DR synchronization status of a node is displayed as follows:
10-10-10-153:
    SOURCE: PeerID=abc, SizeOfLogQueue=0, ShippedBatches=2, ShippedOps=2, ShippedBytes=320, LogReadInBytes=1636, LogEditsRead=5, LogEditsFiltered=3, SizeOfLogToReplicate=0, TimeForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=0, TimeStampsOfLastShippedOp=Mon Jul 18 09:53:28 CST 2016, Replication Lag=0, FailedReplicationAttempts=0
    SOURCE: PeerID=abc1, SizeOfLogQueue=0, ShippedBatches=1, ShippedOps=1, ShippedBytes=160, LogReadInBytes=1636, LogEditsRead=5, LogEditsFiltered=3, SizeOfLogToReplicate=0, TimeForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=16788, TimeStampsOfLastShippedOp=Sat Jul 16 13:19:00 CST 2016, Replication Lag=16788, FailedReplicationAttempts=5
8. Obtain the PeerID of each record whose FailedReplicationAttempts value is greater than 0.
In the preceding output, data on the faulty node 10-10-10-153 fails to be synchronized to the standby cluster whose PeerID is abc1.
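When a cluster has many peers, the failing ones can be filtered from the status output with standard shell tools. A minimal sketch, run from the client node; the output path /tmp/repl_status.txt is a placeholder:
# Capture the replication status non-interactively (placeholder output path).
echo "status 'replication', 'source'" | hbase shell > /tmp/repl_status.txt
# Pair each PeerID with its FailedReplicationAttempts count and print peers with failures.
grep -o "PeerID=[^,]*\|FailedReplicationAttempts=[0-9]*" /tmp/repl_status.txt \
  | paste - - \
  | awk -F '[=\t]' '$4 > 0 {print $2}'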
9. Run the list_peers command to find the cluster and the HBase instance corresponding to the PeerID value.
PEER_ID  CLUSTER_KEY                                           STATE    TABLE_CFS
abc1     10.10.10.110,10.10.10.119,10.10.10.133:2181:/hbase2   ENABLED
abc      10.10.10.110,10.10.10.119,10.10.10.133:2181:/hbase    ENABLED
In the preceding output, /hbase2 indicates that data is synchronized to the HBase2 instance of the standby cluster.
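Peer definitions are also recorded under the replication znode in ZooKeeper, which offers an independent cross-check. A sketch, assuming one of the quorum addresses shown above and the default /hbase znode root (both deployment-specific):
# List registered replication peers directly from ZooKeeper (address is a placeholder).
zkCli.sh -server 10.10.10.110:2181 ls /hbase/replication/peers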
10. In the service list on FusionInsight Manager of the standby cluster, check whether the running status of the HBase instance obtained in 9 is Normal.
    - If yes, go to 14.
    - If no, go to 11.
11. In the alarm list, check whether the ALM-19000 HBase Service Unavailable alarm is generated.
    - If yes, go to 12.
    - If no, go to 14.
12. Rectify the fault by following the troubleshooting procedure in ALM-19000 HBase Service Unavailable.
13. Wait a few minutes and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 14.
Check network connections between RegionServers on the active and standby clusters.
14. Log in to FusionInsight Manager of the active cluster and choose O&M > Alarm > Alarms.
15. In the alarm list, click the alarm and obtain HostName from Location.
16. Use the IP address obtained in 15 to log in to the faulty RegionServer node as user omm.
17. Run the ping command to check whether the network connection between the faulty RegionServer node and the host where the RegionServer of the standby cluster resides is normal (see the connectivity sketch below).
    - If yes, go to 20.
    - If no, go to 18.
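A basic reachability check can be scripted from the faulty node, as sketched below. The address 10.10.10.110 is taken from the list_peers output above and the RegionServer RPC port 16020 is a common default; both are placeholders, so substitute the standby RegionServer address and the port configured in your cluster:
# ICMP reachability to the standby RegionServer host (placeholder address).
ping -c 4 10.10.10.110
# TCP check of the RegionServer RPC port using the bash /dev/tcp built-in (placeholder port).
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/10.10.10.110/16020' && echo "port reachable"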
18. Contact the network administrator to restore the network.
19. After the network recovers, check whether the alarm is cleared in the alarm list.
    - If yes, no further action is required.
    - If no, go to 20.
Collect fault information.
20. On FusionInsight Manager of both the active and standby clusters, choose O&M > Log > Download.
21. In the Service drop-down list, select HBase in the required cluster.
22. Click the edit icon in the upper right corner and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then click Download.
23. Contact O&M engineers and provide the collected fault logs.
Alarm Clearance
After the fault is rectified, the system automatically clears this alarm.
Related Information
None.