ALM-19030 P99 Latency of RegionServer RPC Request Exceeds the Threshold
Alarm Description
The system checks the P99 latency for responding to RPC requests on each RegionServer instance of the HBase service every 30 seconds. This alarm is generated when the P99 latency on a RegionServer instance exceeds the threshold for 10 consecutive checks.
This alarm is cleared when the P99 latency on a RegionServer instance is less than or equal to the threshold.
This alarm is generated only for MRS 3.3.1 or later.
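The generate/clear behavior described above follows a common consecutive-breach pattern. The following is a minimal sketch of that logic, assuming a hypothetical fetch_p99_latency() helper; the actual check is performed by FusionInsight Manager itself.

```python
import time

CHECK_INTERVAL_S = 30      # the system checks every 30 seconds
CONSECUTIVE_LIMIT = 10     # 10 consecutive breaches generate the alarm

def monitor(fetch_p99_latency, threshold_ms):
    """Sketch of the generate/clear logic described above."""
    breaches = 0
    alarm_raised = False
    while True:
        latency = fetch_p99_latency()          # hypothetical metric fetch
        if latency > threshold_ms:
            breaches += 1
            if breaches >= CONSECUTIVE_LIMIT and not alarm_raised:
                alarm_raised = True
                print("ALM-19030 generated")
        else:
            breaches = 0
            if alarm_raised:
                alarm_raised = False           # latency <= threshold clears the alarm
                print("ALM-19030 cleared")
        time.sleep(CHECK_INTERVAL_S)
```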
Alarm Attributes

Alarm ID | Alarm Severity | Auto Cleared
---|---|---
19030 |  | Yes
Alarm Parameters

Type | Parameter | Description
---|---|---
Location Information | Source | Specifies the cluster for which the alarm was generated.
 | ServiceName | Specifies the service for which the alarm was generated.
 | RoleName | Specifies the role for which the alarm was generated.
 | HostName | Specifies the host for which the alarm was generated.
Additional Information | Threshold | Specifies the threshold for generating the alarm.
Impact on the System
The RegionServer cannot properly provide services for external systems. For latency-sensitive services, a large number of read and write requests may time out.
Possible Causes
- The GC duration of the RegionServer is too long.
- The HDFS RPC response is too slow.
- A large number of client requests are sent with high concurrency.
Handling Procedure

1. Log in to FusionInsight Manager and choose O&M. In the navigation pane on the left, choose Alarm > Alarms. On the page that is displayed, locate the row containing the alarm whose Alarm ID is 19030, and view the service instance and host name in Location. Click the host name and record the service IP address of the host.

Check the GC duration of the RegionServer.

2. In the alarm list on FusionInsight Manager, check whether the alarm "ALM-19007 HBase GC Duration Exceeds the Threshold" is generated for the service instance recorded in 1. You can also check the GC time directly, as shown in the sketch after this block.
   - If yes, go to 3.
   - If no, go to 5.
3. Rectify the fault by following the handling procedure of "ALM-19007 HBase GC Duration Exceeds the Threshold".
4. Wait several minutes and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 5.
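To confirm the GC behavior manually, the following is a minimal sketch that polls the RegionServer JMX endpoint (the web UI, default port 16030, serves /jmx) and reports how much the cumulative JvmMetrics GcTimeMillis counter grew over one minute. The host placeholder, port, and bean name are assumptions to verify against your deployment.

```python
import json
import time
import urllib.request

# Placeholder host; the RegionServer web UI (default port 16030) serves /jmx.
JMX_URL = ("http://<regionserver-host>:16030/jmx"
           "?qry=Hadoop:service=HBase,name=JvmMetrics")

def gc_time_millis():
    with urllib.request.urlopen(JMX_URL) as resp:
        beans = json.load(resp)["beans"]
    return beans[0]["GcTimeMillis"]  # cumulative GC time of the JVM, in ms

before = gc_time_millis()
time.sleep(60)                       # sample over one minute
after = gc_time_millis()
print(f"GC time in the last minute: {after - before} ms")
```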
Check HDFS RPC response time.

5. In the alarm list on FusionInsight Manager, check whether an alarm is generated for the DataNode instance of the HDFS service on which HBase depends, or whether "ALM-12033 Slow Disk Fault", "ALM-12063 Disk Unavailable", or "ALM-14021 Average NameNode RPC Processing Time Exceeds the Threshold" is generated on the node for which this alarm is generated.
   - If yes, go to 6.
   - If no, go to 8.
6. Rectify the fault by following the handling procedure of the corresponding alarm: "ALM-12033 Slow Disk Fault", "ALM-12063 Disk Unavailable", or "ALM-14021 Average NameNode RPC Processing Time Exceeds the Threshold".
7. Wait several minutes and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 8.
8. Log in to the node for which the alarm is generated and run the iostat -x 2 command to check the disk I/O. In the command output, check whether the %util value of any disk is greater than 90% (see the sketch after this block for an automated check).
   - If yes, go to 9.
   - If no, go to 11.
9. Choose Cluster > Services > HDFS > Instances, select the DataNode instance of the node for which the alarm is generated, choose More > Stop Instance, enter the password of the current user, and click OK to stop the DataNode instance.
10. Wait several minutes and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 11.
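To avoid reading the iostat columns by eye, the following is a minimal sketch that takes a single extended iostat sample and flags disks whose %util exceeds 90%. It assumes a Linux sysstat iostat whose device table ends with the %util column.

```python
import subprocess

# A single extended report: "iostat -x 1 1" prints one sample and exits,
# unlike "iostat -x 2", which keeps printing a new report every 2 seconds.
out = subprocess.run(["iostat", "-x", "1", "1"],
                     capture_output=True, text=True, check=True).stdout

in_device_table = False
for line in out.splitlines():
    fields = line.split()
    if not fields:
        in_device_table = False          # a blank line ends the device table
        continue
    if fields[0].startswith("Device"):   # header row of the device table
        in_device_table = True
        continue
    if in_device_table:
        device, util = fields[0], float(fields[-1])  # %util is the last column
        if util > 90.0:
            print(f"{device}: %util={util} (possible disk bottleneck)")
```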
Check the number of concurrent requests on the RegionServer.

11. In the alarm list on FusionInsight Manager, check whether the alarm "ALM-19021 Handler Usage of RegionServer Exceeds the Threshold" is generated for the service instance recorded in 1. You can also inspect the RPC handler and queue metrics directly, as shown in the sketch after this block.
    - If yes, go to 12.
    - If no, go to 14.
12. Rectify the fault by following the handling procedure of "ALM-19021 Handler Usage of RegionServer Exceeds the Threshold".
13. Wait several minutes and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 14.
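For a direct view of request concurrency, the following is a minimal sketch that reads the RegionServer IPC metrics from the same JMX endpoint. The bean name and the metric names numActiveHandler and numCallsInGeneralQueue appear in recent HBase versions, but treat them as assumptions and confirm them against your cluster.

```python
import json
import urllib.request

# Placeholder host; the RegionServer web UI (default port 16030) serves /jmx.
URL = ("http://<regionserver-host>:16030/jmx"
       "?qry=Hadoop:service=HBase,name=RegionServer,sub=IPC")

with urllib.request.urlopen(URL) as resp:
    bean = json.load(resp)["beans"][0]

# Handlers currently busy with requests, and calls waiting in the general
# queue; sustained high values indicate highly concurrent client load.
print("active handlers:", bean.get("numActiveHandler"))
print("queued calls   :", bean.get("numCallsInGeneralQueue"))
```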
Collect fault information.

14. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
15. Expand the Service drop-down list, and select HBase for the target cluster.
16. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
17. Contact O&M engineers and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None.