ALM-19024 RPC Requests P99 Latency on RegionServer Exceeds the Threshold
Alarm Description
The system checks the P99 latency of RPC requests on each RegionServer instance of the HBase service every 30 seconds. This alarm is generated when the P99 latency of RPC requests on a RegionServer exceeds the threshold for 10 consecutive checks.
This alarm is cleared when the P99 latency of RPC requests on the RegionServer instance is less than or equal to the threshold.
This alarm applies only to MRS 3.3.0 or later.
Alarm Attributes
| Alarm ID | Alarm Severity | Auto Cleared |
|---|---|---|
| 19024 | | Yes |
Alarm Parameters
| Parameter | Description |
|---|---|
| Source | Specifies the cluster for which the alarm is generated. |
| ServiceName | Specifies the service for which the alarm is generated. |
| RoleName | Specifies the role for which the alarm is generated. |
| HostName | Specifies the host for which the alarm is generated. |
| Trigger Condition | Specifies the threshold for triggering the alarm. |
Impact on the System
If the P99 latency of RPC requests exceeds the threshold, the RegionServer cannot provide normal service performance. For latency-sensitive services, a large number of read and write requests may time out.
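For context on where this latency figure comes from: on open-source HBase, RegionServer IPC metrics (including call-time percentiles) are exposed as JSON through the JMX REST endpoint on the RegionServer info port (16030 by default; the MRS port and exact metric names may differ). A minimal sketch, assuming an illustrative bean and metric name, of extracting the P99 value and comparing it against a threshold:

```python
import json

# Trimmed, illustrative sample of the JSON returned by a RegionServer JMX
# endpoint such as http://<regionserver-host>:16030/jmx (open-source HBase).
# The bean and metric names below are assumptions for this sketch.
SAMPLE_JMX = """
{
  "beans": [
    {
      "name": "Hadoop:service=HBase,name=RegionServer,sub=IPC",
      "TotalCallTime_99th_percentile": 850
    }
  ]
}
"""

def rpc_p99_millis(jmx_json, threshold_ms):
    """Return (p99_ms, exceeded) for the RegionServer IPC 99th-percentile call time."""
    doc = json.loads(jmx_json)
    for bean in doc["beans"]:
        if bean["name"].endswith("sub=IPC"):
            p99 = bean["TotalCallTime_99th_percentile"]
            return p99, p99 > threshold_ms
    raise KeyError("IPC bean not found in JMX output")

p99, exceeded = rpc_p99_millis(SAMPLE_JMX, threshold_ms=1000)
print(p99, exceeded)  # 850 False
```

In a live cluster the JSON would be fetched from the endpoint rather than embedded; the parsing logic stays the same.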
Possible Causes
- RegionServer GC duration is too long.
- The HDFS RPC response is too slow.
- RegionServer request concurrency is too high.
Handling Procedure
1. Log in to FusionInsight Manager and choose O&M. In the navigation pane on the left, choose Alarm > Alarms. On the page that is displayed, locate the row containing the alarm whose Alarm ID is 19024, and view the service instance and host name in Location.

Check the GC duration of RegionServer.

2. In the alarm list on FusionInsight Manager, check whether the "HBase GC Duration Exceeds the Threshold" alarm is generated for the service instance in 1.
3. Rectify the fault by following the handling procedure of "ALM-19007 HBase GC Duration Exceeds the Threshold".
4. Wait several minutes and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 5.
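The GC check above can also be reasoned about numerically. Open-source Hadoop/HBase JVMs publish a cumulative GC-time counter (e.g. `GcTimeMillis` in the JvmMetrics bean; the exact name is an assumption here), so the fraction of a sampling interval spent in GC is the delta between two readings divided by the interval. A sketch:

```python
def gc_time_delta_ms(sample_before, sample_after):
    """GcTimeMillis is cumulative, so the delta between two readings
    approximates the GC time spent during the sampling interval."""
    return sample_after["GcTimeMillis"] - sample_before["GcTimeMillis"]

def gc_ratio(sample_before, sample_after, interval_ms):
    """Fraction of the interval spent in GC. Values approaching 1.0 mean the
    RegionServer is mostly paused, which directly inflates RPC P99 latency."""
    return gc_time_delta_ms(sample_before, sample_after) / interval_ms

# Two hypothetical readings of the JVM metrics bean taken 30 seconds apart.
before = {"GcTimeMillis": 120_000}
after = {"GcTimeMillis": 138_000}
print(gc_ratio(before, after, interval_ms=30_000))  # 0.6
```

A sustained ratio like the 0.6 above would corroborate the ALM-19007 GC alarm as the root cause.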
Check the HDFS RPC response time.

5. In the alarm list on FusionInsight Manager, check whether the "Average NameNode RPC Processing Time Exceeds the Threshold" alarm is generated for the HDFS service on which the HBase service depends.
6. Rectify the fault by following the handling procedure of "ALM-14021 Average NameNode RPC Processing Time Exceeds the Threshold".
7. Wait several minutes and check whether the alarm is cleared.
   - If yes, no further action is required.
   - If no, go to 8.
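The NameNode-side latency that step 5 checks is likewise observable as a metric: open-source Hadoop NameNodes expose an RpcActivity bean per RPC port with average processing and queue times. A hedged sketch, with an illustrative sample payload (the port and field names are assumptions for this example):

```python
import json

# Trimmed, illustrative sample of NameNode JMX output (for example from
# http://<namenode-host>:9870/jmx on open-source Hadoop 3; the bean name
# embeds the RPC port, shown here as 8020 purely for illustration).
SAMPLE_NN_JMX = """
{
  "beans": [
    {
      "name": "Hadoop:service=NameNode,name=RpcActivityForPort8020",
      "RpcProcessingTimeAvgTime": 45.2,
      "RpcQueueTimeAvgTime": 12.7
    }
  ]
}
"""

def namenode_rpc_times(jmx_json):
    """Return (processing_avg_ms, queue_avg_ms) from the RpcActivity bean."""
    for bean in json.loads(jmx_json)["beans"]:
        if "RpcActivityForPort" in bean["name"]:
            return bean["RpcProcessingTimeAvgTime"], bean["RpcQueueTimeAvgTime"]
    raise KeyError("RpcActivity bean not found")

print(namenode_rpc_times(SAMPLE_NN_JMX))  # (45.2, 12.7)
```

A high processing or queue time here would point to HDFS, not the RegionServer itself, as the bottleneck behind the P99 alarm.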
Check the number of concurrent requests on the RegionServer.

8. In the alarm list on FusionInsight Manager, check whether the "Handler Usage of RegionServer Exceeds the Threshold" alarm is generated for the service instance in 1.
9. Rectify the fault by following the handling procedure of "ALM-19021 Handler Usage of RegionServer Exceeds the Threshold".
10. Wait several minutes and check whether the alarm is cleared.
    - If yes, no further action is required.
    - If no, go to 11.
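The handler-usage notion behind the ALM-19021 check can be sketched as a simple ratio: the number of busy RPC handler threads divided by the pool size, which in open-source HBase is configured by `hbase.regionserver.handler.count` (default 30). The function below is an illustrative sketch, not the alarm's actual computation:

```python
def handler_usage(num_active, handler_count):
    """Fraction of RPC handler threads currently busy. The pool size comes from
    hbase.regionserver.handler.count (default 30 in open-source HBase).
    Sustained usage near 1.0 means new requests queue behind busy handlers,
    which drives up RPC P99 latency."""
    if handler_count <= 0:
        raise ValueError("handler_count must be positive")
    return num_active / handler_count

print(handler_usage(27, 30))  # 0.9
```

A value like 0.9 above suggests the concurrency cause in this section: the handler pool is nearly saturated.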
Collect fault information.

11. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
12. Expand the Service drop-down list and select HBase for the target cluster.
13. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then click Download.
14. Contact O&M personnel and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None.