
ALM-19024 RPC Requests P99 Latency on RegionServer Exceeds the Threshold

Alarm Description

The system checks the P99 latency of RPC requests on each RegionServer instance of the HBase service every 30 seconds. This alarm is generated when the P99 latency of RPC requests on a RegionServer exceeds the threshold for 10 consecutive checks.

This alarm is cleared when the P99 latency of RPC requests on the RegionServer instance is less than or equal to the threshold.
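
The trigger and clear behavior can be pictured with a short sketch. The following is a minimal illustration only, assuming a hypothetical helper get_p99_rpc_latency_seconds() that returns the current P99 RPC latency of one RegionServer; it is not FusionInsight's actual monitoring code, and the severity handling is simplified.

```python
import time

CHECK_INTERVAL_S = 30        # the system samples the metric every 30 seconds
CONSECUTIVE_BREACHES = 10    # alarm after 10 consecutive samples above the threshold
MAJOR_THRESHOLD_S = 5.0      # default Major threshold (seconds)
CRITICAL_THRESHOLD_S = 10.0  # default Critical threshold (seconds)

def monitor(get_p99_rpc_latency_seconds):
    """Raise or clear a notional ALM-19024 based on consecutive threshold breaches."""
    breaches = 0
    alarm_raised = False
    while True:
        p99 = get_p99_rpc_latency_seconds()
        if p99 > MAJOR_THRESHOLD_S:
            breaches += 1
        else:
            breaches = 0
            if alarm_raised:          # latency back at or below the threshold
                alarm_raised = False
                print("ALM-19024 cleared")
        if breaches >= CONSECUTIVE_BREACHES and not alarm_raised:
            severity = "Critical" if p99 > CRITICAL_THRESHOLD_S else "Major"
            alarm_raised = True
            print(f"ALM-19024 raised ({severity}): P99 = {p99:.2f} s")
        time.sleep(CHECK_INTERVAL_S)
```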

Alarm Attributes

Alarm ID: 19024

Alarm Severity:

  • Critical: The default threshold is 10 seconds.
  • Major: The default threshold is 5 seconds.

Alarm Type: Quality of service

Service Type: HBase

Auto Cleared: Yes

Alarm Parameters

Location Information:

  • Source: Specifies the cluster for which the alarm is generated.
  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Additional Information:

  • Threshold: Specifies the threshold for generating the alarm.

Impact on the System

If the P99 latency of RPC requests exceeds the threshold, the RegionServer cannot provide normal service performance to external systems. For latency-sensitive services, a large number of read and write requests may time out.
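
To see the latency this alarm is based on, you can read the RegionServer's RPC call-time percentiles from its JMX servlet. The sketch below assumes the RegionServer web UI is reachable over plain HTTP without authentication on the Apache default port 16030; ports, security settings, and exact metric names (such as TotalCallTime_99th_percentile) can vary with the HBase version and FusionInsight deployment.

```python
import json
import urllib.request

REGIONSERVER = "rs-host.example.com:16030"   # placeholder host:port of the RegionServer web UI

# Query the IPC metrics bean exposed by the Hadoop /jmx servlet.
url = f"http://{REGIONSERVER}/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=IPC"
with urllib.request.urlopen(url, timeout=10) as resp:
    beans = json.load(resp)["beans"]

ipc = beans[0] if beans else {}
for key in ("TotalCallTime_99th_percentile",
            "ProcessCallTime_99th_percentile",
            "QueueCallTime_99th_percentile"):
    if key in ipc:
        print(f"{key}: {ipc[key]} ms")   # call-time metrics are reported in milliseconds
```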

Possible Causes

  • RegionServer GC duration is too long.
  • The HDFS RPC response is too slow.
  • RegionServer request concurrency is too high.

Handling Procedure

  1. Log in to FusionInsight Manager and choose O&M. In the navigation pane on the left, choose Alarm > Alarms. On the page that is displayed, locate the row containing the alarm whose Alarm ID is 19024, and view the service instance and host name in Location.

Check the GC duration of RegionServer.

  2. In the alarm list on FusionInsight Manager, check whether the "HBase GC Duration Exceeds the Threshold" alarm is generated for the service instance in 1 (a rough manual GC check is sketched after this subsection).

    • If yes, go to 3.
    • If no, go to 5.

  3. Rectify the fault by following the handling procedure of "ALM-19007 HBase GC Duration Exceeds the Threshold".
  4. Wait several minutes and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 5.
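
If ALM-19007 is not visible but GC is still suspected, you can sample the RegionServer's cumulative GC time yourself. This is a rough sketch under the same assumptions as the earlier example (plain-HTTP /jmx access, placeholder host); GcTimeMillis comes from the Hadoop JvmMetrics source and is cumulative, so the delta between two samples approximates GC time in that window.

```python
import json
import time
import urllib.request

REGIONSERVER = "rs-host.example.com:16030"   # placeholder host:port of the RegionServer web UI

def gc_time_millis():
    # JvmMetrics exposes cumulative GC time for the RegionServer JVM.
    url = f"http://{REGIONSERVER}/jmx?qry=Hadoop:service=HBase,name=JvmMetrics"
    with urllib.request.urlopen(url, timeout=10) as resp:
        beans = json.load(resp)["beans"]
    return beans[0].get("GcTimeMillis", 0) if beans else 0

before = gc_time_millis()
time.sleep(60)                               # observe one minute of activity
after = gc_time_millis()
print(f"GC time over the last 60 s: {after - before} ms")
```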

Check HDFS RPC response time.

  5. In the alarm list on FusionInsight Manager, check whether alarm "Average NameNode RPC Processing Time Exceeds the Threshold" is generated for the HDFS service on which the HBase service depends (a manual check of the NameNode RPC processing time is sketched after this subsection).

    • If yes, go to 6.
    • If no, go to 8.

  6. Rectify the fault by following the handling procedure of "ALM-14021 Average NameNode RPC Processing Time Exceeds the Threshold".
  7. Wait several minutes and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 8.
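
Independently of the ALM-14021 alarm, the NameNode's average RPC processing time can be read from its /jmx servlet. The sketch below is an assumption-laden example: it uses the Apache default NameNode web port 9870 and RPC port 8020, both of which are typically different (and often served over HTTPS with authentication) in a FusionInsight cluster.

```python
import json
import urllib.request

NAMENODE = "nn-host.example.com:9870"   # placeholder host:port of the NameNode web UI
RPC_PORT = 8020                         # placeholder NameNode RPC listener port

# The bean name embeds the RPC listener port, so it must match the cluster configuration.
url = (f"http://{NAMENODE}/jmx?"
       f"qry=Hadoop:service=NameNode,name=RpcActivityForPort{RPC_PORT}")
with urllib.request.urlopen(url, timeout=10) as resp:
    beans = json.load(resp)["beans"]

rpc = beans[0] if beans else {}
print("RpcProcessingTimeAvgTime:", rpc.get("RpcProcessingTimeAvgTime"), "ms")
print("RpcQueueTimeAvgTime:", rpc.get("RpcQueueTimeAvgTime"), "ms")
```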

Check the number of concurrent requests on the RegionServer.

  8. In the alarm list on FusionInsight Manager, check whether the "Handler Usage of RegionServer Exceeds the Threshold" alarm is generated for the service instance in 1 (a handler usage estimate is sketched after this subsection).

    • If yes, go to 9.
    • If no, go to 11.

  9. Rectify the fault by following the handling procedure of "ALM-19021 Handler Usage of RegionServer Exceeds the Threshold".
  10. Wait several minutes and check whether the alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 11.
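
Handler usage can also be estimated directly from the same RegionServer IPC bean used earlier: the number of active handlers divided by the configured handler count (hbase.regionserver.handler.count). The sketch below assumes the numActiveHandler metric is exposed, as in recent Apache HBase releases, and uses a placeholder handler count.

```python
import json
import urllib.request

REGIONSERVER = "rs-host.example.com:16030"   # placeholder host:port of the RegionServer web UI
HANDLER_COUNT = 30                           # placeholder: value of hbase.regionserver.handler.count

url = f"http://{REGIONSERVER}/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=IPC"
with urllib.request.urlopen(url, timeout=10) as resp:
    beans = json.load(resp)["beans"]

active = beans[0].get("numActiveHandler", 0) if beans else 0
print(f"Active handlers: {active}/{HANDLER_COUNT} "
      f"({100.0 * active / HANDLER_COUNT:.1f}% handler usage)")
```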

Collect fault information.

  11. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
  12. Expand the Service drop-down list, and select HBase for the target cluster.
  13. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
  14. Contact O&M engineers and provide the collected logs.

Alarm Clearance

This alarm is automatically cleared after the fault is rectified.

Related Information

None.