ALM-43013 JDBCServer Process GC Duration Exceeds the Threshold

Alarm Description

The system checks the GC duration of JDBCServer every 60 seconds. A critical alarm is reported when the GC duration exceeds 12 seconds for three consecutive times. A major alarm is reported when the duration exceeds 9.6 seconds for three consecutive times. To change the threshold, choose O&M > Alarm > Thresholds > Name of the desired cluster > Spark > GC Time > JDBCServer Total GC time. This alarm is cleared when the JDBCServer GC duration is shorter than or equal to the threshold.
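
The triggering rule can be pictured as a simple counter over 60-second samples. The following Java sketch only illustrates that rule with hypothetical names and the default thresholds quoted above; it is not FusionInsight Manager's implementation.

```java
// Minimal sketch of the ALM-43013 triggering rule (hypothetical names, default thresholds).
// One GC duration sample is fed in per 60-second check; three consecutive samples above a
// threshold raise the corresponding alarm, and a sample at or below the threshold clears it.
public class GcAlarmCheck {
    enum Severity { NONE, MAJOR, CRITICAL }

    private static final double CRITICAL_THRESHOLD_SECONDS = 12.0; // default critical threshold
    private static final double MAJOR_THRESHOLD_SECONDS = 9.6;     // default major threshold
    private static final int CONSECUTIVE_SAMPLES = 3;

    private int overCritical = 0;
    private int overMajor = 0;

    /** Feed one 60-second GC duration sample; returns the severity to report. */
    Severity evaluate(double gcDurationSeconds) {
        overCritical = gcDurationSeconds > CRITICAL_THRESHOLD_SECONDS ? overCritical + 1 : 0;
        overMajor = gcDurationSeconds > MAJOR_THRESHOLD_SECONDS ? overMajor + 1 : 0;
        if (overCritical >= CONSECUTIVE_SAMPLES) return Severity.CRITICAL;
        if (overMajor >= CONSECUTIVE_SAMPLES) return Severity.MAJOR;
        return Severity.NONE; // duration at or below the threshold clears the alarm
    }

    public static void main(String[] args) {
        GcAlarmCheck check = new GcAlarmCheck();
        double[] samples = {13.0, 12.5, 12.8};        // three consecutive samples above 12 seconds
        for (double s : samples) {
            System.out.println(s + "s -> " + check.evaluate(s));
        }                                              // the last sample prints CRITICAL
    }
}
```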

Alarm Attributes

  Alarm ID: 43013
  Alarm Severity: Major (default threshold: 9.6 seconds for three consecutive times); Critical (default threshold: 12 seconds for three consecutive times)
  Alarm Type: Quality of service
  Service Type: Spark
  Auto Cleared: Yes

Alarm Parameters

  Location Information
    Source: Specifies the cluster for which the alarm is generated.
    ServiceName: Specifies the service for which the alarm is generated.
    RoleName: Specifies the role for which the alarm is generated.
    HostName: Specifies the host for which the alarm is generated.
  Additional Information
    Trigger Condition: Specifies the alarm triggering condition.

Impact on the System

If the GC duration exceeds the threshold, the performance of the JDBCServer process deteriorates, and the process can even be unavailable. As a result, Spark JDBC tasks are slow or fail to run.
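
To confirm whether JDBCServer is still serving requests, a minimal JDBC probe such as the sketch below can help. This is only an illustration: it assumes the Hive JDBC driver (hive-jdbc) is on the classpath, the host, port, user, and password are placeholders, and a Kerberos-secured cluster requires additional connection parameters.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Minimal connectivity probe against JDBCServer (Spark Thrift server over HiveServer2 protocol).
// The URL, database, user, and password are placeholders, not values from this document.
public class JdbcServerProbe {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://<jdbcserver-host>:<port>/default"; // placeholder connection URL
        try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
             Statement stmt = conn.createStatement()) {
            stmt.execute("SELECT 1"); // trivial query; slow or failing execution matches the impact above
            System.out.println("JDBCServer responded.");
        }
    }
}
```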

Possible Causes

The heap memory of the JDBCServer process is overused or inappropriately allocated, causing frequent occurrence of the GC process.
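
As background, a JVM's accumulated GC time and heap usage can be read through the standard java.lang.management MXBeans, as in the sketch below. This is generic JVM instrumentation run inside (or attached to) the process of interest; it is not necessarily the mechanism Manager uses to collect the JDBCServer GC metric.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints the cumulative GC statistics and heap usage of the current JVM.
public class GcInspection {
    public static void main(String[] args) {
        long totalGcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            totalGcMillis += gc.getCollectionTime(); // may be -1 if undefined for a collector
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Total GC time: %d ms; heap used: %d of %d bytes%n",
                totalGcMillis, heap.getUsed(), heap.getMax());
    }
}
```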

Handling Procedure

Check the GC duration.

  1. On FusionInsight Manager, choose O&M > Alarm > Alarms and select the alarm whose ID is 43013. Check the role name and the IP address of the host where the alarm is generated in Location.
  2. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Spark > Instance and click the JDBCServer instance for which the alarm is generated to open its Dashboard page. In the upper right corner of the Chart area, click the drop-down list, choose Customize > GC Time > Garbage Collection (GC) Time of JDBCServer, and click OK. Check whether the GC duration is longer than the threshold (default: 12 seconds).

    • If yes, go to 3.
    • If no, go to 6.

  3. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Spark > Configurations, click All Configurations, and select JDBCServer > Default. Increase the value of SPARK_DRIVER_MEMORY (default: 4G). If the alarm persists after the change, increase the value by a further 0.5 times; if the alarm is reported frequently, double it (see the worked example after this procedure). If the service volume and concurrency are high, you are advised to add JDBCServer instances.
  4. Restart all JDBCServer instances.
  5. Check whether the alarm is cleared 10 minutes later.

    • If yes, no further action is required.
    • If no, go to 6.
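
The wording "increase the value by 0.5 times" in step 3 means multiplying the current value by 1.5, while doubling multiplies it by 2. The following worked example, with illustrative values only, applies that rule to the 4G default.

```java
// Worked example of the adjustment rule in step 3 (illustrative values only):
// "increase by 0.5 times" multiplies the current value by 1.5, and doubling multiplies it by 2,
// so the 4G default becomes 6G or 8G respectively.
public class DriverMemoryAdjustment {
    static String adjust(int currentGb, boolean alarmReportedFrequently) {
        double factor = alarmReportedFrequently ? 2.0 : 1.5;
        return (int) Math.ceil(currentGb * factor) + "G";
    }

    public static void main(String[] args) {
        System.out.println(adjust(4, false)); // 6G (alarm persists occasionally)
        System.out.println(adjust(4, true));  // 8G (alarm reported frequently)
    }
}
```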

Collect fault information.

  6. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
  7. Expand the Service drop-down list, and select Spark for the target cluster.
  8. Click the edit icon in the upper right corner, and set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively. Then, click Download.
  9. Contact O&M engineers and provide the collected logs.

Alarm Clearance

This alarm is automatically cleared after the fault is rectified.

Related Information

None.