ALM-43017 JDBCServer Process Full GC Times Exceeds the Threshold
Alarm Description
The system checks the full GC count of the JDBCServer process every 60 seconds. A critical alarm is reported when the count exceeds the threshold (12 by default) for three consecutive checks. A major alarm is reported when the count exceeds 80% of the threshold, rounded down (12 x 0.8 = 9.6, rounded down to 9 by default), for three consecutive checks. You can change the threshold by choosing O&M > Alarm > Thresholds > Name of the desired cluster > Spark > GC Number > Full GC Number of JDBCServer. This alarm is cleared when the full GC count of the JDBCServer process is less than or equal to the threshold.
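For illustration, the consecutive-threshold logic described above can be sketched as follows. This is a minimal sketch of the stated rule only; the class and field names are hypothetical and do not reflect FusionInsight Manager internals.

```java
// Hypothetical sketch of the alarm rule: a severity fires only after the
// sampled full GC count exceeds its threshold for three consecutive checks.
public class FullGcAlarmEvaluator {
    private static final int CRITICAL_THRESHOLD = 12;                 // default, configurable
    private static final int MAJOR_THRESHOLD =
            (int) Math.floor(CRITICAL_THRESHOLD * 0.8);               // 12 x 0.8 = 9.6 -> 9
    private static final int CONSECUTIVE_HITS = 3;                    // three consecutive checks

    private int criticalHits = 0;
    private int majorHits = 0;

    /** Called once per 60-second check with the sampled full GC count. */
    public String evaluate(long fullGcCount) {
        criticalHits = (fullGcCount > CRITICAL_THRESHOLD) ? criticalHits + 1 : 0;
        majorHits    = (fullGcCount > MAJOR_THRESHOLD)    ? majorHits + 1    : 0;

        if (criticalHits >= CONSECUTIVE_HITS) return "CRITICAL";
        if (majorHits >= CONSECUTIVE_HITS)    return "MAJOR";
        return "CLEARED";  // a count at or below the threshold clears the alarm
    }
}
```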
Alarm Attributes
| Alarm ID | Alarm Severity | Alarm Type | Service Type | Auto Cleared |
|---|---|---|---|---|
| 43017 | Major (default threshold: 9 for three consecutive times); Critical (default threshold: 12 for three consecutive times) | Quality of service | Spark | Yes |
Alarm Parameters
| Type | Parameter | Description |
|---|---|---|
| Location Information | Source | Specifies the cluster for which the alarm is generated. |
| Location Information | ServiceName | Specifies the service for which the alarm is generated. |
| Location Information | RoleName | Specifies the role for which the alarm is generated. |
| Location Information | HostName | Specifies the host for which the alarm is generated. |
| Additional Information | Trigger Condition | Specifies the alarm triggering condition. |
Impact on the System
If the full GC count exceeds the threshold, the performance of the JDBCServer process deteriorates, and the process may even become unavailable. As a result, Spark JDBC tasks run slowly or fail.
Possible Causes
The heap memory of the JDBCServer process is overused or inappropriately allocated, causing frequent full GC.
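To see full GC activity from inside a JVM, the standard java.lang.management API exposes per-collector counters; the sketch below prints them for the current process. Old-generation collector names (for example, "PS MarkSweep" or "G1 Old Generation") vary with the GC algorithm in use.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the cumulative collection count and accumulated time for each
// garbage collector in the current JVM. The old-generation collector's
// counters correspond to full GC activity.
public class GcCounts {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

For a running JDBCServer process, the JDK's jstat tool reports the same counters from outside the JVM (the FGC column of jstat -gcutil <pid>).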
Handling Procedure
Check the Full GC times.
1. On FusionInsight Manager, choose O&M > Alarm > Alarms and select the alarm whose ID is 43017. Check the role name and the IP address of the host where the alarm is generated in Location.
2. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Spark > Instance and click the JDBCServer instance for which the alarm is generated to open its Dashboard page. In the upper right corner of the chart area, click the drop-down list, choose Customize > Full GC Number of JDBCServer, and click OK. Check whether the full GC count is larger than the threshold (default value: 12).
3. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Spark > Configurations > All Configurations > JDBCServer > Performance. The default value of SPARK_DRIVER_MEMORY is 4 GB. If this alarm is generated occasionally, increase the value to 1.5 times the current value (for example, from 4 GB to 6 GB); if the alarm is reported frequently, double it (for example, to 8 GB); see the sizing sketch after these steps. In the case of a large service volume and high service concurrency, you are advised to add instances instead.
4. Restart all JDBCServer instances.
5. Check whether the alarm is cleared 10 minutes later.
   - If yes, no further action is required.
   - If no, go to 6.
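A minimal sketch of the sizing rule from step 3. The helper below is hypothetical, not part of Spark or FusionInsight Manager; the actual SPARK_DRIVER_MEMORY change must still be made on the Manager configuration page.

```java
// Hypothetical helper illustrating the sizing rule: occasional alarms
// call for 1.5x the current heap, frequent alarms for 2x.
public class DriverMemorySizing {
    /** Returns the recommended SPARK_DRIVER_MEMORY in GB. */
    static double recommend(double currentGb, boolean alarmIsFrequent) {
        return alarmIsFrequent ? currentGb * 2.0 : currentGb * 1.5;
    }

    public static void main(String[] args) {
        System.out.println(recommend(4.0, false)); // default 4 GB -> 6 GB
        System.out.println(recommend(4.0, true));  // default 4 GB -> 8 GB
    }
}
```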
Collect fault information.
6. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
7. Expand the Service drop-down list and select Spark for the target cluster.
8. In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and after the alarm generation time, respectively. Then, click Download.
9. Contact O&M engineers and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None.