Updated on 2024-11-29 GMT+08:00

ALM-45642 RocksDB Continuously Triggers Write Traffic Limiting

Alarm Description

The system checks the RocksDB monitoring data of jobs at the user-specified alarm reporting interval (metrics.reporter.alarm.job.alarm.rocksdb.metrics.duration, 180s by default). This alarm is generated when RocksDB for a job continuously triggers write traffic limiting, that is, the RocksDB write rate is not 0. This alarm is cleared when the RocksDB write rate of the job becomes 0.

The rocksdb.actual-delayed-write-rate parameter specifies the RocksDB write rate of a job. Value 0 indicates that the rate is not limited, and other values indicate traffic limiting.
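When RocksDB native metrics are enabled, this metric is exposed per operator subtask through Flink's REST metrics endpoint as a JSON list of `{"id": ..., "value": ...}` entries. The following sketch, which assumes that response format (the function names are illustrative, not part of any product API), shows how the non-zero check behind this alarm can be evaluated:

```python
import json

def delayed_write_rates(metrics_json: str) -> dict:
    """Map metric id -> value for every actual-delayed-write-rate entry.

    Expects the JSON list format returned by Flink's subtask metrics
    endpoint, e.g. [{"id": "...rocksdb.actual-delayed-write-rate",
    "value": "0"}].
    """
    rates = {}
    for metric in json.loads(metrics_json):
        if metric["id"].endswith("rocksdb.actual-delayed-write-rate"):
            rates[metric["id"]] = float(metric["value"])
    return rates

def is_throttled(metrics_json: str) -> bool:
    """A non-zero delayed write rate means RocksDB is limiting writes."""
    return any(v != 0 for v in delayed_write_rates(metrics_json).values())
```

For example, a response containing `{"id": "op.rocksdb.actual-delayed-write-rate", "value": "16777216"}` would be reported as throttled, while a value of `"0"` would not.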

Alarm Attributes

| Alarm ID | Alarm Severity | Alarm Type | Service Type | Auto Cleared |
|----------|----------------|------------|--------------|--------------|
| 45642 | Minor | Quality of service | Flink | Yes |

Alarm Parameters

| Type | Parameter | Description |
|------|-----------|-------------|
| Location Information | Source | Specifies the cluster for which the alarm was generated. |
| | ServiceName | Specifies the service for which the alarm was generated. |
| | ApplicationName | Specifies the name of the application for which the alarm was generated. |
| | JobName | Specifies the job for which the alarm was generated. |
| | UserName | Specifies the username for which the alarm was generated. |

Impact on the System

The checkpoint performance of Flink jobs is affected. FlinkServer itself is not affected.

Possible Causes

Write traffic limiting has been triggered when the RocksDB write rate (rocksdb.actual-delayed-write-rate) of a Flink job is not 0. The possible causes are as follows:

  • There are too many MemTables. As a result, write traffic is limited or write stops, and ALM-45643 MemTable Size of RocksDB Continuously Exceeds the Threshold is generated.
  • The size of SST files at level 0 is too large, and ALM-45644 Number of SST Files at Level 0 of RocksDB Continuously Exceeds the Threshold is generated.
  • The estimated compaction size exceeds the threshold, and ALM-45647 Estimated Pending Compaction Size of RocksDB Continuously Exceeds the Threshold is generated.
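All three causes map to RocksDB tunables that Flink exposes in flink-conf.yaml. The fragment below is a sketch of options that influence MemTable pressure, level-0 file counts, and compaction backlog; the option names follow Flink's RocksDB state backend configuration, but treat the values as illustrative starting points, not recommendations from this guide:

```yaml
# Expose the metric this alarm is based on (off by default in Flink).
state.backend.rocksdb.metrics.actual-delayed-write-rate: true

# MemTable pressure (relates to ALM-45643): size and count of write buffers.
state.backend.rocksdb.writebuffer.size: 64mb
state.backend.rocksdb.writebuffer.count: 4

# Compaction throughput (relates to ALM-45644 / ALM-45647):
# more background threads let flushes and compactions keep up with writes.
state.backend.rocksdb.thread.num: 4
```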

Handling Procedure

Check whether write traffic limiting or write stop is caused due to too many MemTables.

  1. On FusionInsight Manager, choose O&M > Alarm > Alarms.
  2. In the alarm list, check whether ALM-45643 MemTable Size of RocksDB Continuously Exceeds the Threshold exists.

    • If yes, go to 3.
    • If no, go to 5.

  3. Handle the alarm by following the instructions provided in section ALM-45643 MemTable Size of RocksDB Continuously Exceeds the Threshold.
  4. After ALM-45643 is cleared, wait a few minutes and check whether this alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 5.

Check whether write traffic limiting or write stop is caused due to too many SST files at level 0.

  5. On FusionInsight Manager, choose O&M > Alarm > Alarms.
  6. In the alarm list, check whether ALM-45644 Number of SST Files at Level 0 of RocksDB Continuously Exceeds the Threshold exists.

    • If yes, go to 7.
    • If no, go to 9.

  7. Handle the alarm by following the instructions provided in section ALM-45644 Number of SST Files at Level 0 of RocksDB Continuously Exceeds the Threshold.
  8. After ALM-45644 is cleared, wait a few minutes and check whether this alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 9.

Check whether write traffic limiting or write stop is caused because the estimated compaction size exceeds the threshold.

  9. In the alarm list, check whether ALM-45647 Estimated Pending Compaction Size of RocksDB Continuously Exceeds the Threshold exists.

    • If yes, go to 10.
    • If no, go to 12.

  10. Handle the alarm by following the instructions provided in section ALM-45647 Estimated Pending Compaction Size of RocksDB Continuously Exceeds the Threshold.
  11. After ALM-45647 is cleared, wait a few minutes and check whether this alarm is cleared.

    • If yes, no further action is required.
    • If no, go to 12.

Collect fault information.

  12. Log in to FusionInsight Manager as a user who has the FlinkServer management permission.
  13. Choose O&M > Alarm > Alarms > ALM-45642 RocksDB Continuously Triggers Write Traffic Limiting, view Location, and obtain the name of the task for which the alarm is generated.
  14. Choose Cluster > Services > Yarn and click the link next to ResourceManager WebUI to go to the native Yarn page.
  15. Locate the abnormal task based on its name displayed in Location, search for and record the application ID of the job, and check whether the job logs are available on the native Yarn page.

    Figure 1 Application ID of a job
    • If yes, go to 16.
    • If no, go to 18.

  16. Click the application ID of the failed job to go to the job page.

    1. Click Logs in the Logs column to view JobManager logs.
      Figure 2 Clicking Logs
    2. Click the ID in the Attempt ID column and click Logs in the Logs column to view and save TaskManager logs.
      Figure 3 Clicking the ID in the Attempt ID column
      Figure 4 Clicking Logs

      You can also log in to Manager as a user who has the FlinkServer management permission. Choose Cluster > Services > Flink, and click the link next to Flink WebUI. On the displayed Flink web UI, click Job Management, click More in the Operation column, and select Job Monitoring to view TaskManager logs.

  17. View the job logs to rectify the fault, or contact the O&M personnel and send the collected fault logs. No further action is required.

If logs are unavailable on the Yarn page, download logs from HDFS.

  18. On Manager, choose Cluster > Services > HDFS, click the link next to NameNode WebUI to go to the HDFS page, choose Utilities > Browse the file system, and download logs in the /tmp/logs/Username/bucket-logs-tfile/Last four digits of the task application ID/Application ID of the task directory.
  19. View the logs of the failed job to rectify the fault, or contact the O&M personnel and send the collected fault logs.
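The log directory layout described above can be scripted when many jobs need to be collected. This sketch only builds the HDFS path from the username and application ID following that layout; the helper name is hypothetical, and the actual download would still go through the HDFS CLI or WebHDFS:

```python
def aggregated_log_dir(username: str, app_id: str) -> str:
    """Build the HDFS directory holding aggregated Yarn logs for a job.

    Mirrors the layout described in the procedure above:
    /tmp/logs/<username>/bucket-logs-tfile/<last 4 digits of app ID>/<app ID>
    This helper is illustrative, not part of any product API.
    """
    return f"/tmp/logs/{username}/bucket-logs-tfile/{app_id[-4:]}/{app_id}"
```

For an application ID such as application_1700000000000_0012 submitted by user flinkuser, this yields /tmp/logs/flinkuser/bucket-logs-tfile/0012/application_1700000000000_0012, which can then be passed to `hdfs dfs -get`.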

Alarm Clearance

This alarm is automatically cleared after the fault is rectified.

Related Information

None.