Updated on 2024-07-23 GMT+08:00

ALM-45636 Number of Consecutive Checkpoint Failures of a Flink Job Exceeds the Threshold

This section applies to MRS 3.3.1 or later.

Alarm Description

The system checks the number of consecutive checkpoint failures based on the configured alarm checking interval. This alarm is generated when the number of consecutive checkpoint failures of a FlinkServer job reaches the configured threshold. This alarm is cleared when checkpoints are recovered or the job is successfully restarted.
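For context, the number of checkpoint failures that a Flink job itself tolerates is controlled by Flink's own checkpointing options. The fragment below is an illustrative sketch of those settings, with hypothetical values; it is separate from the FlinkServer alarm threshold, which is configured on the alarm checking side:

```yaml
# flink-conf.yaml (illustrative values, not your cluster's defaults)
# How often checkpoints are triggered, in milliseconds.
execution.checkpointing.interval: 60000
# How many checkpoint failures the job tolerates before failing.
execution.checkpointing.tolerable-failed-checkpoints: 3
```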

Alarm Attributes

Alarm ID: 45636
Alarm Severity: Major
Auto Cleared: Yes

Alarm Parameters

Location Information:
    • Source: Specifies the cluster for which the alarm was generated.
    • ServiceName: Specifies the service for which the alarm was generated.
    • ApplicationName: Specifies the name of the application for which the alarm was generated.
    • JobName: Specifies the job for which the alarm was generated.
    • UserName: Specifies the username for which the alarm was generated.

Additional Information:
    • ThreshHoldValue: Specifies the threshold value for triggering the alarm.
    • CurrentValue: Specifies the value that triggered the alarm.

Impact on the System

The Flink job may fail. Check the job's status and logs to locate the fault. This is a job-level alarm and has no impact on the FlinkServer service.

Possible Causes

The causes vary by job. Check the failure cause in the job's JobManager and TaskManager logs, as described in the handling procedure.

Handling Procedure

  1. Log in to Manager as a user who has the FlinkServer management permission.
  2. Choose Cluster > Services > Yarn and click the link next to ResourceManager WebUI to go to the native Yarn page.
  3. Locate the failed job by the name shown in Location in the alarm information, search for and record the application ID of the job, and check whether the job logs are available on the native Yarn page.

    Figure 1 Application ID of a job
    • If yes, go to 4.
    • If no, go to 6.

  4. Click the application ID of the failed job to go to the job page.

    1. Click Logs in the Logs column to view JobManager logs.
      Figure 2 Clicking Logs
    2. Click the ID in the Attempt ID column and click Logs in the Logs column to view TaskManager logs.
      Figure 3 Clicking the ID in the Attempt ID column
      Figure 4 Clicking Logs

      You can also log in to Manager as a user who has the FlinkServer management permission. Choose Cluster > Services > Flink, and click the link next to Flink WebUI. On the displayed Flink web UI, click Job Management, click More in the Operation column, and select Job Monitoring to view TaskManager logs.

  5. View the logs of the failed job to rectify the fault, or contact the O&M engineers and send the collected fault logs. No further action is required.
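If a Yarn client is available, the aggregated logs can also be pulled from the command line instead of the web UI. A minimal sketch, with a hypothetical application ID (the actual fetch command is commented out because it needs cluster access and, on secure clusters, a valid Kerberos ticket):

```shell
# Hypothetical application ID; replace it with the one recorded
# from the native Yarn page.
APP_ID="application_1700000000000_0001"

# Fetch the aggregated JobManager/TaskManager logs with the Yarn CLI.
# Commented out so this sketch runs without a cluster:
#   yarn logs -applicationId "${APP_ID}" > "${APP_ID}.log"
echo "yarn logs -applicationId ${APP_ID}"
```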

If logs are unavailable on the Yarn page, download logs from HDFS.

  6. On Manager, choose Cluster > Services > HDFS, click the link next to NameNode WebUI to go to the HDFS page, choose Utilities > Browse the file system, and download logs in the /tmp/logs/Username/logs/Application ID of the failed job directory.
  7. View the logs of the failed job to rectify the fault, or contact the O&M engineers and send the collected fault logs.
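The HDFS log directory above can be assembled from the job's user name and application ID. A minimal sketch with hypothetical values (the actual download is commented out because it needs an HDFS client with cluster access):

```shell
# Hypothetical values; replace with your own user name and application ID.
USER_NAME="flinkuser"
APP_ID="application_1700000000000_0001"

# Aggregated Yarn logs for the failed job live under this HDFS directory.
LOG_DIR="/tmp/logs/${USER_NAME}/logs/${APP_ID}"

# Download them with the HDFS client (needs cluster access):
#   hdfs dfs -get "${LOG_DIR}" ./failed-job-logs
echo "${LOG_DIR}"
```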

Alarm Clearance

This alarm is cleared when FlinkServer job checkpoints are recovered or the job is successfully restarted.

Related Information

None.