Updated on 2024-09-23 GMT+08:00

ALM-45639 Checkpointing of a Flink Job Times Out

Description

The system checks the checkpointing duration of Flink jobs every 30 seconds. This alarm is generated when the checkpointing duration of a Flink job exceeds the threshold (600 seconds by default). This alarm is cleared when the checkpointing duration of the job is less than or equal to the threshold.
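
The checkpoint behavior this alarm monitors is governed by the job's own checkpoint settings. The following is a minimal Java sketch, based on the standard Flink DataStream API, of where the checkpoint interval and timeout are typically set when a job is developed. The values mirror the 600-second default mentioned above and are examples only, not values mandated by MRS.

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointTimeoutExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Trigger a checkpoint every 60 seconds (example value).
            env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

            // Abort a checkpoint that does not complete within 600 seconds,
            // matching the default threshold of this alarm (example value).
            env.getCheckpointConfig().setCheckpointTimeout(600_000L);

            // Leave at least 30 seconds between the end of one checkpoint
            // and the start of the next (example value).
            env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000L);

            // A real job would define its streaming topology here.
            env.fromElements(1, 2, 3).print();
            env.execute("checkpoint-timeout-example");
        }
    }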

Attribute

Alarm ID: 45639
Alarm Severity: Minor
Auto Clear: Yes

Parameters

Source: Specifies the cluster for which the alarm is generated.
ServiceName: Specifies the service for which the alarm is generated.
ApplicationName (available in MRS 3.2.1 or later): Specifies the name of the application for which the alarm is generated.
JobName: Specifies the job for which the alarm is generated.
UserName: Specifies the username for which the alarm is generated.

Impact on the System

The checkpointing fails. You need to locate the cause. This is a job-level alarm and has no impact on FlinkServer.

Possible Causes

The job may be in the sub-healthy state. The possible causes are as follows (a tuning sketch follows this list):

  • The memory configured for the TaskManagers of the job is insufficient.
  • The job state is too large, making checkpointing time-consuming.
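
Insufficient TaskManager memory is usually addressed through configuration (for example, by increasing taskmanager.memory.process.size or the TaskManager memory specified when the job is submitted) rather than in code. For state that is too large, a common mitigation is to keep the state in RocksDB and checkpoint it incrementally. The following is a minimal Java sketch under the assumption that the Flink version bundled with your cluster is 1.13 or later (earlier versions use different state backend classes) and that the flink-statebackend-rocksdb dependency is on the classpath; the path and values are examples only.

    import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LargeStateCheckpointTuning {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000L);

            // Keep large state on disk in RocksDB and ship only the delta of each
            // checkpoint (incremental checkpoints) instead of the full state.
            env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

            // Store checkpoint data on HDFS; the directory is an example only.
            env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");

            // Let checkpoint barriers overtake buffered records so that
            // backpressure does not stall checkpointing (Flink 1.11+).
            env.getCheckpointConfig().enableUnalignedCheckpoints();

            // A real job would define its topology and call env.execute() here.
        }
    }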

Procedure

  1. Log in to Manager as a user who has the FlinkServer management permission.
  2. Choose O&M > Alarm > Alarms > ALM-45639 Checkpointing of a Flink Job Times Out, view Location, and obtain the name of the task for which the alarm is generated.
  3. Choose Cluster > Services > Yarn and click the link next to ResourceManager WebUI to go to the native Yarn page.
  4. Locate the failed job based on the name obtained from Location, search for and record the application ID of the job, and check whether the job logs are available on the Yarn page. (The application ID can also be queried from the command line; see the sketch after this step.)

    Figure 1 Application ID of a job
    • If yes, go to 5.
    • If no, go to 7.
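
    If you prefer the command line to the native Yarn page, the application ID can also be queried from a node where the cluster client is installed. A hedged example follows; the grep filter is a placeholder and should be replaced with the job name obtained from Location.

      # List all applications, including finished ones, and filter by the job name.
      yarn application -list -appStates ALL | grep "<JobName from Location>"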

  5. Click the application ID of the failed job to go to the job page.

    1. Click Logs in the Logs column to view JobManager logs.
      Figure 2 Clicking Logs
    2. Click the ID in the Attempt ID column and click Logs in the Logs column to view TaskManager logs.
      Figure 3 Clicking the ID in the Attempt ID column
      Figure 4 Clicking Logs

      You can also log in to Manager as a user who has the FlinkServer management permission. Choose Cluster > Services > Flink, and click the link next to Flink WebUI. On the displayed Flink web UI, click Job Management, click More in the Operation column, and select Job Monitoring to view TaskManager logs.

  6. View the logs of the failed job to rectify the fault, or contact the O&M personnel and send the collected fault logs. No further action is required.

If logs are unavailable on the Yarn page, download the logs from HDFS.

  7. On Manager, choose Cluster > Services > HDFS, click the link next to NameNode WebUI to go to the HDFS page, choose Utilities > Browse the file system, and download the logs in the /tmp/logs/<Username>/logs/<Application ID of the failed job> directory. (A command-line alternative is sketched after this list.)
  8. View the logs of the failed job to rectify the fault, or contact the O&M personnel and send the collected fault logs.
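
The directory above is where Yarn log aggregation stores the job logs, so they can also be fetched from a node where the cluster client is installed. A hedged example follows, assuming the client environment variables have been sourced and the user has been authenticated (kinit in a security cluster); the application ID and paths are placeholders.

    # Print the aggregated JobManager and TaskManager logs of the job.
    yarn logs -applicationId application_1700000000000_0001

    # Or copy the raw aggregated log files from HDFS to the local node.
    hdfs dfs -get /tmp/logs/<Username>/logs/application_1700000000000_0001 ./flink-job-logs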

Alarm Clearing

This alarm is automatically cleared when the checkpointing duration of the Flink job is less than or equal to the threshold.

Related Information

None