ALM-45636 Flink Job Checkpoints Keep Failing
This section applies to MRS 3.1.2 and later versions that are earlier than 3.3.0.
Description
The system checks the number of consecutive checkpoint failures based on the configured alarm checking interval. This alarm is generated when the number of consecutive checkpoint failures of a FlinkServer job reaches the configured threshold. This alarm is cleared when checkpoints are recovered or the job is successfully restarted.
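The failure tolerance this alarm monitors is related to, but distinct from, the tolerance a Flink job declares for itself. As a minimal sketch, assuming a standard open-source Flink DataStream job (the 60-second interval and the tolerance of three failures are placeholder values, not FlinkServer defaults):

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToleranceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Trigger an exactly-once checkpoint every 60 seconds (placeholder value).
        env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

        // Tolerate up to 3 failed checkpoints before the job itself fails;
        // the default (0) fails the job on the first failed checkpoint.
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(3);

        // Minimal pipeline so the example is runnable end to end.
        env.fromElements(1, 2, 3).print();
        env.execute("checkpoint-tolerance-example");
    }
}
```

With setTolerableCheckpointFailureNumber(3), the job keeps running through up to three failed checkpoints; the alarm threshold and checking interval monitored by FlinkServer are configured independently on the server side.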
Attribute
| Alarm ID | Alarm Severity | Auto Clear |
| --- | --- | --- |
| 45636 | Major | Yes |
Parameters
| Name | Meaning |
| --- | --- |
| Source | Specifies the cluster for which the alarm is generated. |
| ServiceName | Specifies the service for which the alarm is generated. |
| JobName | Specifies the job for which the alarm is generated. |
| Username | Specifies the username of the job for which the alarm is generated. |
Impact on the System
The Flink job may fail. You need to check the status and logs of the Flink job to locate the fault. This is a job-level alarm and has no impact on FlinkServer.
Possible Causes
The causes of the checkpoint failures are recorded in the logs of the failed job. Obtain and check the logs by following the procedure below.
Procedure
1. Log in to Manager as a user who has the FlinkServer management permission.
2. Choose Cluster > Services > Yarn and click the link next to ResourceManager WebUI to go to the Yarn page.
3. Locate the failed job based on its name displayed in Location, search for and record the application ID of the failed job, and check whether the job logs are available on the Yarn page.
Figure 1 Application ID of a job
If yes, go to 4.
If no, go to 6.
4. Click the application ID of the failed job to go to the job page.
   a. Click Logs in the Logs column to view JobManager logs.
Figure 2 Clicking Logs
   b. Click the ID in the Attempt ID column and click Logs in the Logs column to view TaskManager logs.
Figure 3 Clicking the ID in the Attempt ID column
Figure 4 Clicking Logs
Alternatively, log in to Manager as a user who has the FlinkServer management permission, choose Cluster > Services > Flink, and click the link next to Flink WebUI. On the displayed Flink web UI, click Job Management, and then choose More > Job Monitoring in the Operation column to view the TaskManager logs.
5. View the logs of the failed job to rectify the fault, or contact O&M personnel and send them the collected fault logs. No further action is required.
If logs are unavailable on the Yarn page, download logs from HDFS.
6. On Manager, choose Cluster > Services > HDFS, click the link next to NameNode WebUI to go to the HDFS page, select Utilities > Browse the file system, and download the logs in the /tmp/logs/<username>/logs/<application ID of the failed job> directory. For a scripted alternative, see the sketch after this procedure.
7. View the logs of the failed job to rectify the fault, or contact O&M personnel and send them the collected fault logs.
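If the web UIs are unreachable, the application ID lookup (step 3) and the log download (step 6) can also be scripted. The following is a minimal sketch, assuming the standard open-source YARN and HDFS client APIs and the default aggregated-log path /tmp/logs/<username>/logs/<application ID>; the job name, user, and local destination are hypothetical placeholders.

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class FetchFlinkJobLogs {
    public static void main(String[] args) throws Exception {
        // Placeholder values; substitute your own job name and user.
        String jobName = "my-flink-job";
        String user = "flinkuser";

        // Assumes the cluster's core-site.xml/yarn-site.xml/hdfs-site.xml
        // are on the classpath so the clients can reach the cluster.
        YarnConfiguration conf = new YarnConfiguration();

        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(conf);
        yarn.start();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Find the application whose name matches the Flink job.
            for (ApplicationReport app : yarn.getApplications()) {
                if (jobName.equals(app.getName())) {
                    String appId = app.getApplicationId().toString();
                    // Aggregated container logs live under
                    // /tmp/logs/<user>/logs/<application ID> by default.
                    Path remote = new Path("/tmp/logs/" + user + "/logs/" + appId);
                    Path local = new Path("/tmp/" + appId);
                    fs.copyToLocalFile(remote, local);
                    System.out.println("Copied logs for " + appId + " to " + local);
                }
            }
        } finally {
            yarn.stop();
        }
    }
}
```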
Alarm Clearing
This alarm is cleared when Flink job checkpoints are recovered or the job is successfully restarted.
Related Information
None