Updated on 2024-11-08 GMT+08:00

Viewing Flink Job Details

After creating a Flink job, you can check the basic information, job details, task list, and execution plan of the job on the DLI console.

This section describes how to check information about a Flink job.

Table 1 Viewing Flink job information

Type

Description

Instruction

Basic information

Includes the job ID, job type, job execution status, and more.

Viewing Basic Information

Job details

Includes SQL statements and parameter settings. For a Flink Jar job, only the parameter settings are included.

Viewing Details

Job monitoring

You can use Cloud Eye to check job data input and output details.

Viewing Monitoring Information

Task list

You can view details about each task running on a job, including the task start time, number of received and transmitted bytes, and running duration.

Viewing the Task List

Execution plan

You can understand the operator flow direction of a running job.

Viewing the Execution Plan

Viewing Basic Information

In the navigation pane of the DLI console, choose Job Management > Flink Jobs. The Flink Jobs page displays all Flink jobs. You can check basic information about any Flink job in the list.

Table 2 Basic information about a Flink job

Parameter

Description

ID

ID of a submitted Flink job, which is generated by the system by default.

Name

Name of the submitted Flink job.

Type

Type of the submitted Flink job, which includes:

  • Flink SQL
  • Flink Jar
  • Flink OpenSource SQL

Status

Status of the job. The status displayed on the console prevails.

Description

Description of the submitted Flink job.

Username

Name of the user who submits the job.

Created

Time when the job was created.

Started

Time when the Flink job started to run.

Duration

Time consumed by job running.

Operation

  • Edit: Edit a created job.
  • Start: Start and run a job.
  • More
    • FlinkUI: Selecting this will display the Flink job execution page.
      NOTE:

      If you select FlinkUI immediately after submitting a job to a new queue, an empty projectID is cached and the FlinkUI page cannot be displayed, because it takes about 10 minutes to create a cluster in the background.

      You are advised to use a dedicated queue to ensure immediate availability of clusters for your jobs. Alternatively, select FlinkUI when the job is in the Running state.

    • Stop: Stop a Flink job. If it is grayed out, jobs in the current state cannot be stopped.
    • Delete: Delete a job.
      NOTE:

      Deleted jobs cannot be restored.

    • Modify Name and Description: You can modify the name and description of a job.
    • Import Savepoint: Import the data exported from the original Cloud Stream Service (CS) job.
    • Trigger Savepoint: You can select this operation for jobs in the Running state to save the job status.
    • Permissions: You can view the user permissions corresponding to the job and grant permissions to other users.
    • Runtime Configuration: You can enable Alarm Generation on Job Exception and Auto Restart on Exception.

Viewing Details

This section describes how to view job details. After you create and save a job, you can click the job name to view job details, including SQL statements and parameter settings. For a Jar job, you can only view its parameter settings.

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. Click the name of the job to be viewed. The Job Details tab is displayed.

    In the Job Details tab, you can view SQL statements, configured parameters, and total cost for the job.

    The following uses a Flink SQL job as an example.
    Table 3 Parameter descriptions

    Parameter

    Description

    Type

    Job type, for example, Flink SQL

    Name

    Flink job name

    Description

    Description of a Flink job

    Status

    Running status of a job

    Running Mode

    The dedicated resource mode is used by default.

    Flink Version

    Version of Flink selected for the job.

    Queue

    Name of the queue where the Flink job runs

    UDF Jar

    This parameter is displayed when UDF Jar is set.

    Runtime Configuration

    Displayed when a user-defined parameter is added to a job

    CUs

    Number of CUs configured for a job

    Job Manager CUs

    Number of job manager CUs configured for a job.

    Parallelism

    Number of tasks that a Flink job can execute concurrently

    CU(s) per TM

    Number of CUs occupied by each Task Manager configured for a job

    Slot(s) per TM

    Number of Task Manager slots configured for a job

    OBS Bucket

    OBS bucket name. After Enable Checkpointing and Save Job Log are enabled, checkpoints and job logs are saved in this bucket.

    Save Job Log

    Whether the job running logs are saved to OBS

    Alarm on Job Exception

    Whether job exceptions are reported

    SMN Topic

    Name of the SMN topic. This parameter is displayed when Alarm Generation upon Job Exception is enabled.

    Auto Restart upon Exception

    Whether automatic restart is enabled.

    Max. Retry Attempts

    Maximum number of retries upon an exception. Unlimited indicates that the number of retries is not limited.

    Restore Job from Checkpoint

    Whether the job can be restored from a checkpoint

    ID

    Job ID

    Savepoint

    OBS path of the savepoint

    Enable Checkpointing

    Whether checkpointing is enabled

    Checkpoint Interval

    Interval for storing intermediate job running results to OBS, in seconds.

    Checkpoint Mode

    Checkpoint mode. Available values are as follows:

    • At least once: Events are processed at least once.
    • Exactly once: Events are processed only once.

    Idle State Retention Time

    Clears the intermediate states of operators such as GroupBy, RegularJoin, Rank, and Deduplicate that have not been updated within the maximum retention time. The default value is 1 hour.

    Dirty Data Policy

    Policy for processing dirty data. The value is displayed only when a dirty data policy is configured. Available values are as follows:

    • Ignore
    • Trigger a job exception
    • Save

    Dirty Data Dump Address

    OBS path for storing dirty data when Dirty Data Policy is set to Save.

    Created

    Time when a job is created

    Updated

    Time when a job was last updated
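
The CUs, Job Manager CUs, Parallelism, CU(s) per TM, and Slot(s) per TM parameters are related: under the usual Flink sizing rule, the number of Task Managers is roughly the parallelism divided by the slots per Task Manager, and the total CUs are the Job Manager CUs plus the Task Manager CUs. The sketch below illustrates that relationship; it is an approximation, not DLI's actual allocation logic.

```python
import math

def estimate_resources(parallelism, slots_per_tm, cus_per_tm, jobmanager_cus):
    """Rough sizing estimate: how many Task Managers a job needs and the
    total CUs it would consume. Follows the common Flink rule
    (Task Managers = ceil(parallelism / slots per TM)); DLI's actual
    allocation may differ."""
    task_managers = math.ceil(parallelism / slots_per_tm)
    total_cus = jobmanager_cus + task_managers * cus_per_tm
    return task_managers, total_cus

# Example: parallelism 8, 2 slots and 1 CU per Task Manager, 1 CU for the Job Manager
tms, cus = estimate_resources(parallelism=8, slots_per_tm=2, cus_per_tm=1, jobmanager_cus=1)
print(tms, cus)  # 4 Task Managers, 5 CUs in total
```

When tuning a job, this is why increasing Parallelism without adjusting Slot(s) per TM raises the CU consumption: more Task Managers are spun up to host the extra parallel tasks.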

Viewing Monitoring Information

You can use Cloud Eye to view details about job data input and output.

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. Click the name of the job you want. The job details are displayed.

    Click Job Monitoring in the upper right corner of the page to switch to the Cloud Eye console.
    Figure 1 Monitoring a Job

    The following table describes monitoring metrics related to Flink jobs.

    Table 4 Monitoring metrics related to Flink jobs

    Name

    Description

    Flink Job Data Read Rate

    Displays the data input rate of a Flink job for monitoring and debugging. Unit: record/s.

    Flink Job Data Write Rate

    Displays the data output rate of a Flink job for monitoring and debugging. Unit: record/s.

    Flink Job Total Data Read

    Displays the total number of data inputs of a Flink job for monitoring and debugging. Unit: records

    Flink Job Total Data Write

    Displays the total number of output data records of a Flink job for monitoring and debugging. Unit: records

    Flink Job Byte Read Rate

    Displays the number of input bytes per second of a Flink job. Unit: byte/s

    Flink Job Byte Write Rate

    Displays the number of output bytes per second of a Flink job. Unit: byte/s

    Flink Job Total Read Byte

    Displays the total number of input bytes of a Flink job. Unit: byte

    Flink Job Total Write Byte

    Displays the total number of output bytes of a Flink job. Unit: byte

    Flink Job CPU Usage

    Displays the CPU usage of Flink jobs. Unit: %

    Flink Job Memory Usage

    Displays the memory usage of Flink jobs. Unit: %

    Flink Job Max Operator Latency

    Displays the maximum operator delay of a Flink job. Unit: ms

    Flink Job Maximum Operator Backpressure

    Displays the maximum operator backpressure value of a Flink job. A larger value indicates severer backpressure. Available values are as follows:

    • 0: OK
    • 50: low
    • 100: high
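
The rate metrics above are per-second values, while the "Total" metrics are cumulative counters. If you collect the cumulative metrics yourself, a rate can be derived from two samples taken a known interval apart, and the backpressure metric maps to the three levels listed above. The helpers below are an illustrative sketch, not part of any DLI SDK:

```python
def rate(prev_total, curr_total, interval_s):
    """Derive a per-second rate (record/s or byte/s) from two samples of a
    cumulative counter (e.g. Flink Job Total Data Read) taken
    interval_s seconds apart."""
    return (curr_total - prev_total) / interval_s

def backpressure_level(value):
    """Map the Flink Job Max Operator Backpressure metric to its level:
    0 -> OK, 50 -> low, 100 -> high (a larger value means severer
    backpressure)."""
    if value >= 100:
        return "high"
    if value >= 50:
        return "low"
    return "OK"

print(rate(1000, 4000, 60))    # 50.0 (record/s)
print(backpressure_level(50))  # low
```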

Viewing the Task List

You can view details about each task running on a job, including the task start time, number of received and transmitted bytes, and running duration.

If the number of received bytes is 0, the job has not received any data from the data source.

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. Click the name of the job you want. The job details are displayed.
  3. Click the Task List tab and view the node information about the task.

    Figure 2 Task list
    View the operator task list. The following table describes the task parameters.
    Table 5 Parameter descriptions

    Parameter

    Description

    Name

    Name of an operator.

    Duration

    Running duration of an operator.

    Max Concurrent Jobs

    Number of parallel tasks in an operator.

    Task

    Operator tasks are categorized as follows:

    • The digit in red indicates the number of failed tasks.
    • The digit in light gray indicates the number of canceled tasks.
    • The digit in yellow indicates the number of tasks that are being canceled.
    • The digit in green indicates the number of finished tasks.
    • The digit in blue indicates the number of running tasks.
    • The digit in sky blue indicates the number of tasks that are being deployed.
    • The digit in dark gray indicates the number of tasks in a queue.

    Status

    Status of an operator task.

    Back Pressure Status

    Working load status of an operator. Available options are as follows:

    • OK: indicates that the operator is under a normal working load.
    • LOW: indicates that the operator is under a slightly high working load, but DLI can still process data quickly.
    • HIGH: indicates that the operator is under a high working load. The data input speed at the source end is slow.

    Delay

    Duration from the time when the source data starts being processed to the time when it reaches the current operator, in milliseconds.

    Sent Records

    Number of data records sent by an operator.

    Sent Bytes

    Number of bytes sent by an operator.

    Received Bytes

    Number of bytes received by an operator.

    Received Records

    Number of data records received by an operator.

    Started

    Time when an operator starts running.

    Ended

    Time when an operator stops running.
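
The color-coded digits in the Task column are simply per-state counts of an operator's tasks. As a minimal sketch of that bookkeeping (the state names are taken from the legend above and are assumptions about the underlying labels; this is not DLI code):

```python
from collections import Counter

# Color legend from the task list: state -> display color
TASK_STATE_COLORS = {
    "FAILED": "red",
    "CANCELED": "light gray",
    "CANCELING": "yellow",
    "FINISHED": "green",
    "RUNNING": "blue",
    "DEPLOYING": "sky blue",
    "QUEUED": "dark gray",
}

def summarize_tasks(task_states):
    """Count an operator's tasks per state, mirroring the colored digits
    shown in the Task column."""
    counts = Counter(task_states)
    return {state: counts.get(state, 0) for state in TASK_STATE_COLORS}

summary = summarize_tasks(["RUNNING", "RUNNING", "FINISHED", "FAILED"])
print(summary["RUNNING"], summary["FAILED"])  # 2 1
```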

Viewing the Execution Plan

You can view the execution plan to understand the operator stream information about the running job.

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. Click the name of the job you want. The job details are displayed.
  3. Click the Execution Plan tab to view the operator flow direction.

    Figure 3 Execution plan
    Click a node. The corresponding information is displayed on the right of the page.
    • Scroll the mouse wheel to zoom in or out.
    • The stream diagram displays the operator stream information about the running job in real time.