
Overview

On the Flink Jobs page under Job Management, you can submit Flink jobs. Currently, the following job types are supported:

  • Flink SQL uses SQL statements to define jobs and can be submitted to any general-purpose queue.
  • Flink Jar lets you customize a JAR package job based on Flink APIs. It runs on dedicated queues; a minimal sketch of such a job's entry class follows this list.
  • Flink OpenSource SQL is compatible with the SQL syntax of community Flink 1.10 and can run only on CCE queues.
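
For reference, here is a minimal sketch of what the entry class of a Flink Jar job might look like (the class name WordCountJob and the job name are illustrative assumptions, not names required by DLI). It uses Flink's DataStream API; the compiled class is packaged into a JAR, and the JAR is then submitted on a dedicated queue.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Minimal, illustrative Flink Jar entry class: counts words from an in-memory source.
public class WordCountJob {
    public static void main(String[] args) throws Exception {
        // Obtain the execution environment provided by the Flink runtime.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("hello", "world", "hello")
           // Map each word to a (word, 1) pair; returns() is required because Java lambdas
           // erase the tuple's generic types.
           .map(word -> Tuple2.of(word, 1))
           .returns(Types.TUPLE(Types.STRING, Types.INT))
           .keyBy(t -> t.f0)
           .sum(1)
           .print();

        // The job name is an arbitrary example.
        env.execute("word-count-jar-job");
    }
}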

Flink job management provides the functions described in the following sections.

Assigning Agency Permissions

Agencies are required for DLI to execute Flink jobs. You can set the agency when you log in to the management console for the first time, or choose Global Configurations > Service Authorization to modify the agencies later.

The required permissions are as follows:

  • Tenant Administrator (global) permissions are required to access OBS data when executing Flink jobs on DLI, for example, obtaining OBS or GaussDB(DWS) data sources, dumping logs (including bucket authorization), enabling checkpointing, and importing and exporting jobs.

    Due to cloud service cache differences, it takes about 60 minutes for the permission settings to take effect.

  • DIS Administrator permissions are required to use DIS data as the data source of DLI Flink jobs.

    Due to cloud service cache differences, it takes about 30 minutes for the permission settings to take effect.

  • CloudTable Administrator permissions are required to use CloudTable data as the data source of DLI Flink jobs.

    Due to cloud service cache differences, it takes about 3 minutes for the permission settings to take effect.

Flink Jobs Page

On the Overview page, click Flink Jobs to go to the Flink job management page. Alternatively, you can choose Job Management > Flink Jobs from the navigation pane on the left. The page displays all Flink jobs. If there are a large number of jobs, they will be displayed on multiple pages. DLI allows you to view jobs in all statuses.

Table 1 Job management parameters

  • ID: ID of a submitted Flink job, which is generated by the system by default.
  • Name: Name of the submitted Flink job.
  • Type: Type of the submitted Flink job. Options:
    • Flink SQL: Flink SQL jobs
    • Flink Jar: Flink Jar jobs
    • Flink OpenSource SQL: Flink OpenSource SQL jobs
  • Status: Status of the job. Options:
    • Draft
    • Submitting
    • Submission failed
    • Running: Billing starts. The job has been submitted and a normal result has been returned.
    • Running exception: Billing stops. The job has stopped running due to an exception.
    • Downloading
    • Idle
    • Stopping
    • Stopped
    • Stopping failed
    • Creating the savepoint
    • Stopped due to arrears: Billing ends. The job has been stopped because the account is in arrears.
    • Restoring (recharged jobs): The account in arrears has been recharged, and the job is being restored.
    • Completed
  • Description: Description of the submitted Flink job.
  • Username: Name of the user who submitted the job.
  • Created: Time when the job was created.
  • Started: Time when the Flink job started to run.
  • Duration: Time consumed by running the job.

  • Operation: Operations that can be performed on the job:
    • Edit: Edit a created job. For details, see Editing a Job.
    • Start: Start and run a job. For details, see Starting a Job.
    • More:
      • FlinkUI: Click to open the Flink job execution page.
        NOTE: When you execute a job on a newly created queue, the cluster is restarted, which takes about 10 minutes. If you click FlinkUI before the cluster has been created, an empty projectID is cached and the FlinkUI page cannot be displayed. You are advised to use a dedicated queue so that the cluster is not released. Alternatively, wait for a while after the job is submitted (until the cluster has been created) before checking FlinkUI.
      • Stop: Stop a Flink job. If this option is unavailable, jobs in the current status cannot be stopped.
      • Delete: Delete a job.
        NOTE: A deleted job cannot be restored.
      • Modify Name and Description: Modify the name and description of a job. For details, see Modifying Name and Description.
      • Import Savepoint: Import the data exported from the original CS job. For details, see Importing to a Savepoint.
      • Trigger Savepoint: Save the status of a running job. This option is available only for jobs in the Running status. For details, see Triggering a Savepoint.
      • Permissions: View the user permissions for the job and grant permissions to other users. For details, see Managing Flink Job Permissions.
      • Runtime Configuration: Enable Alarm Generation upon Job Exception and Auto Restart upon Exception. For details, see Runtime Configuration.