Updated on 2024-04-29 GMT+08:00

DLI Flink Job

Function

The DLI Flink Job node is used to create and start a DLI Flink job, or to check whether a DLI Flink job is running, so that streaming big data can be analyzed in real time.

After a DLI Flink streaming job is submitted to DLI, the node is executed successfully once the job enters the running state. If periodic scheduling is configured for the job, the system periodically checks whether the Flink job is still in the running state; the node is executed successfully as long as the job remains running.

Parameters

For details about how to configure the parameters of DLI Flink jobs, see the following:

  • Property parameters:
    If the job is a Flink SQL job, Flink OpenSource SQL job, or user-defined Flink job, the system creates and starts the job based on the job parameters configured on the node.
    • Existing Flink job: For details, see Table 1.
    • Flink SQL job: For details, see Table 2.
    • Flink OpenSource SQL job: For details, see Table 3.
    • User-defined Flink job: For details, see Table 4.
  • Advanced parameters: For details, see Table 5.
Table 1 Property parameters of an existing Flink job

  • Job Type (mandatory)
    Select Existing Flink job.
  • Job Name (mandatory)
    Name of an existing DLI Flink job.
  • Node Name (mandatory)
    Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

Table 2 Property parameters of a Flink SQL job

  • Node Name (mandatory)
    Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).
  • Job Type (mandatory)
    Select Flink SQL job. You can start the job by compiling SQL statements.
  • Job Name (mandatory)
    Name of the DLI Flink job. It must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.
  • Job name must be prefixed with workspace name (optional)
    Whether to add the workspace name as a prefix to the name of the created job.
  • Script Path (mandatory)
    Path to the Flink SQL script to be executed. If the script has not been created yet, create and develop it by referring to Creating a Script and Developing an SQL Script. A minimal sketch of such a script follows this table.
  • Script Parameter (optional)
    If the associated Flink SQL script uses parameters, the parameter names are displayed. Set each parameter value in the text box next to its name. A parameter value can be an EL expression.
    If the parameters of the associated Flink SQL script change, click the refresh button to refresh the parameter list.
  • UDF Jar (optional)
    This parameter is valid only when a dedicated queue is selected for DLI Queue. Before selecting a UDF JAR resource package, upload the JAR package to an OBS bucket and create a resource on the Manage Resource page. For details, see Creating a Resource.
    In SQL statements, you can then call the user-defined functions packaged in the JAR file, as shown in the sketch after this table.
  • DLI Queue (mandatory)
    A shared queue is selected by default. You can also select a dedicated custom queue.
    NOTE:
    • During job creation, a sub-user can only select a queue that has been allocated to the user.
    • The Spark component version of the default DLI queue is not up-to-date, so an error may be reported indicating that a table creation statement cannot be executed. In this case, you are advised to create a queue to run your tasks. To enable execution of table creation statements in the default queue, contact the customer service or technical support of the DLI service.
    • The default queue of DLI is for trial use only and may be occupied by multiple users at a time. Therefore, you may fail to obtain resources for related operations. If the execution takes a long time or fails, try again during off-peak hours or use a self-built queue to run the job.
  • CUs (mandatory)
    A compute unit (CU) is the pricing unit of DLI. One CU consists of 1 vCPU and 4 GB of memory.
  • Concurrency (mandatory)
    Number of Flink SQL jobs that run at the same time.
    NOTE:
    The value of Concurrency must not exceed 4 x (Number of CUs – 1). For example, a job with 4 CUs supports a concurrency of at most 4 x (4 – 1) = 12.
  • Auto Restart upon Exception (optional)
    Whether to enable automatic restart. If this function is enabled, a job that becomes abnormal is automatically restarted.
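
The following is a minimal sketch of what an associated Flink SQL script might contain. It is written in the legacy DLI Flink SQL dialect (CREATE SOURCE STREAM/CREATE SINK STREAM); the DIS channel names, field list, region value, and the udf_normalize function are hypothetical placeholders for illustration only, not values from this document.

  -- Hypothetical source stream that reads JSON records from a DIS channel
  CREATE SOURCE STREAM car_infos (
    car_id STRING,
    car_speed INT
  )
  WITH (
    type = "dis",
    region = "cn-north-1",
    channel = "dis-input",
    encode = "json"
  );

  -- Hypothetical sink stream that writes results to another DIS channel
  CREATE SINK STREAM speeding_cars (
    car_id STRING,
    car_speed INT
  )
  WITH (
    type = "dis",
    region = "cn-north-1",
    channel = "dis-output",
    encode = "json"
  );

  -- udf_normalize is a hypothetical function assumed to come from an attached UDF Jar
  INSERT INTO speeding_cars
  SELECT udf_normalize(car_id), car_speed
  FROM car_infos
  WHERE car_speed > 120;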

Table 3 Property parameters of a Flink OpenSource SQL job

  • Node Name (mandatory)
    Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).
  • Job Type (mandatory)
    Select Flink OpenSource SQL job. You can start the job by compiling SQL statements.
  • Job Name (mandatory)
    Name of the DLI Flink job. It must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.
  • Job name must be prefixed with workspace name (optional)
    Whether to add the workspace name as a prefix to the name of the created job.
  • Script Path (mandatory)
    Path to the Flink SQL script to be executed. If the script has not been created yet, create and develop it by referring to Creating a Script and Developing an SQL Script. A minimal sketch of an OpenSource SQL script follows this table.
  • Script Parameter (optional)
    If the associated Flink SQL script uses parameters, the parameter names are displayed. Set each parameter value in the text box next to its name. A parameter value can be an EL expression.
    If the parameters of the associated Flink SQL script change, click the refresh button to refresh the parameter list.
  • UDF Jar (optional)
    This parameter is valid only when a dedicated queue is selected for DLI Queue. Before selecting a UDF JAR resource package, upload the JAR package to an OBS bucket and create a resource on the Manage Resource page. For details, see Creating a Resource.
    In SQL statements, you can then call the user-defined functions packaged in the JAR file.
  • DLI Queue (mandatory)
    A shared queue is selected by default. You can also select a dedicated custom queue.
    NOTE:
    • During job creation, a sub-user can only select a queue that has been allocated to the user.
    • The default queue of DLI is for trial use only and may be occupied by multiple users at a time. Therefore, you may fail to obtain resources for related operations. If the execution takes a long time or fails, try again during off-peak hours or use a self-built queue to run the job.
  • CUs (mandatory)
    A compute unit (CU) is the pricing unit of DLI. One CU consists of 1 vCPU and 4 GB of memory.
  • Concurrency (mandatory)
    Number of Flink SQL jobs that run at the same time.
    NOTE:
    The value of Concurrency must not exceed 4 x (Number of CUs – 1). For example, a job with 4 CUs supports a concurrency of at most 4 x (4 – 1) = 12.
  • Auto Restart upon Exception (optional)
    Whether to enable automatic restart. If this function is enabled, a job that becomes abnormal is automatically restarted.
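
For a Flink OpenSource SQL job, the script uses standard Apache Flink SQL connector syntax. The following minimal sketch uses Flink's built-in datagen and print connectors so that it is self-contained and runnable on a plain Flink cluster; the table names, fields, and threshold are hypothetical and only illustrate the source-transform-sink shape of such a script.

  -- Hypothetical source table producing random rows (Flink built-in datagen connector)
  CREATE TABLE orders (
    order_id STRING,
    amount DOUBLE
  ) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '1'
  );

  -- Hypothetical sink table that prints results to the TaskManager logs
  CREATE TABLE large_orders (
    order_id STRING,
    amount DOUBLE
  ) WITH (
    'connector' = 'print'
  );

  -- Continuously copy rows above a threshold from the source to the sink
  INSERT INTO large_orders
  SELECT order_id, amount
  FROM orders
  WHERE amount > 100;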

Table 4 Property parameters of a user-defined Flink job

  • Job Type (mandatory)
    Select User-defined Flink job.
  • JAR Package (mandatory)
    User-defined JAR package. Before selecting a package, upload the JAR package to an OBS bucket and create a resource on the Manage Resource page. For details, see Creating a Resource.
  • Main Class (mandatory)
    Name of the main class to be loaded from the JAR package, for example, KafkaMessageStreaming.
    • Default: Specified based on the Manifest file in the JAR package.
    • Manually assign: Enter the class name and confirm the class arguments (separate arguments with spaces).
    NOTE:
    When a class belongs to a package, the full package path must be included, for example, packagePath.KafkaMessageStreaming.
  • Main Class Parameter (mandatory)
    List of parameters for the specified class, separated by spaces.
  • DLI Queue (mandatory)
    A shared queue is selected by default. You can also select a dedicated custom queue.
    NOTE:
    • During job creation, a sub-user can only select a queue that has been allocated to the user.
    • The Spark component version of the default DLI queue is not up-to-date, so an error may be reported indicating that a table creation statement cannot be executed. In this case, you are advised to create a queue to run your tasks. To enable execution of table creation statements in the default queue, contact the customer service or technical support of the DLI service.
    • The default queue of DLI is for trial use only and may be occupied by multiple users at a time. Therefore, you may fail to obtain resources for related operations. If the execution takes a long time or fails, try again during off-peak hours or use a self-built queue to run the job.
  • Image (optional)
    Select a custom image and the corresponding version. This parameter is available only when the DLI queue is a containerized queue.
    A custom image is a feature of DLI. You can use the Spark or Flink base images provided by DLI, pack the required dependencies (files, JAR packages, or software) into an image using a Dockerfile, generate a custom image, and release it to SWR. Then, select the generated image to run the job.
    Custom images can change the container runtime environments of Spark and Flink jobs. You can embed private capabilities into custom images to enhance the functions and performance of jobs. For details, see Overview of Custom Images.
  • CUs (mandatory)
    A compute unit (CU) is the pricing unit of DLI. One CU consists of 1 vCPU and 4 GB of memory.
  • Number of management node CUs (mandatory)
    Number of CUs on a management unit. The value ranges from 1 to 4. The default value is 1.
  • Concurrency (mandatory)
    Number of Flink jobs that run at the same time.
    NOTE:
    The value of Concurrency must not exceed 4 x (Number of CUs – 1).
  • Auto Restart upon Exception (optional)
    Whether to enable automatic restart. If this function is enabled, a job that becomes abnormal is automatically restarted.
  • Job Name (mandatory)
    Name of the DLI Flink job. It must contain 1 to 64 characters, including only letters, numbers, and underscores (_). The default value is the same as the node name.
  • Job name must be prefixed with workspace name (optional)
    Whether to add the workspace name as a prefix to the name of the created job.
  • Node Name (mandatory)
    Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

Table 5 Advanced parameters

  • Max. Node Execution Duration (mandatory)
    Execution timeout interval for the node. If retry is configured and the node execution is not complete within this interval, the node will be executed again.
  • Retry upon Failure (mandatory)
    Whether to re-execute the node if it fails to be executed. Possible values:
    • Yes: The node will be re-executed, and the following parameters must be configured:
      • Retry upon Timeout
      • Maximum Retries
      • Retry Interval (seconds)
    • No: The node will not be re-executed. This is the default setting.
    NOTE:
    If both retry and a timeout duration are configured for a job node, the system allows you to retry the node when its execution times out.
    If a node is not re-executed when it fails upon timeout, you can go to the Default Configuration page to modify this policy.
    Retry upon Timeout is displayed only when Retry upon Failure is set to Yes.
  • Policy for Handling Subsequent Nodes If the Current Node Fails (mandatory)
    Operation that will be performed if the node fails to be executed. Possible values:
    • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.
    • End the current job execution plan: stops running the current job. The job instance status is Failed.
    • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
    • Suspend the current job execution plan: If the current job instance is abnormal, the subsequent nodes of this node and the subsequent job instances that depend on the current job are set to the waiting state.
  • Enable Dry Run (optional)
    If you select this option, the node will not be executed, and a success message will be returned.