Updated on 2024-07-11 GMT+08:00

MRS Flink Job

Functions

The MRS Flink Job node is used to execute a Flink SQL script or a Flink job predefined in DataArts Factory.

For details about how to use the MRS Flink Job node, see Developing an MRS Flink Job.

Parameters

Table 1 and Table 2 describe the parameters of the MRS Flink Job node.

Table 1 Parameters of the MRS Flink Job node

Parameter

Mandatory

Description

Node Name

Yes

Name of a node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

Job Type

Yes

The following options are available:

  • Flink SQL job
  • User-defined Flink job

Script Path

Yes

This parameter is available when you select Flink SQL job for Job Type.

Select the Flink SQL script to be executed. If no Flink SQL script is available, create and develop one by referring to Creating a Script and Developing an SQL Script.
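For reference, a minimal Flink SQL script could look like the following sketch. All table names, fields, and connector options here are hypothetical, and the connectors actually available depend on your MRS cluster:

```sql
-- Hypothetical source table reading from a Kafka topic
-- (connector options are placeholders, not values from this document).
CREATE TABLE orders_source (
  order_id STRING,
  amount   DOUBLE,
  ts       TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'broker1:9092',
  'format' = 'json'
);

-- Hypothetical sink table that prints results to the task logs.
CREATE TABLE orders_sink (
  order_id STRING,
  amount   DOUBLE
) WITH (
  'connector' = 'print'
);

-- The continuous query that the Flink SQL job runs.
INSERT INTO orders_sink
SELECT order_id, amount FROM orders_source;
```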

Script Parameter

No

This parameter is available when you select Flink SQL job for Job Type.

If the associated SQL script uses a parameter, the parameter name is displayed. Set the parameter value in the text box next to the parameter name. The parameter value can be an EL expression.

If the parameters of the associated SQL script change, click the refresh button to update the parameter list.
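As an illustration, a parameter in the associated SQL script is referenced with the ${} placeholder syntax (the parameter name batch_date below is hypothetical):

```sql
-- The value of ${batch_date} is supplied by the node's Script Parameter setting.
SELECT * FROM sales WHERE dt = '${batch_date}';
```

On the node, batch_date could then be set to a literal such as 2024-07-01, or to an EL expression such as #{DateUtil.format(DateUtil.addDays(Job.planTime, -1), "yyyy-MM-dd")} (assumed DataArts Factory EL syntax), which resolves to the day before the scheduled run.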

Process Type

Yes

Set the mode of the Flink job.
  • Batch: The node waits for the Flink job execution to complete.
  • Stream: The node succeeds as soon as the job is started successfully. On each subsequent scheduled run, the system checks whether the job is running; if it is, the node is reported as successful.

Note that this parameter only specifies the processing mode. You must set parameters for the selected mode.

MRS Cluster Name

Yes

Select an MRS cluster.

To create an MRS cluster, use either of the following methods:
  • Go to the Clusters page of DataArts Studio and create an MRS cluster.
  • Go to the MRS console to create an MRS cluster.
    NOTE:

    Currently, MRS Flink jobs support MRS 3.2.0-LTS.1 and later versions.

Job Name

Yes

MRS job name. It can contain a maximum of 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

The system can automatically generate a job name in the Job name_Node name format.

NOTE:

The job name cannot contain Chinese characters or exceed 64 characters. If the job name does not meet these requirements, the MRS job will fail to be submitted.

Job Resource Package

Yes

Select a JAR package. Before selecting a JAR package, upload the JAR package to the OBS bucket, create a resource on the Manage Resource page, and add the JAR package to the resource management list. For details, see Creating a Resource.

Job Execution Parameter

No

Key parameters of the program that executes the Flink job. These parameters are defined by and consumed in the user program. Separate multiple parameters with spaces.
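For illustration only, if the user program read an input and an output location from the command line, this field could contain space-separated values such as the following (the OBS paths are hypothetical placeholders):

```
obs://my-bucket/flink/input obs://my-bucket/flink/output
```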

MRS Resource Queue

No

Select a created MRS resource queue.

NOTE:

Select a queue you configured in the queue permissions of DataArts Security. If you set multiple resource queues for this node, the resource queue you select here has the highest priority.

Program Parameter

No

Used to configure optimization parameters such as threads, memory, and vCPUs for the job to optimize resource usage and improve job execution performance.

NOTE:

This parameter is mandatory if the cluster version is MRS 1.8.7, or later than MRS 2.0.1.

For details about the program parameters of MRS Flink jobs, see Running a Flink Job in the MapReduce Service User Guide.
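As a sketch, the open-source Flink-on-YARN CLI exposes tuning options such as the following; whether they apply to your job depends on the MRS cluster version, so verify them against the Running a Flink Job documentation:

```
-yjm 1024    JobManager memory, in MB
-ytm 2048    TaskManager memory, in MB
-ys 2        Number of slots per TaskManager
-p 4         Default parallelism of the job
```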

Input Data Path

No

Path where the input data resides.

Output Data Path

No

Path where the output data resides.

Table 2 Advanced parameters

Parameter

Mandatory

Description

Max. Node Execution Duration

Yes

Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will be executed again.

Retry upon Failure

Yes

Whether to re-execute a node if it fails to be executed. Possible values:

  • Yes: The node will be re-executed, and the following parameters must be configured:
    • Retry upon Timeout
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node will not be re-executed. This is the default setting.
    NOTE:

If both retry and a timeout duration are configured for a job node, the node can be retried when its execution times out.

    If a node is not re-executed when it fails upon timeout, you can go to the Default Configuration page to modify this policy.

    Retry upon Timeout is displayed only when Retry upon Failure is set to Yes.

Policy for Handling Subsequent Nodes If the Current Node Fails

Yes

Operation that will be performed if the node fails to be executed. Possible values:

  • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.
  • End the current job execution plan: stops running the current job. The job instance status is Failed.
  • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
  • Suspend the current job execution plan: If the current job instance is in abnormal state, the subsequent nodes of this node and the subsequent job instances that depend on the current job are in waiting state.

Enable Dry Run

No

If you select this option, the node will not be executed, and a success message will be returned.

Task Groups

No

Select a task group. A task group lets you control the maximum number of concurrently running nodes in a fine-grained manner, which is useful when a job contains many nodes, a data patching task is ongoing, or a job is being rerun.