Updated on 2022-09-23 GMT+08:00

MRS Flink Job

Functions

The MRS Flink node is used to execute predefined Flink jobs in MRS.

Parameters

Table 1 and Table 2 describe the parameters of the MRS Flink node.

Table 1 Parameters of the MRS Flink node

Parameter

Mandatory

Description

Node Name

Yes

Name of the node. The name must contain 1 to 128 characters, including only letters, digits, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).

MRS Cluster Name

Yes

Select the MRS cluster.

To create an MRS cluster, use either of the following methods:
  • On the Clusters page, create an MRS cluster.
  • Go to the MRS console to create an MRS cluster.

Job Name

Yes

Name of the MRS job. The name must contain 1 to 64 characters, including only letters, digits, and underscores (_).

NOTE:

The job name cannot contain Chinese characters or exceed 64 characters. If the job name does not meet these requirements, the MRS job will fail to be submitted.

Job Resource Package

Yes

Select a JAR package. Before selecting one, upload the JAR package to an OBS bucket, create a resource on the Manage Resource page, and add the JAR package to the resource management list. For details, see Creating a Resource.

Job Execution Parameter

No

Key parameters of the program that executes the Flink job. These parameters are consumed by the user program, typically its main method. Multiple parameters are separated by spaces.
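
As a hedged illustration of how these parameters reach the job, the sketch below shows a minimal Flink program (using the DataSet API) whose main method reads two space-separated parameters from the standard args array. The class name, parameter order, and word-count logic are assumptions for this example only, not values defined by this guide; a JAR containing such an entry class is what you select as the job resource package.

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.util.Collector;

    // Hypothetical entry class of the JAR selected as the job resource package.
    public class WordCountJob {

        public static void main(String[] args) throws Exception {
            // The space-separated values configured as Job Execution Parameter
            // arrive here as the standard args array. The parameter order below
            // (input path first, output path second) is an assumption for this sketch.
            final String inputPath = args[0];
            final String outputPath = args[1];

            final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            DataSet<Tuple2<String, Integer>> counts = env
                    .readTextFile(inputPath)
                    .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                        @Override
                        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                            // Split each line into words and emit (word, 1) pairs.
                            for (String word : line.toLowerCase().split("\\W+")) {
                                if (!word.isEmpty()) {
                                    out.collect(new Tuple2<>(word, 1));
                                }
                            }
                        }
                    })
                    .groupBy(0)   // group by the word
                    .sum(1);      // sum the per-word counts

            counts.writeAsCsv(outputPath);
            env.execute("WordCountJob");
        }
    }

With such an entry class, setting Job Execution Parameter to two space-separated paths supplies args[0] and args[1] to the program.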

Program Parameter

No

Used to configure resource-related parameters such as threads, memory, and vCPUs for the job, in order to improve resource usage and job execution performance.

NOTE:

This parameter is mandatory if the cluster version is MRS 1.8.7, or later than MRS 2.0.1.

For details on the program parameters of MRS Flink jobs, see Running a Flink Job in the MapReduce Service User Guide.
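
As a hedged example only: on YARN-based MRS clusters running Flink 1.x, resource tuning is commonly expressed through the Flink client's YARN shorthand options. The option names and example values below are assumptions; verify the options supported by your cluster's Flink version against Running a Flink Job before use.

    -yjm 1024    JobManager memory, in MB (assumed example value)
    -ytm 2048    TaskManager memory, in MB (assumed example value)
    -ys 2        Number of slots per TaskManager (assumed example value)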

Input Data Path

No

Path where the input data resides.

Output Data Path

No

Path where the output data resides.
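
As a hedged illustration, the input and output paths usually point to HDFS or OBS locations that the MRS cluster can access. The file system prefix, bucket, and directory names below are placeholders, not values defined by this guide.

    hdfs://hacluster/user/flink/input/
    obs://your-bucket/flink/output/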

Table 2 Advanced parameters

Parameter

Mandatory

Description

Max. Node Execution Duration

Yes

Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will not be retried and is set to the failed state.

Retry upon Failure

Yes

Indicates whether to re-execute a node task if its execution fails. Possible values:

  • Yes: The node task will be re-executed, and the following parameters must be configured:
    • Maximum Retries
    • Retry Interval (seconds)
  • No: The node task will not be re-executed. This is the default setting.
NOTE:

If a timeout interval (Max. Node Execution Duration) is configured for the node, the node will not be re-executed after the execution times out. Instead, the node is set to the failed state.

Failure Policy

Yes

Operation that will be performed if the node task fails to be executed. Possible values:

  • End the current job execution plan: stops running the current job. The job instance status is Failed.
  • Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
  • Suspend current job execution plan: suspends running the current job. The job instance status is Waiting.
  • Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.