MRS Flink Job
Functions
The MRS Flink Job node is used to execute Flink SQL scripts and Flink jobs predefined in DataArts Factory.
For details about how to use the MRS Flink Job node, see Developing an MRS Flink Job.
Parameters
Table 1 and Table 2 describe the parameters of the MRS Flink Job node.
Parameter | Mandatory | Description
---|---|---
Node Name | Yes | Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).
Job Type | Yes | Type of the Flink job. The following options are available: Flink SQL job and Flink JAR job.
Script Path | Yes | Available when Job Type is set to Flink SQL job. Select the Flink SQL script to be executed. If no Flink SQL script is available, create and develop one by referring to Creating a Script and Developing an SQL Script.
Script Parameter | No | Available when Job Type is set to Flink SQL job. If the associated SQL script uses parameters, the parameter names are displayed. Set each parameter value in the text box next to its name. A parameter value can be an EL expression. If the parameters of the associated SQL script change, click the refresh button to refresh them.
Process Type | Yes | Processing mode of the Flink job. This parameter only specifies the processing mode; you must also set the parameters required by the selected mode.
MRS Cluster Name | Yes | Select the MRS cluster that runs the job. If no cluster is available, create one first.
Job Name | Yes | MRS job name. It can contain a maximum of 64 characters; only letters, digits, hyphens (-), and underscores (_) are allowed. The system can automatically enter a job name in *Job name_Node name* format. NOTE: The job name cannot contain Chinese characters. If the job name does not meet these requirements, the MRS job will fail to be submitted.
Job Resource Package | Yes | Select a JAR package. Before selecting a JAR package, upload it to the OBS bucket, create a resource on the Manage Resource page, and add the package to the resource management list. For details, see Creating a Resource.
Job Execution Parameter | No | Key parameters of the program that executes the Flink job. These parameters are specified by a function in the user program. Separate multiple parameters with spaces.
MRS Resource Queue | No | Select a created MRS resource queue. NOTE: Select a queue you configured in the queue permissions of DataArts Security. If multiple resource queues are set for this node, the queue selected here has the highest priority.
Program Parameter | No | Optimization parameters such as threads, memory, and vCPUs for the job, used to optimize resource usage and improve job execution performance. NOTE: This parameter is mandatory if the cluster version is MRS 1.8.7 or later than MRS 2.0.1. For details about the program parameters of MRS Flink jobs, see Running a Flink Job in the MapReduce Service User Guide.
Input Data Path | No | Path where the input data resides.
Output Data Path | No | Path where the output data resides.
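As an illustration of how Script Parameter is typically used, the sketch below shows a hypothetical parameterized Flink SQL script. The parameter name (`dt`), the table names, and the idea of binding the value to a date-based EL expression are assumptions for illustration only; they are not taken from this page.

```sql
-- Hypothetical Flink SQL script that references a script parameter named dt.
-- On the node, dt would appear under Script Parameter, and its value could be
-- a literal (e.g. 2024-01-01) or an EL expression that resolves at run time.
INSERT INTO sink_table
SELECT
  user_id,
  COUNT(*) AS event_count
FROM source_table
WHERE event_date = '${dt}'   -- replaced with the configured parameter value
GROUP BY user_id;
```

When the parameter value is an EL expression, DataArts Factory evaluates the expression first and then substitutes the result into the script before the Flink job is submitted.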
Parameter | Mandatory | Description
---|---|---
Max. Node Execution Duration | Yes | Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will be executed again.
Retry upon Failure | Yes | Whether to re-execute the node if it fails to be executed.
Policy for Handling Subsequent Nodes If the Current Node Fails | Yes | Operation that will be performed on subsequent nodes if the node fails to be executed.
Enable Dry Run | No | If you select this option, the node will not be executed, and a success message will be returned.
Task Groups | No | Select a task group. If you select a task group, you can exercise fine-grained control over the maximum number of concurrently running nodes in the group. This is useful when a job contains multiple nodes, a data patching task is ongoing, or a job is being rerun.