DLI Flink Job
Function
The DLI Flink Job node creates and starts a DLI Flink job, or checks whether a DLI job is running, to analyze streaming big data in real time.
After a DLI Flink streaming job is submitted to DLI, the node execution succeeds once the job enters the running state. If periodic scheduling is configured for the node, the system periodically checks whether the Flink job is still in the running state; as long as it is, the node execution succeeds.
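The running-state check described above can be sketched as a simple polling loop. `get_status` here is a hypothetical stand-in for a DLI job-status lookup, not a real DLI API call:

```python
import time

def node_succeeds(get_status, timeout_s=60.0, poll_s=5.0):
    """Return True once the DLI Flink job reports RUNNING.

    get_status: hypothetical callable returning the job's state string;
    in practice this would query the DLI service for the job status.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "RUNNING":
            return True   # job is running, so the node execution succeeds
        time.sleep(poll_s)
    return False          # job never reached the running state; node fails

# Stub that reports RUNNING on the second poll:
states = iter(["SUBMITTED", "RUNNING"])
print(node_succeeds(lambda: next(states), timeout_s=10, poll_s=0))
```

Under periodic scheduling, each scheduled run would repeat this check against the same job.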
Parameters
For details about how to configure the parameters of a DLI Flink job, see the following tables:
- Property parameters: Table 1 (existing Flink job), Table 2 (Flink SQL job), Table 3 (Flink OpenSource SQL job), or Table 4 (user-defined Flink job), depending on the selected job type
- Advanced parameters: Table 5
**Table 1** Parameters of an existing Flink job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Job Type | Yes | Select **Existing Flink job**. |
| Job Name | Yes | Name of an existing DLI Flink job. |
| Node Name | Yes | Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
**Table 2** Parameters of a Flink SQL job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Node Name | Yes | Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
| Job Type | Yes | Select **Flink SQL job**. You start the job by compiling SQL statements. |
| Job Name | Yes | Name of the DLI Flink job. It must consist of 1 to 64 characters and contain only letters, numbers, and underscores (_). The default value is the same as the node name. |
| Job name must be prefixed with workspace name | No | Whether to add the workspace name as a prefix to the name of the created job. |
| Script Path | Yes | Path to the Flink SQL script to be executed. If the script has not been created yet, create and develop it by referring to Creating a Script and Developing an SQL Script. |
| Script Parameter | No | If the associated Flink SQL script uses parameters, the parameter names are displayed here. Set each parameter's value in the text box next to its name. A parameter value can be an EL expression. If the parameters of the associated script change, click the refresh button to update them. |
| UDF Jar | No | This parameter is valid only when a dedicated queue is selected for DLI Queue. Before selecting a UDF JAR resource package, upload the JAR package to an OBS bucket and create a resource on the Manage Resource page. For details, see Creating a Resource. User-defined functions packed into the JAR can then be called in SQL. |
| DLI Queue | Yes | A shared queue is selected by default. You can also select a dedicated custom queue. |
| CUs | Yes | The compute unit (CU) is the pricing unit of DLI. One CU consists of 1 vCPU and 4 GB of memory. |
| Concurrency | Yes | Number of Flink SQL jobs that run at the same time. NOTE: The value of Concurrency must not exceed 4 x (Number of CUs – 1); for example, with 4 CUs the maximum Concurrency is 12. |
| Auto Restart upon Exception | No | Whether to enable automatic restart. If enabled, a job that becomes abnormal is automatically restarted. |
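The Concurrency bound stated in the table above is a fixed formula and can be computed directly; a minimal sketch:

```python
def max_concurrency(cus: int) -> int:
    """Maximum allowed Concurrency for a job: 4 x (number of CUs - 1)."""
    return 4 * (cus - 1)

# With 4 CUs, Concurrency may be at most 12; with 2 CUs, at most 4.
print(max_concurrency(4), max_concurrency(2))
```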
**Table 3** Parameters of a Flink OpenSource SQL job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Node Name | Yes | Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
| Job Type | Yes | Select **Flink OpenSource SQL job**. You start the job by compiling SQL statements. |
| Job Name | Yes | Name of the DLI Flink job. It must consist of 1 to 64 characters and contain only letters, numbers, and underscores (_). The default value is the same as the node name. |
| Job name must be prefixed with workspace name | No | Whether to add the workspace name as a prefix to the name of the created job. |
| Script Path | Yes | Path to the Flink SQL script to be executed. If the script has not been created yet, create and develop it by referring to Creating a Script and Developing an SQL Script. |
| Script Parameter | No | If the associated Flink SQL script uses parameters, the parameter names are displayed here. Set each parameter's value in the text box next to its name. A parameter value can be an EL expression. If the parameters of the associated script change, click the refresh button to update them. |
| UDF Jar | No | This parameter is valid only when a dedicated queue is selected for DLI Queue. Before selecting a UDF JAR resource package, upload the JAR package to an OBS bucket and create a resource on the Manage Resource page. For details, see Creating a Resource. User-defined functions packed into the JAR can then be called in SQL. |
| DLI Queue | Yes | A shared queue is selected by default. You can also select a dedicated custom queue. |
| CUs | Yes | The compute unit (CU) is the pricing unit of DLI. One CU consists of 1 vCPU and 4 GB of memory. |
| Concurrency | Yes | Number of Flink SQL jobs that run at the same time. NOTE: The value of Concurrency must not exceed 4 x (Number of CUs – 1). |
| Auto Restart upon Exception | No | Whether to enable automatic restart. If enabled, a job that becomes abnormal is automatically restarted. |
**Table 4** Parameters of a user-defined Flink job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Job Type | Yes | Select **User-defined Flink job**. |
| JAR Package | Yes | User-defined JAR package. Before selecting a package, upload it to an OBS bucket and create a resource on the Manage Resource page. For details, see Creating a Resource. |
| Main Class | Yes | Name of the main class in the JAR package to be loaded, for example, KafkaMessageStreaming. |
| Main Class Parameter | Yes | List of parameters for the specified main class, separated by spaces. |
| DLI Queue | Yes | A shared queue is selected by default. You can also select a dedicated custom queue. |
| Image | No | Select a custom image and the corresponding version. This parameter is available only when the DLI queue is a containerized queue. Custom images are a DLI feature: starting from the Spark or Flink base images provided by DLI, you can pack the required dependencies (files, JAR packages, or software) into an image with a Dockerfile, generate a custom image, and release it to SWR, then select the generated image to run the job. Custom images can change the container runtime environment of Spark and Flink jobs, and you can embed private capabilities into them to enhance job functionality and performance. For details, see Overview of Custom Images. |
| CUs | Yes | The compute unit (CU) is the pricing unit of DLI. One CU consists of 1 vCPU and 4 GB of memory. |
| Number of management node CUs | Yes | Number of CUs on a management unit. The value ranges from 1 to 4. The default value is 1. |
| Concurrency | Yes | Number of Flink jobs that run at the same time. NOTE: The value of Concurrency must not exceed 4 x (Number of CUs – 1). |
| Auto Restart upon Exception | No | Whether to enable automatic restart. If enabled, a job that becomes abnormal is automatically restarted. |
| Job Name | Yes | Name of the DLI Flink job. It must consist of 1 to 64 characters and contain only letters, numbers, and underscores (_). The default value is the same as the node name. |
| Job name must be prefixed with workspace name | No | Whether to add the workspace name as a prefix to the name of the created job. |
| Node Name | Yes | Name of the node. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
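Main Class Parameter is entered as a single space-separated string, which the job receives as the argument list of its main class (`args[]` in Java). The sketch below shows the split; the flag names and values are made up for illustration:

```python
# Hypothetical value typed into the Main Class Parameter field:
main_class_parameter = "--topic orders --window-seconds 60"

# The main class receives it as a space-separated argument list:
args = main_class_parameter.split()
print(args)  # ['--topic', 'orders', '--window-seconds', '60']
```

Because splitting is on spaces, individual parameter values must not themselves contain spaces.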
**Table 5** Advanced parameters

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Max. Node Execution Duration | Yes | Execution timeout interval for the node. If retry is configured and the execution does not complete within this interval, the node is executed again. |
| Retry upon Failure | Yes | Whether to re-execute the node if it fails to be executed. |
| Policy for Handling Subsequent Nodes If the Current Node Fails | Yes | Operation performed on subsequent nodes if this node fails to be executed. |
| Enable Dry Run | No | If you select this option, the node is not actually executed, and a success message is returned. |
| Task Groups | No | Select a task group. Task groups let you control, in a fine-grained manner, the maximum number of concurrently running nodes in scenarios where a job contains many nodes, a data patching task is ongoing, or a job is rerunning. |
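The interaction of Retry upon Failure with a failing node body can be sketched as below. `run` is a hypothetical callable standing in for the node's execution; the per-attempt timeout enforced by Max. Node Execution Duration is omitted for brevity:

```python
def execute_with_retry(run, max_retries=1):
    """Re-execute a failing node up to max_retries additional times."""
    for attempt in range(max_retries + 1):
        try:
            return run()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; the node is marked failed

# Stub node body that fails on the first call, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient failure")
    return "success"

print(execute_with_retry(flaky, max_retries=2))
```

Only after the final retry fails does the policy for handling subsequent nodes take effect.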