Flink Job Sample
Overview
You can perform secondary development based on Flink and Spark APIs to build your own JAR packages and submit them to DLI queues to interact with MRS Kafka, HBase, Hive, HDFS, DWS, and DCS.
This section describes the interaction between user-defined jobs and MRS. For more sample code, see the DLI sample code.
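For illustration, the following is a minimal sketch of such a user-defined Flink Jar entry point: a DataStream application that reads string messages from an MRS Kafka topic and prints them. The class name KafkaMessageStreaming is taken from the example later in this section; the package name, broker address, topic, and consumer group are placeholders, and the sketch assumes the Flink Kafka connector is bundled in your JAR.

package com.example.dli;  // hypothetical package; adjust to your own

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaMessageStreaming {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka connection properties; replace the broker address and group ID with your own.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka-broker-host:9092");
        props.setProperty("group.id", "dli-flink-sample");

        // Consume string messages from an MRS Kafka topic (topic name is a placeholder).
        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("sample_topic", new SimpleStringSchema(), props);

        DataStream<String> stream = env.addSource(consumer);
        stream.print();

        env.execute("KafkaMessageStreaming");
    }
}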
Environment Preparations
- Log in to the MRS management console and create an MRS cluster. Enable Kerberos authentication and select the Kafka, HBase, and HDFS components. Open the required UDP/TCP ports in the security group rules.
- Log in to the MRS Manager page.
- Create a machine-machine account and ensure that it has the hdfs_admin and hbase_admin permissions. Download the user authentication credentials, including the user.keytab and krb5.conf files.
- Click Services, download the client, and click OK.
- Download the configuration files from the MRS node, including hbase-site.xml and hiveclient.properties. (A sketch showing how these files and the downloaded credentials can be used in your application follows this preparation list.)
- Create a dedicated DLI queue.
To create a dedicated DLI queue, select Pay-per-use for Billing Mode and click Dedicated Resource Mode for Queue Type when purchasing a queue. For details, see Creating a Queue in the Data Lake Insight User Guide.
- Ensure that a datasource connection has been set up between the DLI dedicated queue and the MRS cluster, and security group rules have been configured based on the site requirements.
For details about how to create an enhanced datasource connection, see Enhanced Datasource Connections in the Data Lake Insight User Guide.
For details about how to configure security group rules, see Security Group in the Virtual Private Cloud User Guide.
- Obtain the IP address and domain name mapping of all nodes in the MRS cluster, and configure the host mapping in the host information of the DLI cross-source connection.
For details about how to add an IP-domain mapping, see Modifying the Host Information in the Data Lake Insight User Guide.
If the Kafka server listens on the port using a hostname, you need to add the mapping between the hostname and IP address of the Kafka Broker node to the DLI queue. Contact the Kafka service deployment personnel to obtain the hostname and IP address of the Kafka Broker node.
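For reference, the following is a minimal sketch of how a user-defined job might use the downloaded credentials and configuration files (user.keytab, krb5.conf, and hbase-site.xml) to access HBase in the security-enabled MRS cluster. The principal name and file paths are placeholders, and the sketch assumes the Hadoop and HBase client libraries are bundled with the application.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

public class MrsHBaseLogin {
    // Returns an HBase connection after a Kerberos keytab login; paths and principal are placeholders.
    public static Connection connect() throws Exception {
        // Point the JVM at the krb5.conf file downloaded from MRS Manager.
        System.setProperty("java.security.krb5.conf", "/path/to/krb5.conf");

        // Load the hbase-site.xml file downloaded from the MRS cluster.
        Configuration conf = HBaseConfiguration.create();
        conf.addResource(new Path("/path/to/hbase-site.xml"));
        conf.set("hadoop.security.authentication", "kerberos");

        // Log in with the machine-machine account's keytab.
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("your_user@HADOOP.COM", "/path/to/user.keytab");

        return ConnectionFactory.createConnection(conf);
    }
}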
Prerequisites
- Ensure that a dedicated queue has been created.
- When running a Flink Jar job, you need to build the secondary development application code into a JAR package and upload it to the created OBS bucket. In addition, create a package on the Data Management > Package Management page of DLI. For details, see Creating a Package.
DLI does not support the download function. If you need to modify the uploaded data file, edit the local file and upload it again.
- Flink dependencies are built into the DLI server, and security hardening has been performed based on the open-source community version. To prevent dependency compatibility issues or log output and dump issues, exclude the following files when packaging:
- Built-in dependencies (or set the package dependency scope to provided in Maven or sbt)
- Log configuration files (for example, log4j.properties or logback.xml)
- JAR packages for log output implementation (for example, log4j)
How to Use
Create and submit a Flink Jar job. For details, see Creating a Flink Jar Job in the Data Lake Insight User Guide.
- In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
- In the upper right corner of the Flink Jobs page, click Create Job.
Figure 1 Creating a Flink Jar job
- Configure job parameters.
Table 1 Job parameters
- Type: Select Flink Jar.
- Name: Job name, which contains 1 to 57 characters and consists of only letters, digits, hyphens (-), and underscores (_).
  NOTE: The job name must be globally unique.
- Description: Description of the job, which contains 0 to 512 characters.
- Tags: Tags are used to identify cloud resources. A tag pair includes a tag key and a tag value. If you want to use the same tag to identify multiple cloud resources, that is, to select the same tag from the drop-down list box for all services, you are advised to create predefined tags on Tag Management Service (TMS). For details, see the Tag Management Service User Guide.
  NOTE:
  - A maximum of 10 tags can be added.
  - Only one tag value can be added to a tag key.
  - Tag key: Enter a tag key name in the text box. A tag key contains a maximum of 36 characters. The first and last characters cannot be spaces, and the following characters are not allowed: =*,<>\|/ If there are predefined tags, you can select one from the drop-down list box.
  - Tag value: Enter a tag value in the text box. A tag value contains a maximum of 43 characters. The first and last characters cannot be spaces, and the following characters are not allowed: =*,<>\|/ If there are predefined tags, you can select one from the drop-down list box.
- Click OK to enter the job editing page.
- Select a queue. Note the following:
  - A Flink Jar job can run only on a pre-created dedicated queue (a general-purpose queue created in dedicated resource mode).
  - If no dedicated queue is available in the Queue drop-down list, create one and bind it to the current user.
Figure 2 Selecting a queue
- Upload the JAR package.
Figure 3 Uploading the JAR package
Table 2 Parameter description
- Application: User-defined package. Before selecting a package, upload the corresponding JAR package to the OBS bucket and create a package on the Data Management > Package Management page. For details, see Creating a Package.
- Main Class: Name of the main class of the JAR package to be loaded, for example, KafkaMessageStreaming.
  - Default: The value is determined by the Manifest file in the JAR package.
  - Manually assign: You must enter the class name and confirm the class arguments (separated by spaces).
  NOTE: When a class belongs to a package, the fully qualified name must be used, for example, packagePath.KafkaMessageStreaming.
- Class Arguments: List of arguments of the specified class, separated by spaces.
- JAR Package Dependencies: User-defined dependencies. Before selecting a package, upload the corresponding JAR package to the OBS bucket and create a JAR package on the Data Management > Package Management page. For details, see Creating a Package.
- Other Dependencies: User-defined dependency files. Before selecting a file, upload the corresponding file to the OBS bucket and create a package of any type on the Data Management > Package Management page. For details, see Creating a Package.
  You can access a dependency file in the application with ClassName.class.getClassLoader().getResource("userData/fileName"), where fileName is the name of the file to be accessed and ClassName is the name of the class that accesses it. A sketch of this pattern follows this table.
- Job Type: This parameter is displayed when the queue type is CCE.
  - Basic
  - Image: Select the image name and image version. Images are managed on the Software Repository for Container (SWR) console. For details, see the Software Repository for Container User Guide.
- Flink Version: Before selecting a Flink version, select the queue to which it belongs. Currently, versions 1.10 and 1.11 are supported.
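As a reference for the Main Class and Other Dependencies settings above, the following minimal sketch shows a main class declared inside a package (so Main Class would be set to its fully qualified name) that reads a dependency file through the classloader pattern described in the table. The package name, class name, and file name userData/config.properties are placeholders.

package packagePath;  // hypothetical package; Main Class would be packagePath.DependencyFileReader

import java.io.InputStream;
import java.net.URL;
import java.util.Properties;

public class DependencyFileReader {
    public static void main(String[] args) throws Exception {
        // Files uploaded as Other Dependencies are accessed via the classloader under userData/.
        URL resource = DependencyFileReader.class
                .getClassLoader()
                .getResource("userData/config.properties");

        Properties props = new Properties();
        try (InputStream in = resource.openStream()) {
            props.load(in);
        }
        System.out.println("Loaded " + props.size() + " properties from the dependency file");
    }
}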
- Configure job parameters.
Figure 4 Configuring parameters
Table 3 Parameter description
- CUs: One CU has one vCPU and 4 GB of memory. The number of CUs ranges from 2 to 400.
- Job Manager CUs: Number of CUs allocated to the job management unit (Job Manager). The value ranges from 1 to 4. The default value is 1.
- Max Concurrent Jobs: Maximum number of parallel operators in a job.
  NOTE:
  - The value must be less than or equal to four times the number of compute CUs, that is, (total CUs - Job Manager CUs) × 4. For example, with 4 CUs and 1 Job Manager CU, the maximum is (4 - 1) × 4 = 12.
  - You are advised to set this parameter to a value greater than the parallelism configured in the code. Otherwise, job submission may fail.
- Task Manager Configuration: Whether to set Task Manager resource parameters. If this option is selected, you need to set the following parameters:
  - CU(s) per TM: Number of CUs occupied by each Task Manager.
  - Slot(s) per TM: Number of slots contained in each Task Manager.
- Save Job Log: Whether to save the job running logs to OBS. If this option is selected, you need to set the following parameter:
  - OBS Bucket: Select an OBS bucket to store user job logs. If the selected OBS bucket is not authorized, click Authorize.
- Alarm Generation upon Job Exception: Whether to report job exceptions, for example, abnormal job running or exceptions due to an insufficient balance, to users via SMS or email. If this option is selected, you need to set the following parameter:
  - SMN Topic: Select a user-defined SMN topic. For details about how to customize SMN topics, see "Creating a Topic" in the Simple Message Notification User Guide.
- Auto Restart upon Exception: Whether to enable automatic restart. If this function is enabled, jobs will be automatically restarted and restored when exceptions occur. If this option is selected, you need to set the following parameters:
  - Max. Retry Attempts: Maximum number of retry attempts upon an exception. The unit is times per hour.
    - Unlimited: The number of retries is unlimited.
    - Limited: The number of retries is user-defined.
  - Restore Job from Checkpoint: Restore the job from the saved checkpoint. If you select this option, you also need to set Checkpoint Path.
    - Checkpoint Path: Select the checkpoint save path. The value must be the same as the checkpoint path set in the application package (see the sketch after this table). Note that the checkpoint path of each job must be unique; otherwise, the checkpoint cannot be obtained.
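As a reference for Restore Job from Checkpoint, the following minimal sketch shows how an application might enable checkpointing and set a checkpoint path matching the Checkpoint Path configured on the console. The OBS path is a placeholder, and the sketch assumes the FsStateBackend API available in Flink 1.10 and 1.11.

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static StreamExecutionEnvironment createEnvironment() {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);

        // The path must match the Checkpoint Path set on the console and must be unique per job.
        env.setStateBackend(new FsStateBackend("obs://your-bucket/flink/checkpoints/your-job"));

        return env;
    }
}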
- Click Save on the upper right of the page.
- Click Start on the upper right side of the page. On the displayed Start Flink Job page, confirm the job specifications and the price, and click Start Now to start the job.
After the job is started, the system automatically switches to the Flink Jobs page, and the created job is displayed in the job list. You can view the job status in the Status column. After a job is successfully submitted, its status changes from Submitting to Running. After the execution is complete, the status Completed is displayed.
If the job status is Submission failed or Running exception, the job failed to be submitted or to run. In this case, you can hover over the status icon in the Status column of the job list to view the error details, and click the copy icon to copy them. After rectifying the fault based on the provided information, resubmit the job.
Other buttons are as follows:
Save As: Save the created job as a new job.