Updated on 2025-08-15 GMT+08:00

Creating a Flink Jar Job

A Flink Jar job involves developing a custom application Jar package based on Flink's capabilities and submitting it to a DLI queue for execution.

To create a Flink Jar job, you need to write and build your own application JAR package. This option is suitable for users who require stream data processing and are experienced in developing custom Flink applications.

This section describes how to create a Flink Jar job on the DLI management console.

Prerequisites

  • When you use a Flink Jar job to access external data sources, such as OpenTSDB, HBase, Kafka, GaussDB(DWS), RDS, CSS, CloudTable, DCS Redis, or DDS, you need to create a datasource connection to connect the job's running queue to the external data source.
  • To run a Flink Jar job, you need to build your custom application code into a JAR file and upload it to the OBS bucket that has already been created.
  • Flink dependencies are built into the DLI server, and security hardening has been performed based on the open-source community version. To avoid dependency compatibility issues and log output or dump issues, exclude the following files when packaging:
    • Built-in dependencies (or set the dependency scope to provided in Maven or SBT)
    • Log configuration files (for example, log4j.properties or logback.xml)
    • JAR packages that implement log output (for example, log4j)
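The provided-scope setting mentioned above can be expressed in a Maven pom.xml as in the following sketch. The artifact ID and version are examples only; match them to the dependencies your job actually uses:

```xml
<!-- Example only: mark Flink dependencies as provided so they are
     excluded from the job JAR and the DLI built-in versions are used. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.15.0</version>
    <scope>provided</scope>
</dependency>
```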

Precautions

Before creating and submitting jobs, you are advised to enable CTS to record DLI operations for queries, audits, and tracking. To view the DLI operations that can be recorded by CTS, see Using CTS to Audit DLI.

For how to enable CTS and view trace details, see Cloud Trace Service Getting Started.

Creating a Flink Jar Job

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. In the upper right corner of the Flink Jobs page, click Create Job.

    Figure 1 Creating a Flink Jar job

  3. Specify job parameters.

    Table 1 Job configuration information

    Parameter

    Description

    Type

    Select Flink Jar.

    Name

    Job name. The value can contain up to 57 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

    NOTE:

    The job name must be globally unique.

    Description

    Job description. It can contain up to 512 characters.

    Tags

    Tags used to identify cloud resources. A tag includes a tag key and a tag value. If you want to use the same tag to identify multiple cloud resources, that is, select the same tag from the drop-down list box for all services, you are advised to create predefined tags in Tag Management Service (TMS).

    If your organization has configured tag policies for DLI, add tags to resources based on the policies. If a tag does not comply with the tag policies, resource creation may fail. Contact your organization administrator to learn more about tag policies.

    For details, see Tag Management Service User Guide.

    NOTE:
    • A maximum of 20 tags can be added.
    • Only one tag value can be added to a tag key.
    • The key name in each resource must be unique.
    • Tag key: Enter a tag key name in the text box.
      NOTE:

      A tag key can contain a maximum of 128 characters. Only letters, digits, spaces, and special characters (_.:+-@) are allowed, but the value cannot start or end with a space or start with _sys_.

    • Tag value: Enter a tag value in the text box.
      NOTE:

      A tag value can contain a maximum of 255 characters. Only letters, digits, spaces, and special characters (_.:+-@) are allowed.

  4. Click OK to enter the editing page.
  5. Select a queue.
  6. Configure Flink Jar job parameters.

    Figure 2 Configuring Flink Jar Job parameters
    Table 2 Parameter descriptions

    Parameter

    Description

    Queue

    Select a queue where you want to run your job.

    Flink Version

    Flink version used for job running. Flink versions have varying feature support.

    If you use Flink 1.15, make sure to configure in the job the agency information for the cloud services that DLI is allowed to access.

    For the syntax of Flink 1.15, see Flink OpenSource SQL 1.15 Usage and Flink OpenSource SQL 1.15 Syntax.

    For the syntax of Flink 1.12, see Flink OpenSource SQL 1.12 Syntax.

    NOTE:

    You are advised not to run jobs on different Flink versions over a long period of time.

    • Doing so can lead to code incompatibility, which can negatively impact job execution efficiency.
    • Doing so may result in job execution failures due to dependency conflicts, because jobs rely on specific versions of libraries or components.

    Application

    Select a Jar job package.

    You can manage JAR files in either of the following ways:

    • Upload packages to OBS: Upload JAR files to an OBS bucket in advance and select the corresponding OBS path.
    • Upload packages to DLI: Upload JAR files to an OBS bucket in advance and create a package on the Data Management > Package Management page of the DLI management console. For details, see Creating a DLI Package.

    For Flink 1.15 or later, you can only select packages from OBS; packages managed in DLI are not supported.

    Main Class

    Name of the main class in the JAR file to be loaded, for example, KafkaMessageStreaming.

    • Default: Specified based on the Manifest file in the JAR package.
    • Manually assign: You must enter the class name and confirm the class arguments (separated by spaces).
    NOTE:

    When a class belongs to a package, the main class path must contain the complete package path, for example, packagePath.KafkaMessageStreaming.

    Class Arguments

    List of arguments of a specified class. The arguments are separated by spaces.

    Flink parameters support replacement of non-sensitive global variables. For example, if you add the global variable windowsize in Global Configuration > Global Variables, you can add the -windowsSize {{windowsize}} parameter for the Flink Jar job.
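As a minimal sketch of how Main Class and Class Arguments fit together: the class name KafkaMessageStreaming and the -windowsSize argument come from the examples above, but the parsing logic below is hypothetical and only illustrates reading space-separated class arguments. When the class is declared in a package, Main Class must be its fully qualified name (for example, com.example.KafkaMessageStreaming).

```java
// Hypothetical main class for a Flink Jar job. On the console, Main Class
// would be set to its fully qualified name if the class is in a package.
public class KafkaMessageStreaming {

    // Parses a space-separated argument list such as "-windowsSize 10".
    public static int parseWindowSize(String[] args, int defaultSize) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-windowsSize".equals(args[i])) {
                return Integer.parseInt(args[i + 1]);
            }
        }
        return defaultSize;
    }

    public static void main(String[] args) {
        int windowSize = parseWindowSize(args, 60); // 60 is an arbitrary default
        System.out.println("window size: " + windowSize);
        // The actual Flink pipeline (StreamExecutionEnvironment, sources,
        // sinks, and so on) would follow here.
    }
}
```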

    JAR Package Dependencies

    Select a user-defined package dependency. The dependent program packages are stored in the classpath directory of the cluster.

    You can manage JAR files in either of the following ways:

    • Upload packages to OBS: Upload JAR files to an OBS bucket in advance and select the corresponding OBS path.
    • Upload packages to DLI: Upload JAR files to an OBS bucket in advance and create a package on the Data Management > Package Management page of the DLI management console. For details, see Creating a DLI Package.

    For Flink 1.15 or later, you can only select packages from OBS; packages managed in DLI are not supported.

    When creating the JAR file for a Flink Jar job, do not include the existing built-in dependency packages, to avoid package information conflicts.

    For details about built-in dependency packages, see DLI Built-in Dependencies.

    Other Dependencies

    User-defined dependency files, which must be referenced in the job code.

    You can manage dependency files in either of the following ways:

    • Upload packages to OBS: Upload dependency files to an OBS bucket in advance and select the corresponding OBS path.
    • Upload packages to DLI: Upload dependency files to an OBS bucket in advance and create a package on the Data Management > Package Management page of the DLI management console. For details, see Creating a DLI Package.

    For Flink 1.15 or later, you can only select packages from OBS; packages managed in DLI are not supported.

    You can add the following code to the application to access the corresponding dependency file. In the code, fileName indicates the name of the file to be accessed, and ClassName indicates the class that accesses the file.

    ClassName.class.getClassLoader().getResource("userData/fileName")
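Expanding on the one-liner above, the following is a hedged sketch of reading such a dependency file. The class name DependencyFileReader and the file name are hypothetical; only the getResource("userData/fileName") lookup comes from the documentation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical helper showing how a Flink Jar job might read a dependency
// file shipped under userData/ on the classpath.
public class DependencyFileReader {

    // Returns the file content, or null if the resource cannot be found or read.
    public static String readUserData(String fileName) {
        URL url = DependencyFileReader.class.getClassLoader()
                .getResource("userData/" + fileName);
        if (url == null) {
            return null;
        }
        try (InputStream in = url.openStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        String content = readUserData("config.txt"); // config.txt is a placeholder
        System.out.println(content == null ? "resource not found" : content);
    }
}
```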

    Job Type

    Image type used for creating the Flink Jar job. It specifies the image type of the DLI container cluster.

    Agency

    If you choose Flink 1.15 or later to execute your job, you can create a custom agency to allow DLI to access other services.

    For how to create a custom agency, see Creating a Custom DLI Agency.

    Runtime Configuration

    User-defined optimization parameters. The parameter format is key=value.

    Flink optimization parameters support replacement of non-sensitive global variables. For example, if you create the global variable phase in Global Configuration > Global Variables, you can add the optimization parameter table.optimizer.agg-phase.strategy={{phase}} to the Flink Jar job.

    Flink 1.15 supports minimal submission of Flink Jar jobs. Enable this by configuring flink.dli.job.jar.minimize-submission.enabled=true in the runtime optimization parameters.

    NOTE:

    Minimal submission means Flink only submits the necessary job dependencies, not the entire Flink environment. By setting the scope of non-Connector Flink dependencies (starting with flink-) and third-party libraries (like Hadoop, Hive, Hudi, and MySQL-CDC) to provided, you ensure these dependencies are excluded from the Jar job, avoiding conflicts with Flink core dependencies.

    • Only Flink 1.15 supports minimal submission of Flink Jar jobs.
    • For Flink-related dependencies, use the provided scope by adding <scope>provided</scope> in the dependencies, especially for non-Connector dependencies under the org.apache.flink group starting with flink-.
    • For dependencies related to Hadoop, Hive, Hudi, and MySQL-CDC, also use the provided scope by adding <scope>provided</scope> in the dependencies.
    • In the Flink source code, only methods marked with @Public or @PublicEvolving are intended for user invocation. DLI guarantees compatibility with these methods.

  7. Set compute resource specification parameters.

    Figure 3 Configuring job parameters

    DLI offers various resource configuration templates based on different Flink engine versions.

    Compared with the v1 template, the v2 template does not support setting the number of CUs. Instead, it supports setting Job Manager Memory and Task Manager Memory.

    v1: applicable to Flink 1.12, 1.13, and 1.15.

    v2: applicable to Flink 1.13, 1.15, and 1.17.

    You are advised to use the parameter settings of v2.

    For details about the parameters of v1, see Table 3.

    For details about the parameters of v2, see Table 4.

    Table 3 Parameter descriptions of v1

    Parameter

    Description

    CUs

    One CU consists of one vCPU and 4 GB of memory. The number of CUs ranges from 2 to 10000.

    NOTE:

    When Task Manager Config is selected, elastic resource pool queue management is optimized by automatically adjusting CUs to match Actual CUs after setting Slot(s) per TM.

    CUs = Actual number of CUs = max[Job Manager CPU + Task Manager CPU, (Job Manager Memory + Task Manager Memory)/4]

    • Job Manager CPU + Task Manager CPU = Actual TMs x CU(s) per TM + Job Manager CUs.
    • Job Manager Memory + Task Manager Memory = Actual TMs x Memory per TM + Job Manager Memory
    • If Slot(s) per TM is set, then: Actual TMs = Parallelism/Slot(s) per TM.
    • If Slot(s) per TM is not set, then: Actual TMs = (CUs – Job Manager CUs)/CU(s) per TM.
    • If Memory per TM and Job Manager Memory in the optimization parameters are not set, then: Memory per TM = CU(s) per TM x 4. Job Manager Memory = Job Manager CUs x 4.
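The formulas above can be sketched in code. The method names below are illustrative; the arithmetic mirrors the console parameters, assuming the default sizing where Memory per TM = CU(s) per TM x 4 and Job Manager Memory = Job Manager CUs x 4:

```java
// Sketch of the CU accounting formulas above; all names mirror the console
// parameters, and the values in main are purely illustrative.
public class CuCalculator {

    // Actual TMs when Slot(s) per TM is set: Parallelism / Slot(s) per TM.
    public static int actualTms(int parallelism, int slotsPerTm) {
        return (int) Math.ceil((double) parallelism / slotsPerTm);
    }

    // Actual CUs = max(total CPU, total memory / 4), assuming default memory
    // sizing (4 GB of memory per CU).
    public static double actualCus(int actualTms, double cusPerTm, double jobManagerCus) {
        double totalCpu = actualTms * cusPerTm + jobManagerCus;
        double totalMemory = actualTms * cusPerTm * 4 + jobManagerCus * 4;
        return Math.max(totalCpu, totalMemory / 4);
    }

    public static void main(String[] args) {
        int tms = actualTms(4, 2);                // Parallelism 4, 2 slots per TM -> 2 TMs
        System.out.println(actualCus(tms, 1, 1)); // prints 3.0
    }
}
```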

    Job Manager CUs

    Number of management unit CUs.

    Parallelism

    Number of tasks concurrently executed by each operator in a job.

    NOTE:
    • The value cannot exceed four times the number of compute units (CUs – Job Manager CUs).
    • Set this parameter to a value greater than that configured in the code to avoid job submission failures.

    Task Manager Config

    Whether TaskManager resource parameters are set.

    • If this option is selected, you need to set the following parameters:
      • CU(s) per TM: Number of resources occupied by each TaskManager.
      • Slot(s) per TM: Number of slots contained in each TaskManager.
    • If not selected, the system automatically uses the default values.
      • CU(s) per TM: The default value is 1.
      • Slot(s) per TM: The default value is (Parallelism x CU(s) per TM)/(CUs – Job Manager CUs).

    Save Job Log

    Whether to save the job running logs to the OBS bucket.

    CAUTION:

    You are advised to select this parameter. Otherwise, no run logs will be generated after the job is executed. If the job runs abnormally later, you will be unable to obtain the run logs for troubleshooting.

    If this option is selected, you need to set the following parameters:

    OBS Bucket: Select an OBS bucket to store job logs. If the selected OBS bucket is not authorized, click Authorize.

    Enable Checkpointing

    Checkpoints are used to periodically save the job state. Enabling checkpointing allows for the quick recovery of a specific job state in case of system failure.

    There are two ways to enable checkpointing in DLI:

    • Configure checkpoint-related parameters in the job code, suitable for Flink 1.15 or earlier.
    • Enable checkpointing on the Jar job configuration page of the DLI management console, suitable for Flink 1.15 or later.

    For Flink 1.15, do not configure checkpoint-related parameters both in the job code and the Jar job configuration page. The configurations in the job code have higher priority. Duplicate configurations may lead to the use of incorrect checkpoint paths during abnormal restarts, causing recovery failures or data inconsistencies.

    After selecting Enable Checkpointing, set the following parameters to enable checkpointing:
    • Checkpoint Interval: The interval between checkpoints, in seconds.
    • Checkpoint Mode: Select a mode for checkpoints. The options are:
      • At least once: Events are processed at least once.
      • Exactly once: Events are processed only once.
    CAUTION:
    • After selecting Enable Checkpointing, you need to set OBS Bucket to save the checkpoint information. The default checkpoint save path is Bucket name/jobs/checkpoint/Directory with job ID prefix.
    • Once checkpointing is enabled, do not set checkpoint parameters in the job code, as the parameters configured in the job code have a higher priority than those configured on the job configuration page. Duplicate configurations may cause the job to use incorrect checkpoint paths during abnormal restarts, resulting in recovery failures or data inconsistencies.
    • After enabling checkpointing, if Auto Restart upon Exception and Restore Job from Checkpoint are both selected, you do not need to set Checkpoint Path. The system will automatically determine the path based on the Enable Checkpointing configuration.
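For orientation, the console settings above correspond roughly to the following standard open-source Flink configuration options when checkpointing is enabled in the job code instead. The keys are real Flink options, but the values and the OBS path are placeholders; as the caution above says, configure checkpointing in one place only:

```
# Illustrative only: open-source Flink checkpointing options that correspond
# to the console settings. Configure checkpointing in ONE place only.
execution.checkpointing.interval: 60s
execution.checkpointing.mode: EXACTLY_ONCE
state.checkpoints.dir: obs://your-bucket/jobs/checkpoint/your-job-dir
```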

    Alarm on Job Exception

    Whether to notify users of any job exceptions, such as running exceptions or arrears, via SMS or email.

    If this option is selected, you need to set the following parameters:

    SMN Topic

    Select a custom SMN topic. For how to create a custom SMN topic, see Creating a Topic.

    Auto Restart upon Exception

    Whether automatic restart is enabled. If enabled, jobs will be automatically restarted and restored when exceptions occur.

    If this option is selected, you need to set the following parameters:

    • Max. Retry Attempts: maximum number of retries upon an exception. The unit is times/hour.
      • Unlimited: The number of retries is unlimited.
      • Limited: The number of retries is user-defined.
    • Restore Job from Checkpoint: Restore the job from the saved checkpoint.

      If you select this parameter, you also need to set Checkpoint Path.

      Checkpoint Path: Select a path for storing checkpoints. This path must match the path configured in the application package. Each job must have a unique checkpoint path; otherwise, you will not be able to obtain the checkpoint.

      NOTE:
      • If you also select Enable Checkpointing, you do not need to set Checkpoint Path. The system will automatically determine the path based on the Enable Checkpointing configuration.
      • If you do not select Enable Checkpointing, you need to set Checkpoint Path.
    Table 4 Parameter descriptions of v2

    Parameter

    Description

    Parallelism

    Number of tasks concurrently executed by each operator in a job.

    NOTE:
    • The minimum parallelism must not be less than 1. The default value is 1.
    • This value cannot be greater than four times the compute units (CUs – Job Manager CUs).

    Job Manager CPU

    Number of vCPUs available for JobManager.

    The default value is 1. The minimum value cannot be less than 0.5.

    Job Manager Memory

    Memory available for JobManager.

    The default value is 4 GB. The minimum size cannot be less than 2 GB (2,048 MB). The default unit is GB, which can be set to GB or MB.

    Task Manager CPU

    Number of vCPUs available for TaskManager.

    The default value is 1. The minimum value cannot be less than 0.5.

    Task Manager Memory

    Memory available for TaskManager.

    The default value is 4 GB. The minimum size cannot be less than 2 GB (2,048 MB). The default unit is GB, which can be set to GB or MB.

    Slot(s) per TM

    Number of parallel tasks that a single TaskManager can support. Each task slot can execute one task in parallel. Increasing task slots enhances the parallel processing capacity of TaskManager but also increases resource consumption.

    The number of task slots is linked to the CPU count of TaskManager since each CPU can offer one task slot.

    By default, Slot(s) per TM is set to 1. The minimum value cannot be less than 1.

    Save Job Log

    Whether to save the job running logs to the OBS bucket.

    CAUTION:

    You are advised to select this parameter. Otherwise, no run logs will be generated after the job is executed. If the job runs abnormally later, you will be unable to obtain the run logs for troubleshooting.

    If this option is selected, you need to set the following parameters:

    OBS Bucket: Select an OBS bucket to store job logs. If the selected OBS bucket is not authorized, click Authorize.

    Enable Checkpointing

    Checkpoints are used to periodically save the job state. Enabling checkpointing allows for the quick recovery of a specific job state in case of system failure.

    There are two ways to enable checkpointing in DLI:

    • Configure checkpoint-related parameters in the job code, suitable for Flink 1.15 or earlier.
    • Enable checkpointing on the Jar job configuration page of the DLI management console, suitable for Flink 1.15 or later.

    For Flink 1.15, do not configure checkpoint-related parameters both in the job code and the Jar job configuration page. The configurations in the job code have higher priority. Duplicate configurations may lead to the use of incorrect checkpoint paths during abnormal restarts, causing recovery failures or data inconsistencies.

    After selecting Enable Checkpointing, set the following parameters to enable checkpointing:
    • Checkpoint Interval: The interval between checkpoints, in seconds.
    • Checkpoint Mode: Select a mode for checkpoints. The options are:
      • At least once: Events are processed at least once.
      • Exactly once: Events are processed only once.
    CAUTION:
    • After selecting Enable Checkpointing, you need to set OBS Bucket to save the checkpoint information. The default checkpoint save path is Bucket name/jobs/checkpoint/Directory with job ID prefix.
    • Once checkpointing is enabled, do not set checkpoint parameters in the job code, as the parameters configured in the job code have a higher priority than those configured on the job configuration page. Duplicate configurations may cause the job to use incorrect checkpoint paths during abnormal restarts, resulting in recovery failures or data inconsistencies.
    • After enabling checkpointing, if Auto Restart upon Exception and Restore Job from Checkpoint are both selected, you do not need to set Checkpoint Path. The system will automatically determine the path based on the Enable Checkpointing configuration.

    OBS Bucket

    OBS bucket to store job logs and checkpoint information. If the OBS bucket you selected is unauthorized, click Authorize.

    Alarm on Job Exception

    Whether to notify users of any job exceptions, such as running exceptions or arrears, via SMS or email.

    If this option is selected, you need to set the following parameters:

    SMN Topic

    Select a custom SMN topic. For how to create a custom SMN topic, see Creating a Topic.

    Auto Restart upon Exception

    Whether automatic restart is enabled. If enabled, jobs will be automatically restarted and restored when exceptions occur.

    If this option is selected, you need to set the following parameters:

    • Max. Retry Attempts: maximum number of retries upon an exception. The unit is times/hour.
      • Unlimited: The number of retries is unlimited.
      • Limited: The number of retries is user-defined.
    • Restore Job from Checkpoint: Restore the job from the saved checkpoint.

      If you select this parameter, you also need to set Checkpoint Path.

      Checkpoint Path: Select a path for storing checkpoints. This path must match the path configured in the application package. Each job must have a unique checkpoint path; otherwise, you will not be able to obtain the checkpoint.

      NOTE:
      • If you also select Enable Checkpointing, you do not need to set Checkpoint Path. The system will automatically determine the path based on the Enable Checkpointing configuration.
      • If you do not select Enable Checkpointing, you need to set Checkpoint Path.

    You can also set compute resource specification parameters on the Runtime Configuration tab of a Flink job. Values set there take precedence over the values specified on the console.

    Table 5 describes the parameter mapping.

    In Flink 1.12, you are advised to set compute resource specification parameters using the console configuration method. Custom parameter settings may result in discrepancies in actual CU statistics.

    Table 5 Mapping between compute resource specification parameters on the console and those in the Runtime Configuration

    Runtime Configuration

    Compute Resource Specification Parameter of v1

    Compute Resource Specification Parameter of v2

    Description

    kubernetes.jobmanager.cpu

    Job Manager CUs

    Job Manager CPU

    Number of vCPUs available for JobManager.

    The default value is 1. The minimum value cannot be less than 0.5.

    kubernetes.taskmanager.cpu

    CU(s) per TM

    Task Manager CPU

    Number of vCPUs available for TaskManager.

    The default value is 1. The minimum value cannot be less than 0.5.

    jobmanager.memory.process.size

    -

    Job Manager Memory

    Memory available for JobManager.

    The default value is 4 GB. The minimum size cannot be less than 2 GB (2,048 MB). The default unit is GB, which can be set to GB or MB.

    taskmanager.memory.process.size

    -

    Task Manager Memory

    Memory available for TaskManager.

    The default value is 4 GB. The minimum size cannot be less than 2 GB (2,048 MB). The default unit is GB, which can be set to GB or MB.
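For reference, the Runtime Configuration keys listed in Table 5 are entered as key=value pairs. The values below are the defaults stated above; memory sizes use Flink's size notation (4g = 4 GB):

```
kubernetes.jobmanager.cpu=1
kubernetes.taskmanager.cpu=1
jobmanager.memory.process.size=4g
taskmanager.memory.process.size=4g
```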

  8. Click Save in the upper right of the page.
  9. Click Start in the upper right corner. On the displayed Start Flink Job page, confirm the job specifications and the price, and click Start Now to start the job. After the job is started, the system automatically switches to the Flink Jobs page, and the created job is displayed in the job list. You can view the job status in the Status column.

    • Once a job is successfully submitted, its status changes from Submitting to Running. After the execution is complete, the status changes to Completed.
    • If the job status is Submission failed or Running exception, the job failed to be submitted or failed to run. In this case, hover over the status icon in the Status column of the job list to view and copy the error details. Rectify the fault based on the error information and resubmit the job.