
Creating a Flink OpenSource SQL Job

Updated on 2025-02-14 GMT+08:00

This section describes how to create a Flink OpenSource SQL job.

DLI Flink OpenSource SQL jobs are fully compatible with the syntax of Flink provided by the community. In addition, Redis and GaussDB(DWS) data source types are provided on top of the community connectors. For the syntax and constraints of Flink SQL DDL, DML, and functions, see Table API & SQL.
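
For example, a sink table for one of the added data source types might be declared as follows. This is a minimal sketch only: the connector name and options shown are illustrative, and the exact options depend on the Flink version, so check the syntax reference before use.

  -- Minimal sketch of a sink table for the Redis data source type added by DLI.
  -- Option names are illustrative; see the Flink OpenSource SQL syntax
  -- reference for the options supported by your Flink version.
  CREATE TABLE redis_sink (
    user_id STRING,
    login_count BIGINT,
    PRIMARY KEY (user_id) NOT ENFORCED
  ) WITH (
    'connector' = 'redis',
    'host' = '192.168.0.100',  -- illustrative Redis address
    'port' = '6379'
  );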

Prerequisites

Precautions

Before creating a job and submitting a task, you are advised to enable CTS to record DLI operations for queries, audits, and backtracking. For the DLI operations that can be recorded by CTS, see Using CTS to Audit DLI.

For details about how to enable CTS and view trace details, see Getting Started with Cloud Trace Service.

Creating a Flink OpenSource SQL Job

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. In the upper right corner of the Flink Jobs page, click Create Job.

    Figure 1 Creating a Flink OpenSource SQL job

  3. Set job parameters.

    Table 1 Job parameters

    Type

    Set Type to Flink OpenSource SQL. You will start the job by writing SQL statements.

    Name

    Job name. Enter 1 to 57 characters. Only letters, numbers, hyphens (-), and underscores (_) are allowed.

    NOTE:

    The job name must be globally unique.

    Description

    Description of a job. It can contain a maximum of 512 characters.

    Template Name

    You can select a sample template or a custom job template. For details about templates, see Managing Flink Job Templates.

    Tags

    Tags used to identify cloud resources. A tag includes a tag key and a tag value. If you want to use the same tag to identify multiple cloud resources, that is, to select the same tag from the drop-down list for all services, you are advised to create predefined tags in Tag Management Service (TMS).

    If your organization has configured tag policies for DLI, add tags to resources based on the policies. If a tag does not comply with the tag policies, resource creation may fail. Contact your organization administrator to learn more about tag policies.

    For details, see Tag Management Service User Guide.

    NOTE:
    • A maximum of 20 tags can be added.
    • Only one tag value can be added to a tag key.
    • The key name in each resource must be unique.
    • Tag key: Enter a tag key name in the text box.
      NOTE:

      A tag key can contain a maximum of 128 characters. Only letters, numbers, spaces, and special characters (_.:+-@) are allowed, but it cannot start or end with a space or start with _sys_.

    • Tag value: Enter a tag value in the text box.
      NOTE:

      A tag value can contain a maximum of 255 characters. Only letters, numbers, spaces, and special characters (_.:+-@) are allowed.

  4. Click OK to enter the editing page.
  5. Edit an OpenSource SQL job.

    Enter detailed SQL statements in the statement editing area. For details about SQL statements, see the Data Lake Insight Flink OpenSource SQL Syntax Reference.
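
    For example, the following is a minimal sketch of a complete job that uses the community datagen and print connectors, so it runs without any external service. All table and field names are illustrative; a real job would typically read from and write to sources such as Kafka, DIS, or GaussDB(DWS).

      -- Source table: generates random test rows.
      CREATE TABLE orders_source (
        order_id STRING,
        amount DOUBLE
      ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '1'
      );

      -- Sink table: prints each row to the TaskManager logs.
      CREATE TABLE orders_print (
        order_id STRING,
        amount DOUBLE
      ) WITH (
        'connector' = 'print'
      );

      -- DML: copies the generated rows to the sink.
      INSERT INTO orders_print
      SELECT order_id, amount
      FROM orders_source;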

  6. Click Check Semantics.

    • You can start a job only after the semantic verification is successful.
    • If verification is successful, the message "The SQL semantic verification is complete. No error." will be displayed.
    • If verification fails, a red "X" mark will be displayed in front of each SQL statement that produced an error. You can move the cursor to the "X" mark to view error details and change the SQL statement as prompted.
    NOTE:

    Flink 1.15 does not support syntax verification.

  7. Set job running parameters.

    Figure 2 Setting running parameters for Flink OpenSource SQL
    Table 2 Running parameters

    Queue

    Select a queue to run the job.

    UDF Jar

    UDF JAR file, which contains UDFs that can be called in subsequent jobs.

    You can manage UDF JAR files in either of the following ways:

    • Upload packages to OBS: Upload JAR files to an OBS bucket in advance and select the corresponding OBS path.
    • Upload packages to DLI: Upload JAR files to an OBS bucket in advance and create a package on the Data Management > Package Management page of the DLI management console. For details, see Creating a DLI Package.

    For Flink 1.15 or later, only OBS packages can be selected when creating jobs, and DLI packages are not supported.
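
    After a UDF JAR file is associated with the job, a function it contains can typically be registered and called using standard Flink SQL statements, as in the following sketch (the class and function names are illustrative):

      -- Register a UDF class from the uploaded JAR file (illustrative class name).
      CREATE FUNCTION mask_id AS 'com.example.udf.MaskId';
      -- Call the registered function like a built-in function.
      INSERT INTO orders_print SELECT mask_id(order_id), amount FROM orders_source;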

    Flink Version

    Flink version used for job running. Flink versions have varying feature support.

    If you choose Flink 1.15, make sure to configure in the job the agency information for the cloud services that DLI is allowed to access.

    For the syntax of Flink 1.15, see Flink OpenSource SQL 1.15 Usage and Flink OpenSource SQL 1.15 Syntax.

    For the syntax of Flink 1.12, see Flink OpenSource SQL 1.12 Syntax.

    NOTE:

    You are advised not to run a job on different Flink versions over a long period of time.

    • Doing so can lead to code incompatibility, which can negatively impact job execution efficiency.
    • Doing so may result in job execution failures due to dependency conflicts, as jobs rely on specific versions of libraries or components.

    Agency

    If you choose Flink 1.15 to execute your job, you can create a custom agency to allow DLI to access other services.

    CUs

    Sum of the compute unit CUs and job manager CUs used by the job. CU is also the billing unit of DLI. One CU equals 1 vCPU and 4 GB of memory.

    The value is the number of CUs required for job running and cannot exceed the number of CUs in the bound queue.

    NOTE:

    When Task Manager Config is selected, elastic resource pool queue management is optimized: after you set Slot(s) per TM, CUs are automatically adjusted to match the actual CUs.

    CUs = Actual number of CUs = max[Job Manager CPUs + Task Manager CPUs, (Job Manager Memory + Task Manager Memory)/4]

    • Job Manager CPUs + Task Manager CPUs = Actual TMs x CU(s) per TM + Job Manager CUs.
    • Job Manager Memory + Task Manager Memory = Actual TMs x Memory per TM + Job Manager Memory
    • If Slot(s) per TM is set, then: Actual TMs = Parallelism/Slot(s) per TM.
    • If Slot(s) per TM is not set, then: Actual TMs = (CUs – Job Manager CUs)/CU(s) per TM.
    • If Memory per TM and Job Manager Memory in the optimization parameters are not set, then: Memory per TM = CU(s) per TM x 4. Job Manager Memory = Job Manager CUs x 4.
    • The parallelism degree of Spark resources is jointly determined by the number of Executors and the number of Executor CPU cores.
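
    For example, with the illustrative values Job Manager CUs = 1, CU(s) per TM = 1, Parallelism = 4, and Slot(s) per TM = 2: Actual TMs = 4/2 = 2, Job Manager CPUs + Task Manager CPUs = 2 x 1 + 1 = 3, and Job Manager Memory + Task Manager Memory = 2 x 4 + 1 x 4 = 12 GB, so the actual number of CUs is max[3, 12/4] = 3.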

    Job Manager CUs

    Number of CUs of the management unit.

    Parallelism

    Number of tasks concurrently executed by each operator in a job.

    NOTE:

    This value cannot be greater than four times the compute units (number of CUs minus the number of job manager CUs).
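
    For example, with the illustrative values of 3 CUs in total and 1 job manager CU, the parallelism can be at most 4 x (3 - 1) = 8.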

    Task Manager Config

    Whether to set Task Manager resource parameters.

    • If selected, you need to set the following parameters:
      • CU(s) per TM: Number of resources occupied by each Task Manager.
      • Slot(s) per TM: Number of slots contained in each Task Manager.
    • If not selected, the system automatically uses the default values.
      • CU(s) per TM: The default value is 1.
      • Slot(s) per TM: The default value is (Parallelism x CU(s) per TM)/(CUs – Job Manager CUs).
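
      For example, with the illustrative values Parallelism = 4, CU(s) per TM = 1, CUs = 3, and Job Manager CUs = 1, the default Slot(s) per TM is (4 x 1)/(3 - 1) = 2.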

    OBS Bucket

    OBS bucket to store job logs and checkpoint information. If the OBS bucket you selected is unauthorized, click Authorize.

    Save Job Log

    Whether job running logs are saved to OBS. The logs are saved in the following path: Bucket name/jobs/logs/Directory starting with the job ID.

    CAUTION:

    You are advised to configure this parameter. Otherwise, no run log is generated after the job is executed. If the job fails, the run log cannot be obtained for fault locating.

    If this option is selected, you need to set the following parameters:

    OBS Bucket: Select an OBS bucket to store job logs. If the OBS bucket you selected is unauthorized, click Authorize.
    NOTE:

    If Enable Checkpointing and Save Job Log are both selected, you only need to authorize OBS once.

    Alarm on Job Exception

    Whether to notify users of any job exceptions, such as running exceptions or arrears, via SMS or email.

    If this option is selected, you need to set the following parameters:

    SMN Topic

    Select a custom SMN topic. For how to create a custom SMN topic, see Creating a Topic.

    Enable Checkpointing

    Whether to enable job snapshots. If this function is enabled, jobs can be restored based on the checkpoints.

    If this option is selected, you need to set the following parameters:
    • Checkpoint Interval: interval for creating checkpoints, in seconds. The value ranges from 1 to 999999, and the default value is 30.
    • Checkpoint Mode can be set to either of the following values:
      • At least once: Events are processed at least once.
      • Exactly once: Events are processed only once.

    If you select Enable Checkpointing, you also need to set OBS Bucket.

    OBS Bucket: Select an OBS bucket to store your checkpoints. If the OBS bucket you selected is unauthorized, click Authorize.

    The checkpoint path is Bucket name/jobs/checkpoint/Directory starting with the job ID.
    NOTE:

    If Enable Checkpointing and Save Job Log are both selected, you only need to authorize OBS once.

    Auto Restart upon Exception

    Whether automatic restart is enabled. If enabled, jobs will be automatically restarted and restored when exceptions occur.

    If this option is selected, you need to set the following parameters:

    • Max. Retry Attempts: maximum number of retries upon an exception. The unit is times/hour.
      • Unlimited: The number of retries is unlimited.
      • Limited: The number of retries is user-defined.
    • Restore Job from Checkpoint: This parameter is available only when Enable Checkpointing is selected.

    Idle State Retention Time

    Clears the intermediate states of operators such as GroupBy, RegularJoin, Rank, and Deduplicate that have not been updated within the maximum retention time. The default value is 1 hour.

    Dirty Data Policy

    Policy for processing dirty data. The following policies are supported: Ignore, Trigger a job exception, and Save.

    If you set this field to Save, the Dirty Data Dump Address must be set. Click the address box to select the OBS path for storing dirty data.

    This parameter is available only when a DIS data source is used.

  8. (Optional) Set the runtime configuration as required. For details about the related parameters, see How Do I Optimize Performance of a Flink Job?

    Figure 3 Runtime configuration

  9. Click Save.
  10. Click Start. On the displayed Start Flink Jobs page, confirm the job specifications and the price, and click Start Now to start the job.

    After the job is started, the system automatically switches to the Flink Jobs page, and the created job is displayed in the job list. You can view the job status in the Status column. Once a job is successfully submitted, its status changes from Submitting to Running. After the execution is complete, the status changes to Completed.

    If the job status is Submission failed or Running exception, the job failed to be submitted or failed to run. In this case, you can hover over the status icon in the Status column of the job list to view the error details and click the copy button to copy them. Rectify the fault based on the error information and submit the job again.

    NOTE:

    Other buttons are as follows:

    • Save As: Save the created job as a new job.
    • Static Stream Graph: Provide the static concurrency estimation function and stream graph display function. See Figure 5.
    • Simplified Stream Graph: Display the data processing flow from the source to the sink. See Figure 4.
    • Format: Format the SQL statements in the editing box.
    • Set as Template: Set the created SQL statements as a job template.
    • Theme Settings: Set the theme related parameters, including Font Size, Wrap, and Page Style.
    • Help: Redirect to the Help Center to view the SQL syntax for stream jobs.

Simplified Stream Graph

On the OpenSource SQL job editing page, click Simplified Stream Graph.

NOTE:

Simplified stream graph viewing is only supported in Flink 1.12 and Flink 1.10.

Figure 4 Simplified stream graph

Static Stream Graph

On the OpenSource SQL job editing page, click Static Stream Graph.

NOTE:
  • Static stream graph viewing is only supported in Flink 1.12 and Flink 1.10.
  • If you use a UDF in a Flink OpenSource SQL job, it is not possible to generate a static stream graph.

The Static Stream Graph page also allows you to:

  • Estimate concurrencies. Click Estimate Concurrencies on the Static Stream Graph page to estimate concurrencies. Click Restore Initial Value to restore the initial value after concurrency estimation.
  • Zoom in or out on the page.
  • Expand or merge operator chains.
  • Edit Parallelism, Output rate, and Rate factor.
    • Parallelism: indicates the number of concurrent tasks.
    • Output rate: indicates the data traffic of an operator. The unit is piece/s.
    • Rate factor: indicates the retention rate after data is processed by operators. Rate factor = Data output volume of an operator/Data input volume of the operator (Unit: %)
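      For example, if an operator receives 1,000 data records per second and outputs 800 of them, its rate factor is 800/1,000 = 80% (illustrative values).
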
Figure 5 Static stream graph
