Creating a Flink SQL Job

Updated on 2022-12-07 GMT+08:00

This section describes how to create a Flink SQL job and how to write Flink SQL statements for it in the DLI SQL editor. You can use Flink SQL to develop jobs that meet your service requirements, and using SQL statements simplifies logic implementation.

Prerequisites

  • You have prepared the data input and data output channels. For details, see Preparing Flink Job Data.
  • If a Flink SQL job accesses external data sources, such as OpenTSDB, HBase, Kafka, DWS, RDS, CSS, CloudTable, DCS Redis, or DDS MongoDB, you need to create a cross-source connection that connects the queue running the job to the external data source.

Creating a Flink SQL Job

  1. In the left navigation pane of the DLI management console, choose Job Management > Flink Jobs. The Flink Jobs page is displayed.
  2. In the upper right corner of the Flink Jobs page, click Create Job.
  3. Specify job parameters.

    Table 1 Job configuration information

    • Type: Set Type to Flink SQL. You will then write SQL statements to start the job.
    • Name: Name of the job. Enter 1 to 57 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.
      NOTE: The job name must be globally unique.
    • Description: Description of the job. It can contain up to 512 characters.
    • Template Name: You can select a sample template or a custom job template. For details about templates, see Flink Template Management.

  4. Click OK to enter the Edit page.
  5. Edit a Flink SQL job.

    Enter the SQL statements in the statement editing area. For details about SQL syntax, see the Data Lake Insight SQL Syntax Reference.
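
    The following is a minimal sketch of what a job's statements might look like, using the open-source Flink SQL datagen and print connectors. All table and column names here are illustrative, and the dialect and connectors actually supported depend on your DLI version; treat the Data Lake Insight SQL Syntax Reference as authoritative.

      -- Illustrative source table that generates sample rows
      -- (assumes the open-source Flink "datagen" connector is available).
      CREATE TABLE orders_source (
        order_id STRING,
        amount DOUBLE
      ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '1'
      );

      -- Illustrative sink table that prints rows to the task logs
      -- (assumes the open-source Flink "print" connector is available).
      CREATE TABLE orders_sink (
        order_id STRING,
        amount DOUBLE
      ) WITH (
        'connector' = 'print'
      );

      -- The job logic: copy qualifying rows from the source to the sink.
      INSERT INTO orders_sink
      SELECT order_id, amount
      FROM orders_source
      WHERE amount > 0;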

  6. Click Check Semantics.

    • You can Debug or Start a job only after the semantic verification is successful.
    • If verification is successful, the message "The SQL semantic verification is complete. No error." will be displayed.
    • If verification fails, a red "X" mark will be displayed in front of each SQL statement that produced an error. You can move the cursor to the "X" mark to view error details and change the SQL statement as prompted.

  7. Set job running parameters.

    Table 2 Running parameters

    • Queue: A shared queue is selected by default. You can select a custom queue as needed.
      NOTE:
      • During job creation, a sub-user can only select a queue that has been allocated to that user.
      • If the remaining capacity of the selected queue cannot meet the job requirements, the system automatically scales up the capacity. When the queue is idle, the system automatically scales it in.
    • UDF Jar: Required if you selected a custom queue. You can provide a custom UDF JAR file. Before selecting a JAR file, upload the JAR package to an OBS bucket and choose Data Management > Package Management to create a package. For details, see Creating a Package. In SQL, you can then call the user-defined functions packaged in the JAR file (see the sketch after this table).
    • CUs: Sum of the compute CUs and job manager CUs of DLI. One CU equals one vCPU and 4 GB of memory. The configured number of CUs is the number required to run the job and cannot exceed the number of CUs in the bound queue.
    • Job Manager CUs: Number of CUs of the management unit.
    • Parallelism: Number of tasks that run the Flink SQL job concurrently. Properly increasing the parallelism improves the overall computing capability of the job, but the switchover overhead caused by additional threads must be considered.
      NOTE:
      • This value cannot be greater than four times the number of compute units (the number of CUs minus the number of job manager CUs). For example, for a job with 4 CUs, of which 1 is a job manager CU, the maximum parallelism is (4 - 1) x 4 = 12.
      • The parallelism set on this page has a lower priority than the parallelism set in the code.
    • Task Manager Configuration: Whether to set Task Manager resource parameters. If this option is selected, set the following parameters:
      • CU(s) per TM: Number of CUs occupied by each Task Manager.
      • Slot(s) per TM: Number of slots contained in each Task Manager.
    • OBS Bucket: OBS bucket used to store job logs and checkpoint information. If the selected OBS bucket is not authorized, click Authorize.
      NOTE: If both Enable Checkpointing and Save Job Log are selected, you only need to authorize OBS once.
    • Save Job Log: Whether to save the job run logs to OBS. The logs are saved in Bucket name/jobs/logs/Directory starting with the job ID. To go to this path, go to the job list, click the job name, and click the OBS link on the Run Log tab page.
      CAUTION: You are advised to select this option. Otherwise, no run log is generated after the job is executed, and if the job is abnormal, the run log cannot be obtained for fault locating.
      If this option is selected, set the following parameter:
      • OBS Bucket: Select an OBS bucket to store the job logs. If the selected OBS bucket is not authorized, click Authorize.
        NOTE: If both Enable Checkpointing and Save Job Log are selected, you only need to authorize OBS once.
    • Alarm Generation upon Job Exception: Whether to notify users via SMS or email of job exceptions, for example, abnormal job running or exceptions due to an insufficient balance. If this option is selected, set the following parameter:
      • SMN Topic: Select a user-defined SMN topic. For details about how to customize SMN topics, see Creating a Topic in the Simple Message Notification User Guide.
    • Enable Checkpointing: Whether to enable job snapshots. If this function is enabled, jobs can be restored from checkpoints. If this option is selected, set the following parameters:
      • Checkpoint Interval: Interval for creating checkpoints. The value ranges from 1 to 999999, and the default value is 30.
      • Checkpoint Mode: Either of the following values:
        • At least once: Events are processed at least once.
        • Exactly once: Events are processed only once.
      • OBS Bucket: Select an OBS bucket to store your checkpoints. If the selected OBS bucket is not authorized, click Authorize. The checkpoint path is Bucket name/jobs/checkpoint/Directory starting with the job ID.
        NOTE: If both Enable Checkpointing and Save Job Log are selected, you only need to authorize OBS once.
    • Auto Restart upon Exception: Whether to enable automatic restart. If this function is enabled, a job that becomes abnormal is automatically restarted. If this option is selected, set the following parameters:
      • Max. Retry Attempts: Maximum number of retries upon an exception, in times per hour. Unlimited means the number of retries is unlimited; Limited means the number of retries is user-defined.
      • Restore Job from Checkpoint: This parameter is available only when Enable Checkpointing is selected.
    • Idle State Retention Time: How long the state of a key is retained without being updated before it is removed in GroupBy or Window. The default value is 1 hour.
    • Dirty Data Policy: Policy for processing dirty data. The following policies are supported: Ignore, Trigger a job exception, and Save.
      NOTE: Save means the dirty data is stored to the OBS bucket selected above.
    • Dirty Data Dump Address: Set this parameter when Dirty Data Policy is set to Save. Click the address box to select an OBS path for storing dirty data.
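
    As a hedged illustration of the UDF Jar parameter: in open-source Flink SQL, a function packaged in a JAR file is typically registered with CREATE FUNCTION and then called like a built-in function. The function and class names below are hypothetical, and the registration syntax supported by DLI Flink SQL jobs may differ; see the Data Lake Insight SQL Syntax Reference.

      -- Register a scalar function from a class assumed to be packaged
      -- in the uploaded UDF JAR (the class name is hypothetical).
      CREATE FUNCTION mask_card AS 'com.example.udf.MaskCardNumber';

      -- Call the registered function like a built-in function.
      INSERT INTO orders_sink
      SELECT mask_card(order_id), amount
      FROM orders_source;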

  8. (Optional) Debug parameters as required. The job debugging function is used only to verify the SQL logic and does not involve data write operations. For details, see Debugging a Flink Job.
  9. (Optional) Set the runtime configuration as required. Set Custom Configuration to User-defined.
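
    If Custom Configuration is set to User-defined, configuration is generally supplied as key-value pairs. The key below is a standard open-source Flink option and is shown for illustration only; whether a given key takes effect depends on the Flink version used by DLI.

      # Illustrative custom runtime configuration (one key=value pair).
      # table.exec.state.ttl is an open-source Flink option that bounds
      # how long idle state is kept (a plain number is read as milliseconds).
      table.exec.state.ttl=3600000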
  10. Click Save.
  11. Click Start. On the displayed Start Flink Jobs page, confirm the job specifications, and click Start Now to start the job.

    After the job is started, the system automatically switches to the Flink Jobs page, and the created job is displayed in the job list. You can view the job status in the Status column. After a job is successfully submitted, the job status will change from Submitting to Running. After the execution is complete, the message Completed is displayed.

    If the job status is Submission failed or Running exception, the job submission failed or the job did not execute successfully. In this case, you can move the cursor over the status icon in the Status column of the job list to view the error details, and click the copy button to copy the error information. After handling the fault based on the provided information, resubmit the job.

    NOTE:

    Other available buttons are as follows:

    • Save As: Save the created job as a new job.
    • Debug: Perform job debugging. For details, see Debugging a Flink Job.
    • Format: Format the SQL statements in the editing box.
    • Set as Template: Set the created SQL statements as a job template.
    • Theme Settings: Set theme-related parameters, including Font Size, Wrap, and Page Style.
