CS Job

Updated on 2022-02-22 GMT+08:00

Functions

The CS Job node is used to execute a predefined Cloud Stream Service (CS) job for real-time analysis of streaming data.

Context

This node enables you to start a CS job or query whether a CS job is running. If you do not select an existing real-time job, DLF creates and starts a job based on the job settings configured on the node. You can customize jobs and use DLF job parameters.

Parameters

Table 1 and Table 2 describe the parameters of the CS Job node.

Table 1 Parameters of CS Job nodes

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Job Type | Yes | Select a job type for CS. Options: Existing CS job, Flink SQL job, User-defined Flink job, and User-defined Spark job. The parameters for each type are listed below. |

Existing CS job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Streaming Job Name | Yes | Name of the CS job to be executed. To create a CS job, use either of the following methods: create one on the Data Integration page of DLF, or go to the CS console and create one there. |
| Node Name | Yes | Name of the node. Must consist of 1 to 128 characters and contain only letters, digits, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |

Flink SQL job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| SQL Script | Yes | Path to the script to be executed. If the script has not been created yet, create and develop it by referring to Creating a Script and Developing an SQL Script. |
| Script Parameter | Yes | If the associated SQL script uses a parameter, the parameter name is displayed here. Set the parameter value in the text box next to the parameter name. The value can be a built-in function or an EL expression. For details about built-in functions and EL expressions, see Expression Overview. |
| CloudStream Cluster | Yes | Name of the CS cluster. To create a CS cluster, go to the CS console. |
| SPUs | Yes | Number of SPUs to allocate. 1 SPU = 1 core and 4 GB memory. |
| Parallelism | Yes | Number of tasks that run the CS job concurrently. A value of 1 to 2 times the number of SPUs is recommended. |
| UDF JAR | No | JAR package containing custom functions. After the package is imported, SQL statements can call the custom functions it contains (see the sketch after this table). The package must first be uploaded to an OBS bucket. |
| Auto Restart upon Exception | No | If enabled, the system automatically restarts and restores CS jobs that encounter exceptions. |
| Streaming Job Name | Yes | Name of the Flink SQL job. Must consist of 1 to 57 characters and contain only letters, digits, hyphens (-), and underscores (_). |
| Node Name | Yes | Name of the node. Must consist of 1 to 128 characters and contain only letters, digits, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
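
The UDF JAR parameter above refers to custom functions callable from SQL statements. As a rough illustration only, here is a minimal sketch of such a function written against the open-source Apache Flink table API; the class name, masking logic, and the SQL name it would be called by are assumptions for illustration, and how the function is registered depends on the Flink version used by the cluster.

```java
// Minimal sketch of a custom function that a UDF JAR might contain, using the
// open-source Apache Flink table API. Package the compiled class into a JAR,
// upload it to an OBS bucket, and reference it from the node's UDF JAR parameter.
import org.apache.flink.table.functions.ScalarFunction;

public class MaskPhone extends ScalarFunction {
    // By Flink convention, a public eval method defines the function body.
    // Hypothetical usage from SQL: SELECT mask_phone(phone_number) FROM source_stream;
    public String eval(String phone) {
        if (phone == null || phone.length() < 4) {
            return phone;
        }
        // Keep the last four digits and mask the rest.
        String tail = phone.substring(phone.length() - 4);
        return "****" + tail;
    }
}
```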

User-defined Flink job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| JAR Package Path | Yes | Path to the custom JAR package. It can be selected only after the package has been uploaded to an OBS bucket. |
| Main Class | No | Name of the main class in the JAR file, for example, KafkaMessageStreaming (see the sketch after this table). If this parameter is not specified, the main class name is determined from the Manifest file in the JAR file. |
| Main Class Parameter | No | Space-separated list of parameters passed to the main class, for example, test tmp/result.txt. |
| CloudStream Cluster | Yes | Name of the CS cluster. To create a CS cluster, go to the CS console. |
| SPUs | Yes | Number of SPUs to allocate. 1 SPU = 1 core and 4 GB memory. |
| Driver SPU | Yes | Number of SPUs used by each driver node. |
| Parallelism | Yes | Number of tasks that run the CS job concurrently. A value of 1 to 2 times the number of SPUs is recommended. |
| Auto Restart upon Exception | No | If enabled, the system automatically restarts and restores CS jobs that encounter exceptions. |
| Streaming Job Name | Yes | Name of the user-defined Flink job. Must consist of 1 to 57 characters and contain only letters, digits, hyphens (-), and underscores (_). |
| Node Name | Yes | Name of the node. Must consist of 1 to 128 characters and contain only letters, digits, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
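
For orientation, below is a minimal sketch of a main class such as the KafkaMessageStreaming example named above, written against the open-source Apache Flink DataStream API. The broker address, consumer group, transformation, and sink are illustrative assumptions, and the Kafka connector class name can differ across Flink versions.

```java
// Hypothetical main class for a user-defined Flink job JAR run by a CS Job node.
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class KafkaMessageStreaming {
    public static void main(String[] args) throws Exception {
        // Values from Main Class Parameter arrive space-separated,
        // e.g. "test tmp/result.txt" gives args[0] = "test", args[1] = "tmp/result.txt".
        String topic = args[0];
        String outputPath = args[1];

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka-broker:9092"); // placeholder address
        props.setProperty("group.id", "cs-job-demo");                // placeholder group

        env.addSource(new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props))
           .map(value -> value.toUpperCase())   // trivial transformation for illustration
           .writeAsText(outputPath);            // sink path taken from the second argument

        env.execute("KafkaMessageStreaming");
    }
}
```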

User-defined Spark job

| Parameter | Mandatory | Description |
| --- | --- | --- |
| JAR Package Path | Yes | Path to the custom JAR package. It can be selected only after the package has been uploaded to an OBS bucket. |
| Main Class | No | Name of the main class in the JAR file, for example, KafkaMessageStreaming. If this parameter is not specified, the main class name is determined from the Manifest file in the JAR file. |
| Main Class Parameter | No | Space-separated list of parameters passed to the main class, for example, test tmp/result.txt (see the sketch after this table). |
| CloudStream Cluster | Yes | Name of the CS cluster. To create a CS cluster, go to the CS console. |
| SPUs | Yes | Number of SPUs to allocate. 1 SPU = 1 core and 4 GB memory. |
| Driver SPU | Yes | Number of SPUs used by each driver node. |
| Executors | Yes | Number of executor nodes. |
| Executor SPUs | Yes | Number of SPUs used by each executor node. |
| Auto Restart upon Exception | No | If enabled, the system automatically restarts and restores CS jobs that encounter exceptions. |
| Streaming Job Name | Yes | Name of the user-defined Spark job. Must consist of 1 to 57 characters and contain only letters, digits, hyphens (-), and underscores (_). |
| Node Name | Yes | Name of the node. Must consist of 1 to 128 characters and contain only letters, digits, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>). |
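
A user-defined Spark job's entry point is likewise an ordinary main class. The sketch below, a plain word count written against the open-source Spark Java API, shows how the space-separated Main Class Parameter values (for example, test tmp/result.txt) arrive as args; treating them as input and output paths is an assumption for illustration.

```java
// Hypothetical main class for a user-defined Spark job JAR. Driver and executor
// sizing come from the node's Driver SPU, Executors, and Executor SPUs settings,
// not from this code.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class WordCountJob {
    public static void main(String[] args) {
        // With Main Class Parameter "test tmp/result.txt":
        // args[0] = "test" (input path), args[1] = "tmp/result.txt" (output path).
        String inputPath = args[0];
        String outputPath = args[1];

        SparkConf conf = new SparkConf().setAppName("WordCountJob");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile(inputPath);
        lines.flatMap(line -> java.util.Arrays.asList(line.split("\\s+")).iterator())
             .mapToPair(word -> new scala.Tuple2<>(word, 1))
             .reduceByKey(Integer::sum)
             .saveAsTextFile(outputPath);   // write counts to the output path

        sc.stop();
    }
}
```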

Table 2 Advanced parameters

| Parameter | Mandatory | Description |
| --- | --- | --- |
| Node Status Polling Interval (s) | Yes | How often the system checks whether the node task is complete. The value ranges from 1 to 60 seconds. |
| Max. Node Execution Duration | Yes | Execution timeout interval for the node. If retry is configured and execution does not finish within this interval, the node is not retried and is set to the failed state. |
| Retry upon Failure | Yes | Whether to re-execute the node task if it fails. Yes: the task is re-executed, and you must also configure Maximum Retries and Retry Interval (seconds). No (default): the task is not re-executed. NOTE: If Timeout Interval is configured for the node, the node will not be executed again after the execution times out; instead, it is set to the failed state. (The sketch after this table illustrates how these settings interact.) |
| Failure Policy | Yes | Operation to perform if the node task fails. Options: End the current job execution plan; Go to the next job; Suspend the current job execution plan; Suspend execution plans of the current and subsequent nodes. |