Data Lake Insight
What's New
Function Overview
Product Bulletin
Product Bulletin
EOL Announcement for DLI Yearly/Monthly and Pay-per-Use Queues as Well as Queue CUH Packages
EOS Announcement for DLI Flink 1.10 and Flink 1.11
EOS Announcement for DLI Spark 2.3.2
EOS Announcement for DLI Flink 1.7
Version Support Bulletin
Lifecycle of DLI Compute Engine Versions
What's New in Flink 1.15
What's New in Flink 1.12
What's New in Spark 3.3.1
What's New in Spark 3.1.1
What's New in Spark 2.4.5
Service Overview
Infographics
What Is Data Lake Insight
Advantages
Application Scenarios
Notes and Constraints
Specifications
Security
Shared Responsibilities
Asset Identification and Management
Identity Authentication and Access Control
Data Protection Technologies
Audit and Logging
Service Resilience
Security Risk Monitoring
Recovery from Failures
Update Management
Certificates
Permissions Management
Quotas
Related Services
Basic Concepts
Billing
Billing Overview
Billing for Compute Resources
Elastic Resource Pools
Billing for Storage Resources
Billing for Scanned Data
Package Billing
Billing Examples
Renewing Subscriptions
Bills
Arrears
Billing Termination
Billing FAQ
What Billing Modes Does DLI Offer?
When Is a Data Lake Queue Idle?
How Do I Troubleshoot DLI Billing Issues?
Why Am I Still Being Billed on a Pay-per-Use Basis After I Purchased a Package?
How Do I View the Usage of a Package?
How Do I View a Job's Scanned Data Volume?
Will a Pay-per-Use Elastic Resource Pool Still Be Billed If No Jobs Are Submitted for Execution?
Do I Need to Pay Extra Fees for Purchasing a Queue Billed Based on the Scanned Data Volume?
How Is the Usage Beyond the Package Limit Billed?
What Are the Actual CUs, CU Range, and Specifications of an Elastic Resource Pool?
Change History
Getting Started
Using DLI to Submit a SQL Job to Query OBS Data
Using DLI to Submit a SQL Job to Query RDS for MySQL Data
Using DLI to Submit a Flink OpenSource SQL Job to Query RDS for MySQL Data
Using DLI to Submit a Flink Jar Job
Using DLI to Submit a Spark Jar Job
Practices
User Guide
DLI Job Development Process
Preparations
Configuring DLI Agency Permissions
Creating an IAM User and Granting Permissions
Configuring a DLI Job Bucket
Creating an Elastic Resource Pool and Queues Within It
Overview of DLI Elastic Resource Pools and Queues
Creating an Elastic Resource Pool and Creating Queues Within It
Managing Elastic Resource Pools
Viewing Basic Information
Managing Permissions
Binding a Queue
Setting CUs
Modifying Specifications
Managing Tags
Adjusting Scaling Policies for Queues in an Elastic Resource Pool
Viewing Scaling History
Allocating to an Enterprise Project
Managing Queues
Queue Permission Management
Allocating a Queue to an Enterprise Project
Creating an SMN Topic
Managing Queue Tags
Setting Queue Properties
Testing Address Connectivity
Deleting a Queue
Auto Scaling of Standard Queues
Setting a Scheduled Auto Scaling Task for a Standard Queue
Changing the CIDR Block for a Standard Queue
Example Use Case: Creating an Elastic Resource Pool and Running Jobs
Example Use Case: Configuring Scaling Policies for Queues in an Elastic Resource Pool
Creating a Non-Elastic Resource Pool Queue (Discarded and Not Recommended)
Creating Databases and Tables
Understanding Data Catalogs, Databases, and Tables
Creating a Database and Table on the DLI Console
Viewing Table Metadata
Managing Database Resources on the DLI Console
Configuring Database Permissions on the DLI Console
Deleting a Database on the DLI Console
Changing the Database Owner on the DLI Console
Managing Tags
Managing Table Resources on the DLI Console
Configuring Table Permissions on the DLI Console
Deleting a Table on the DLI Console
Changing the Table Owner on the DLI Console
Importing OBS Data to DLI
Exporting DLI Table Data to OBS
Previewing Table Data on the DLI Console
Data Migration and Transmission
Overview
Migrating Data from External Data Sources to DLI
Overview of Data Migration Scenarios
Using CDM to Migrate Data to DLI
Example Typical Scenario: Migrating Data from Hive to DLI
Example Typical Scenario: Migrating Data from Kafka to DLI
Example Typical Scenario: Migrating Data from Elasticsearch to DLI
Example Typical Scenario: Migrating Data from RDS to DLI
Example Typical Scenario: Migrating Data from GaussDB(DWS) to DLI
Configuring DLI to Read and Write Data from and to External Data Sources
Configuring DLI to Read and Write External Data Sources
Configuring the Network Connection Between DLI and Data Sources (Enhanced Datasource Connection)
Overview
Creating an Enhanced Datasource Connection
Establishing a Network Connection Between DLI and Resources in a Shared VPC
Common Development Methods for DLI Cross-Source Analysis
Using DEW to Manage Access Credentials for Data Sources
Using DLI Datasource Authentication to Manage Access Credentials for Data Sources
Overview
Creating a CSS Datasource Authentication
Creating a Kerberos Datasource Authentication
Creating a Kafka_SSL Datasource Authentication
Creating a Password Datasource Authentication
Datasource Authentication Permission Management
Managing Enhanced Datasource Connections
Enhanced Connection Permission Management
Binding an Enhanced Datasource Connection to an Elastic Resource Pool
Unbinding an Enhanced Datasource Connection from an Elastic Resource Pool
Adding a Route for an Enhanced Datasource Connection
Deleting the Route for an Enhanced Datasource Connection
Modifying Host Information in an Elastic Resource Pool
Enhanced Datasource Connection Tag Management
Deleting an Enhanced Datasource Connection
Example Typical Scenario: Connecting DLI to a Data Source on a Private Network
Example Typical Scenario: Connecting DLI to a Data Source on a Public Network
Configuring an Agency to Allow DLI to Access Other Cloud Services
DLI Agency Overview
Creating a Custom DLI Agency
Agency Permission Policies in Common Scenarios
Example of Configuring DLI Agency Permissions in Typical Scenarios
Submitting a SQL Job Using DLI
Creating and Submitting a SQL Job
Exporting SQL Job Results
Creating a SQL Inspection Rule
Setting the Priority for a SQL Job
Querying Logs for SQL Jobs
Managing SQL Jobs
Creating and Managing SQL Job Templates
Creating a SQL Job Template
Developing and Submitting a SQL Job Using a SQL Job Template
TPC-H Sample Data in the SQL Templates Preset on DLI
Submitting a Flink Job Using DLI
Flink Job Overview
Creating a Flink OpenSource SQL Job
Creating a Flink Jar Job
Configuring Flink Job Permissions
Managing Flink Jobs
Viewing Flink Job Details
Setting the Priority for a Flink Job
Enabling Dynamic Scaling for Flink Jobs
Querying Logs for Flink Jobs
Common Operations of Flink Jobs
Managing Flink Job Templates
Adding Tags to a Flink Job
Submitting a Spark Job Using DLI
Creating a Spark Job
Setting the Priority for a Spark Job
Querying Logs for Spark Jobs
Managing Spark Jobs
Managing Spark Job Templates
Using Cloud Eye to Monitor DLI
Using CTS to Audit DLI
Permissions Management
Overview
Creating a Custom Policy
DLI Resources
DLI Request Conditions
Common Operations Supported by DLI System Policy
Common DLI Management Operations
Using a Custom Image to Enhance the Job Running Environment
Managing DLI Global Variables
Managing Program Packages of Jar Jobs
Package Management Overview
Creating a Package
Configuring Package Permissions
Changing the Package Owner
Managing Package Tags
DLI Built-in Dependencies
Managing DLI Resource Quotas
Best Practices
Overview
Data Migration
Overview
Migrating Data from Hive to DLI
Migrating Data from MRS Kafka to DLI
Migrating Data from Elasticsearch to DLI
Migrating Data from RDS to DLI
Migrating Data from GaussDB(DWS) to DLI
Data Analysis
Analyzing Driving Behavior Data
Converting Data Format from CSV to Parquet
Analyzing E-commerce BI Reports
Analyzing DLI Billing Data
Using DLI Flink SQL to Analyze E-commerce Business Data in Real Time
Interconnecting Yonghong BI with DLI to Submit Spark Jobs
Preparing for Yonghong BI Interconnection
Adding a Yonghong BI Data Source
Creating a Yonghong BI Data Set
Creating a Chart in Yonghong BI
Connections
Configuring the Connection Between a DLI Queue and a Data Source in a Private Network
Configuring the Connection Between a DLI Queue and a Data Source in the Internet
Developer Guide
SQL Jobs
Using Spark SQL Jobs to Analyze OBS Data
Calling UDFs in Spark SQL Jobs
Calling UDTFs in Spark SQL Jobs
Calling UDAFs in Spark SQL Jobs
Submitting a Spark SQL Job Using JDBC
Obtaining the Server Connection Address
Downloading the JDBC Driver Package
Performing Authentication
Submitting a Job Using JDBC
JDBC API Reference
Flink OpenSource SQL Jobs
Reading Data from Kafka and Writing Data to RDS
Reading Data from Kafka and Writing Data to GaussDB(DWS)
Reading Data from Kafka and Writing Data to Elasticsearch
Reading Data from MySQL CDC and Writing Data to GaussDB(DWS)
Reading Data from PostgreSQL CDC and Writing Data to GaussDB(DWS)
Configuring High-Reliability Flink Jobs (Automatic Restart upon Exceptions)
Flink Jar Jobs
Stream Ecosystem
Flink Jar Job Examples
Writing Data to OBS Using Flink Jar
Using Flink Jar to Connect to Kafka that Uses SASL_SSL Authentication
Using Flink Jar to Read and Write Data from and to DIS
Flink Job Agencies
Flink OpenSource SQL Jobs Using DEW to Manage Access Credentials
Flink Jar Jobs Using DEW to Acquire Access Credentials for Reading and Writing Data from and to OBS
Obtaining Temporary Credentials from a Flink Job's Agency for Accessing Other Cloud Services
Spark Jar Jobs
Using Spark Jar Jobs to Read and Query OBS Data
Using the Spark Job to Access DLI Metadata
Using Spark-submit to Submit a Spark Jar Job
Submitting a Spark Jar Job Using Livy
Using Spark Jobs to Access Data Sources of Datasource Connections
Overview
Connecting to CSS
CSS Security Cluster Configuration
Scala Example Code
PySpark Example Code
Java Example Code
Connecting to GaussDB(DWS)
Scala Example Code
PySpark Example Code
Java Example Code
Connecting to HBase
MRS Configuration
Scala Example Code
PySpark Example Code
Java Example Code
Troubleshooting
Connecting to OpenTSDB
Scala Example Code
PySpark Example Code
Java Example Code
Troubleshooting
Connecting to RDS
Scala Example Code
PySpark Example Code
Java Example Code
Connecting to Redis
Scala Example Code
PySpark Example Code
Java Example Code
Troubleshooting
Connecting to Mongo
Scala Example Code
PySpark Example Code
Java Example Code
Spark Job Agencies
Spark Jar Jobs Using DEW to Acquire Access Credentials for Reading and Writing Data from and to OBS
Obtaining Temporary Credentials from a Spark Job's Agency for Accessing Other Cloud Services
Spark SQL Syntax Reference
Common Configuration Items
Spark SQL Syntax
Spark Open Source Commands
Databases
Creating a Database
Deleting a Database
Viewing a Specified Database
Viewing All Databases
Tables
Creating an OBS Table
Creating an OBS Table Using the DataSource Syntax
Creating an OBS Table Using the Hive Syntax
Creating a DLI Table
Creating a DLI Table Using the DataSource Syntax
Creating a DLI Table Using the Hive Syntax
Deleting a Table
Viewing a Table
Viewing All Tables
Viewing Table Creation Statements
Viewing Table Properties
Viewing All Columns in a Specified Table
Viewing All Partitions in a Specified Table
Viewing Table Statistics
Modifying a Table
Adding a Column
Modifying Column Comments
Enabling or Disabling Multiversion Backup
Partition-related Syntax
Adding Partition Data (Only OBS Tables Supported)
Renaming a Partition (Only OBS Tables Supported)
Deleting a Partition
Deleting Partitions by Specifying Filter Criteria (Only OBS Tables Supported)
Altering the Partition Location of a Table (Only OBS Tables Supported)
Updating Partitioned Table Data (Only OBS Tables Supported)
Updating Table Metadata with REFRESH TABLE
Backing Up and Restoring Data of Multiple Versions
Setting the Retention Period for Multiversion Backup Data
Checking Multiversion Backup Data
Restoring Multiversion Backup Data
Configuring the Trash Bin for Expired Multiversion Data
Deleting Multiversion Backup Data
Table Lifecycle Management
Specifying the Lifecycle of a Table When Creating the Table
Modifying the Lifecycle of a Table
Disabling or Restoring the Lifecycle of a Table
Data
Importing Data
Inserting Data
Clearing Data
Exporting Query Results
Datasource Connections
Creating a Datasource Connection with an HBase Table
Creating a DLI Table and Associating It with HBase
Inserting Data to an HBase Table
Querying an HBase Table
Creating a Datasource Connection with an OpenTSDB Table
Creating a DLI Table and Associating It with OpenTSDB
Inserting Data to the OpenTSDB Table
Querying an OpenTSDB Table
Creating a Datasource Connection with a DWS Table
Creating a DLI Table and Associating It with DWS
Inserting Data to the DWS Table
Querying the DWS Table
Creating a Datasource Connection with an RDS Table
Creating a DLI Table and Associating It with RDS
Inserting Data to the RDS Table
Querying the RDS Table
Creating a Datasource Connection with a CSS Table
Creating a DLI Table and Associating It with CSS
Inserting Data to the CSS Table
Querying the CSS Table
Creating a Datasource Connection with a DCS Table
Creating a DLI Table and Associating It with DCS
Inserting Data to a DCS Table
Querying the DCS Table
Creating a Datasource Connection with a DDS Table
Creating a DLI Table and Associating It with DDS
Inserting Data to the DDS Table
Querying the DDS Table
Creating a Datasource Connection with an Oracle Table
Creating a DLI Table and Associating It with Oracle
Inserting Data to an Oracle Table
Querying an Oracle Table
Views
Creating a View
Deleting a View
Viewing the Execution Plan
Data Permissions
Data Permissions List
Creating a Role
Deleting a Role
Binding a Role
Unbinding a Role
Displaying a Role
Granting a Permission
Revoking a Permission
Displaying the Granted Permissions
Displaying the Binding Relationship Between All Roles and Users
Data Types
Overview
Primitive Data Types
Complex Data Types
User-Defined Functions
Creating a Function
Deleting a Function
Displaying Function Details
Displaying All Functions
Built-In Functions
Date Functions
Overview
add_months
current_date
current_timestamp
date_add
dateadd
date_sub
date_format
datediff
datediff1
datepart
datetrunc
day/dayofmonth
from_unixtime
from_utc_timestamp
getdate
hour
isdate
last_day
lastday
minute
month
months_between
next_day
quarter
second
to_char
to_date
to_date1
to_utc_timestamp
trunc
unix_timestamp
weekday
weekofyear
year
String Functions
Overview
ascii
concat
concat_ws
char_matchcount
encode
find_in_set
get_json_object
instr
instr1
initcap
keyvalue
length
lengthb
levenshtein
locate
lower/lcase
lpad
ltrim
parse_url
printf
regexp_count
regexp_extract
replace
regexp_replace
regexp_replace1
regexp_instr
regexp_substr
repeat
reverse
rpad
rtrim
soundex
space
substr/substring
substring_index
split_part
translate
trim
upper/ucase
Mathematical Functions
Overview
abs
acos
asin
atan
bin
bround
cbrt
ceil
conv
cos
cot1
degrees
e
exp
factorial
floor
greatest
hex
least
ln
log
log10
log2
median
negative
percentile
percentile_approx
pi
pmod
positive
pow
radians
rand
round
shiftleft
shiftright
shiftrightunsigned
sign
sin
sqrt
tan
Aggregate Functions
Overview
avg
corr
count
covar_pop
covar_samp
max
min
percentile
percentile_approx
stddev_pop
stddev_samp
sum
variance/var_pop
var_samp
Window Functions
Overview
cume_dist
first_value
last_value
lag
lead
percent_rank
rank
row_number
Other Functions
Overview
decode1
javahash
max_pt
ordinal
trans_array
trunc_numeric
url_decode
url_encode
SELECT
Basic Statements
Sort
ORDER BY
SORT BY
CLUSTER BY
DISTRIBUTE BY
Grouping
Column-Based GROUP BY
Expression-Based GROUP BY
Using HAVING in GROUP BY
ROLLUP
GROUPING SETS
Joins
INNER JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
FULL OUTER JOIN
IMPLICIT JOIN
Cartesian JOIN
LEFT SEMI JOIN
NON-EQUIJOIN
Clauses
FROM
OVER
WHERE
HAVING
Multi-Layer Nested Subquery
Alias
Table Alias
Column Alias
Set Operations
UNION
INTERSECT
EXCEPT
WITH...AS
CASE...WHEN
Basic CASE Statement
CASE Query Statement
Identifiers
aggregate_func
alias
attr_expr
attr_expr_list
attrs_value_set_expr
boolean_expression
class_name
col
col_comment
col_name
col_name_list
condition
condition_list
cte_name
data_type
db_comment
db_name
else_result_expression
file_format
file_path
function_name
groupby_expression
having_condition
hdfs_path
input_expression
input_format_classname
jar_path
join_condition
non_equi_join_condition
number
num_buckets
output_format_classname
partition_col_name
partition_col_value
partition_specs
property_name
property_value
regex_expression
result_expression
row_format
select_statement
separator
serde_name
sql_containing_cte_name
sub_query
table_comment
table_name
table_properties
table_reference
view_name
view_properties
when_expression
where_condition
window_function
Operators
Relational Operators
Arithmetic Operators
Logical Operators
Flink SQL Syntax Reference
Flink OpenSource SQL 1.15 Syntax Reference
Constraints and Definitions
Supported Data Types
Reserved Keywords
Data Definition Language (DDL)
CREATE TABLE
CREATE CATALOG
CREATE DATABASE
CREATE VIEW
CREATE FUNCTION
Data Manipulation Language (DML)
Overview
Flink OpenSource SQL 1.15 Usage
Formats
Overview
Avro
Canal
Confluent Avro
CSV
Debezium
JSON
Maxwell
Ogg
Orc
Parquet
Raw
Connectors
Overview
BlackHole
ClickHouse
DataGen
Doris
Overview
Source Table
Result Table
Dimension Table
GaussDB(DWS)
Overview
GaussDB(DWS) Source Table (Not Recommended)
GaussDB(DWS) Result Table (Not Recommended)
GaussDB(DWS) Dimension Table (Not Recommended)
Elasticsearch
FileSystem
Source Table
Result Table
HBase
Source Table
Result Table
Dimension Table
Hive
Creating a Hive Catalog
Hive Dialect
Source Table
Result Table
Hive Dimension Table
Using Temporal Join to Associate the Latest Partition of a Dimension Table
Using Temporal Join to Associate the Latest Version of a Dimension Table
JDBC
Kafka
Print
Redis
Source Table
Result Table
Dimension Table
Upsert Kafka
DML Syntax
SELECT
INSERT INTO
Set Operations
Window
Window Functions
Window Aggregation
Window Top-N
Window Deduplication
Window Join
Group Aggregation
Over Aggregation
JOIN
OrderBy & Limit
Top-N
Deduplication
Functions
UDFs
Type Inference
Parameter Transfer
Built-In Functions
Comparison Functions
Logical Functions
Arithmetic Functions
String Functions
Temporal Functions
Conditional Functions
Type Conversion Functions
Collection Functions
JSON Functions
Value Construction Functions
Value Retrieval Functions
Grouping Functions
Hash Functions
Aggregate Functions
Table-Valued Functions
string_split
Flink OpenSource SQL 1.12 Syntax Reference
Constraints and Definitions
Supported Data Types
Syntax
Data Definition Language (DDL)
CREATE TABLE
CREATE VIEW
CREATE FUNCTION
Data Manipulation Language (DML)
Overview
DDL Syntax
Creating Source Tables
DataGen Source Table
GaussDB(DWS) Source Table
HBase Source Table
JDBC Source Table
Kafka Source Table
MySQL CDC Source Table
Postgres CDC Source Table
Redis Source Table
Upsert Kafka Source Table
FileSystem Source Table
Creating Result Tables
BlackHole Result Table
ClickHouse Result Table
GaussDB(DWS) Result Table
Elasticsearch Result Table
HBase Result Table
JDBC Result Table
Kafka Result Table
Print Result Table
Redis Result Table
Upsert Kafka Result Table
FileSystem Result Table
Creating Dimension Tables
GaussDB(DWS) Dimension Table
HBase Dimension Table
JDBC Dimension Table
Redis Dimension Table
Format
Avro
Canal
Confluent Avro
CSV
Debezium
JSON
Maxwell
Raw
DML Syntax
SELECT
Set Operations
Window
JOIN
OrderBy & Limit
Top-N
Deduplication
Functions
User-Defined Functions (UDFs)
Type Inference
Parameter Transfer
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Conditional Functions
Type Conversion Functions
Collection Functions
Value Construction Functions
Value Access Functions
Hash Functions
Aggregate Functions
Table-Valued Functions
string_split
Flink OpenSource SQL 1.10 Syntax Reference
Constraints and Definitions
Supported Data Types
Syntax Definition
Data Definition Language (DDL)
CREATE TABLE
CREATE VIEW
CREATE FUNCTION
Data Manipulation Language (DML)
Flink OpenSource SQL 1.10 Syntax
Data Definition Language (DDL)
Creating a Source Table
Kafka Source Table
DIS Source Table
JDBC Source Table
GaussDB(DWS) Source Table
Redis Source Table
HBase Source Table
User-Defined Source Table
Creating a Result Table
ClickHouse Result Table
Kafka Result Table
Upsert Kafka Result Table
DIS Result Table
JDBC Result Table
GaussDB(DWS) Result Table
Redis Result Table
SMN Result Table
HBase Result Table
Elasticsearch Result Table
OpenTSDB Result Table
User-defined Result Table
Print Result Table
File System Result Table
Creating a Dimension Table
JDBC Dimension Table
GaussDB(DWS) Dimension Table
HBase Dimension Table
Data Manipulation Language (DML)
SELECT
Set Operations
Window
JOIN
OrderBy & Limit
Top-N
Deduplication
Functions
User-Defined Functions
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Conditional Functions
Type Conversion Functions
Collection Functions
Value Construction Functions
Value Access Functions
Hash Functions
Aggregate Functions
Table-Valued Functions
split_cursor
string_split
Historical Version
Flink SQL Syntax (This Syntax Will Not Evolve. Use Flink OpenSource SQL Instead.)
Constraints and Definitions
Overview
Creating a Source Stream
CloudTable HBase Source Stream
DIS Source Stream
DMS Source Stream
MRS Kafka Source Stream
Open-Source Kafka Source Stream
OBS Source Stream
Creating a Sink Stream
CloudTable HBase Sink Stream
CloudTable OpenTSDB Sink Stream
MRS OpenTSDB Sink Stream
CSS Elasticsearch Sink Stream
DCS Sink Stream
DDS Sink Stream
DIS Sink Stream
DMS Sink Stream
DWS Sink Stream (JDBC Mode)
DWS Sink Stream (OBS-based Dumping)
MRS HBase Sink Stream
MRS Kafka Sink Stream
Open-Source Kafka Sink Stream
File System Sink Stream (Recommended)
OBS Sink Stream
RDS Sink Stream
SMN Sink Stream
Creating a Temporary Stream
Creating a Dimension Table
Creating a Redis Table
Creating an RDS Table
Custom Stream Ecosystem
Custom Source Stream
Custom Sink Stream
Data Manipulation Language (DML)
SELECT
Condition Expression
Window
JOIN Between Stream Data and Table Data
Data Types
User-Defined Functions
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Type Conversion Functions
Aggregate Functions
Table-Valued Functions
Other Functions
Geographical Functions
Configuring Time Models
Pattern Matching
StreamingML
Anomaly Detection
Time Series Forecasting
Real-Time Clustering
Deep Learning Model Prediction
Reserved Keywords
API Reference
Before You Start
Overview
API Calling
Endpoints
Constraints
Basic Concepts
Overview
Calling APIs
Making an API Request
Authentication
Returned Values
Getting Started
Creating and Submitting a SQL Job
Creating and Submitting a Spark Job
Creating and Submitting a Flink Job
Creating and Using a Datasource Connection
Permission-related APIs
Granting Data Access Control to Users or Projects
Checking the Permissions Granted to a User
Global Variable-related APIs
Creating a Global Variable
Deleting a Global Variable
Modifying a Global Variable
Querying All Global Variables
APIs Related to Resource Tags
Batch Adding Resource Tags
Batch Deleting Resource Tags
Querying the Number of Resource Instances
Listing Resource Instances
Querying Tags of a Specified Resource Type
Querying Tags of a Specified Resource Instance
APIs Related to Enhanced Datasource Connections
Creating an Enhanced Datasource Connection
Deleting an Enhanced Datasource Connection
Listing Enhanced Datasource Connections
Querying an Enhanced Datasource Connection
Binding a Queue
Unbinding a Queue
Modifying Host Information
Querying Authorization of an Enhanced Datasource Connection
Creating a Route
Deleting a Route
Datasource Authentication-related APIs
Creating Datasource Authentication
Listing Datasource Authentications
Updating Datasource Authentication
Deleting Datasource Authentication
APIs Related to Elastic Resource Pools
Creating an Elastic Resource Pool
Querying All Elastic Resource Pools
Deleting an Elastic Resource Pool
Modifying Elastic Resource Pool Information
Querying All Queues in an Elastic Resource Pool
Associating a Queue with an Elastic Resource Pool
Viewing Scaling History of an Elastic Resource Pool
Modifying the Scaling Policy of a Queue Associated with an Elastic Resource Pool
Queue-related APIs (Recommended)
Creating a Queue
Deleting a Queue
Querying All Queues
Viewing Details of a Queue
Restarting, Scaling Out, and Scaling In Queues
Creating an Address Connectivity Test Request
Querying Connectivity Test Details of a Specified Address
SQL Job-related APIs
Submitting a SQL Job (Recommended)
Canceling a Job (Recommended)
Querying All Jobs
Previewing SQL Job Query Results
Exporting Query Results
Querying Job Status
Querying Job Details
Checking SQL Syntax
Querying the Job Execution Progress
SQL Template-related APIs
Saving a SQL Template
Checking All SQL Templates
Updating a SQL Template
Deleting a SQL Template
Flink Job-related APIs
Creating a SQL Job
Updating a SQL Job
Creating a Flink Jar Job
Updating a Flink Jar Job
Running Jobs in Batches
Listing Jobs
Querying Job Details
Querying the Job Execution Plan
Stopping Jobs in Batches
Deleting a Job
Deleting Jobs in Batches
Exporting a Flink Job
Importing a Flink Job
Generating a Static Stream Graph for a Flink SQL Job
APIs Related to Flink Job Templates
Creating a Template
Updating a Template
Deleting a Template
Listing Templates
Flink Job Management APIs
Triggering Savepoints for Flink Jobs
Importing Savepoints for Flink Jobs
Spark Job-related APIs
Creating a Batch Processing Job
Listing Batch Processing Jobs
Querying Batch Job Details
Querying a Batch Job Status
Canceling a Batch Processing Job
APIs Related to Spark Job Templates
Creating a Job Template
Listing Job Templates
Modifying a Job Template
Obtaining a Job Template
Permissions Policies and Supported Actions
Out-of-Date APIs
Agency-related APIs (Discarded)
Obtaining DLI Agency Information (Discarded)
Creating a DLI Agency (Discarded)
Package Group-related APIs (Discarded)
Uploading a Package Group (Discarded)
Listing Package Groups (Discarded)
Uploading a JAR Package Group (Discarded)
Uploading a PyFile Package Group (Discarded)
Uploading a File Package Group (Discarded)
Querying Resource Packages in a Group (Discarded)
Deleting a Resource Package from a Group (Discarded)
Changing the Owner of a Group or Resource Package (Discarded)
APIs Related to Spark Batch Processing (Discarded)
Querying Batch Job Logs (Discarded)
SQL Job-related APIs (Discarded)
Importing Data (Discarded)
Exporting Data (Discarded)
Resource-related APIs (Discarded)
Database-related APIs (Discarded)
Creating a Database (Discarded)
Deleting a Database (Discarded)
Querying All Databases (Discarded)
Modifying a Database Owner (Discarded)
Table-related APIs (Discarded)
Creating a Table (Discarded)
Deleting a Table (Discarded)
Querying All Tables (Discarded)
Describing Table Information (Discarded)
Previewing Table Content (Discarded)
Listing Partitions (Discarded)
Permission-related APIs (Discarded)
Granting Queue Permissions to a User (Discarded)
Querying Queue Users (Discarded)
Granting Data Permission to Users (Discarded)
Querying Database Users (Discarded)
Querying Table Users (Discarded)
Querying a User's Table Permissions (Discarded)
Queue-related APIs (Discarded)
Creating a Scheduled CU Change (Discarded)
Viewing a Scheduled CU Change (Discarded)
Deleting Scheduled CU Changes in Batches (Discarded)
Deleting a Scheduled CU Change (Discarded)
Modifying a Scheduled CU Change (Discarded)
Datasource Authentication-related APIs (Discarded)
Creating Datasource Authentication (Discarded)
Listing Datasource Authentication Information (Discarded)
Updating Datasource Authentication (Discarded)
Deleting Datasource Authentication (Discarded)
APIs Related to Enhanced Datasource Connections (Discarded)
Creating a Route (Discarded)
Deleting a Route (Discarded)
Template-related APIs (Discarded)
Querying All Sample SQL Templates (Discarded)
Table-related APIs (Discarded)
Querying All Tables (Discarded)
APIs Related to SQL Jobs (Discarded)
Submitting a SQL Job (Discarded)
Canceling a Job (Discarded)
Querying the Job Execution Result (Method 1) (Discarded)
Querying the Job Execution Result (Method 2) (Discarded)
APIs Related to Data Upload (Discarded)
Authenticating a Created Data Uploading Job (Discarded)
Cluster-related APIs
Creating a Cluster (Discarded)
Deleting a Cluster (Discarded)
Querying Information of a Specified Cluster (Discarded)
Querying All Cluster Information (Discarded)
APIs Related to Flink Jobs (Discarded)
Querying Job Monitoring Information (Discarded)
Granting OBS Permissions to DLI
Public Parameters
Status Codes
Error Codes
Obtaining a Project ID
Obtaining an Account ID
SDK Reference
Overview
DLI SDK V3 (Recommended)
DLI SDK
DLI SDK Function Matrix
Mapping Between DLI SDKs and APIs
Java SDK
Overview
Configuring the Java SDK Environment
Preparing a Java Development Environment
Obtaining and Installing the SDK
Initializing the DLI Client
OBS Authorization
Queue-Related SDKs
Resource-Related SDKs
SDKs Related to SQL Jobs
Database-Related SDKs
Table-Related SDKs
Job-Related SDKs
SDKs Related to Flink Jobs
SDKs Related to Spark Jobs
SDKs Related to Flink Job Templates
Python SDK
Overview
Configuring the Python SDK Environment
Preparing a Python Development Environment
Obtaining and Installing SDKs
Initializing the DLI Client
Queue-Related SDKs
Resource-Related SDKs
SDKs Related to SQL Jobs
Database-Related SDKs
Table-Related SDKs
Job-Related SDKs
SDKs Related to Spark Jobs
Change History
FAQs
DLI Basics
What Are the Differences Between DLI Flink and MRS Flink?
What Are the Differences Between MRS Spark and DLI Spark?
How Do I Upgrade the Engine Version of a DLI Job?
Where Can Data Be Stored in DLI?
Can I Import OBS Bucket Data Shared by Other Tenants into DLI?
Regions and AZs
Can a Member Account Use Global Variables Created by Other Member Accounts?
Is DLI Affected by the Apache Spark Command Injection Vulnerability (CVE-2022-33891)?
How Do I Manage Jobs Running on DLI?
How Do I Change the Field Names of an Existing Table on DLI?
DLI Elastic Resource Pools and Queues
How Can I Check the Actual and Used CUs for an Elastic Resource Pool as Well as the Required CUs for a Job?
How Do I Check for a Backlog of Jobs in the Current DLI Queue?
How Do I View the Load of a DLI Queue?
How Do I Monitor Job Exceptions on a DLI Queue?
How Do I Migrate an Old Version Spark Queue to a General-Purpose Queue?
What Do I Do If I Encounter a Timeout Exception When Executing DLI SQL Statements on the default Queue?
DLI Databases and Tables
Why Am I Unable to Query a Table on the DLI Console?
What Do I Do If the Compression Rate of an OBS Table Is High?
What Do I Do If Inconsistent Character Encoding Leads to Garbled Characters?
Do I Need to Regrant Permissions to Users and Projects After Deleting and Recreating a Table With the Same Name?
What Do I Do If Files Imported Into a DLI Partitioned Table Lack Data for the Partition Columns, Causing Query Failures After the Import Is Completed?
How Do I Fix Incorrect Data in an OBS Foreign Table Caused by Newline Characters in OBS File Fields?
How Do I Prevent a Cartesian Product Query and Resource Overload Due to Missing "ON" Conditions in Table Joins?
What Do I Do If I Can't Query Data After Manually Adding It to the Partition Directory of an OBS Table?
Why Does the "insert overwrite" Operation Affect All Data in a Partitioned Table Instead of Just the Targeted Partition?
Why Does the "create_date" Field in an RDS Table (Datetime Data Type) Appear as a Timestamp in DLI Queries?
What Do I Do If Renaming a Table After a SQL Job Causes Incorrect Data Size?
How Can I Resolve Data Inconsistencies When Importing Data from DLI to OBS?
Enhanced Datasource Connections
What Do I Do If I Can't Bind an Enhanced Datasource Connection to a Queue?
How Do I Resolve a Failure in Connecting DLI to GaussDB(DWS) Through an Enhanced Datasource Connection?
What Do I Do If the Datasource Connection Is Successfully Created but the Network Connectivity Test Fails?
How Do I Configure Network Connectivity Between a DLI Queue and a Data Source?
Why Is Creating a VPC Peering Connection Necessary for Enhanced Datasource Connections in DLI?
What Do I Do If Creating a Datasource Connection in DLI Gets Stuck in the "Creating" State When Binding It to a Queue?
How Do I Resolve the "communication link failure" Error When Using a Newly Created Datasource Connection That Appears to Be Activated?
How Do I Troubleshoot a Connection Timeout Issue That Isn't Recorded in Logs When Accessing MRS HBase Through a Datasource Connection?
How Do I Fix the "Failed to get subnet" Error When Creating a Datasource Connection in DLI?
What Do I Do If I Encounter the "Incorrect string value" Error When Executing insert overwrite on a Datasource RDS Table?
How Do I Resolve the Null Pointer Error When Creating an RDS Datasource Table?
Error Message "org.postgresql.util.PSQLException: ERROR: tuple concurrently updated" Is Displayed When the System Executes insert overwrite on a Datasource GaussDB(DWS) Table
RegionTooBusyException Is Reported When Data Is Imported to a CloudTable HBase Table Through a Datasource Table
What Do I Do If a Null Value Is Written Into a Non-Null Field When Using a DLI Datasource Connection to Connect to a GaussDB(DWS) Table?
What Do I Do If an Insert Operation Fails After the Schema of the GaussDB(DWS) Source Table Is Updated?
How Do I Insert Data into an RDS Table with an Auto-Increment Primary Key Using DLI?
SQL Jobs
SQL Job Development
SQL Jobs
How Do I Merge Small Files?
How Do I Use DLI to Access Data in an OBS Bucket?
How Do I Specify an OBS Path When Creating an OBS Table?
How Do I Create a Table Using JSON Data in an OBS Bucket?
How Can I Use the count Function to Perform Aggregation?
How Do I Synchronize DLI Table Data Across Regions?
How Do I Insert Table Data into Specific Fields of a Table Using a SQL Job?
How Do I Troubleshoot Slow SQL Jobs?
How Do I View DLI SQL Logs?
How Do I View SQL Execution Records in DLI?
What Do I Do When Data Skew Occurs During the Execution of a SQL Job?
Why Does a SQL Job That Has Join Operations Stay in the Running State?
Why Is a SQL Job Stuck in the Submitting State?
SQL Job O&M
Why Is Error "path obs://xxx already exists" Reported When Data Is Exported to OBS?
Why Is Error "SQL_ANALYSIS_ERROR: Reference 't.id' is ambiguous, could be: t.id, t.id.;" Displayed When Two Tables Are Joined?
Why Is Error "The current account does not have permission to perform this operation,the current account was restricted. Restricted for no budget." Reported when a SQL Statement Is Executed?
Why Is Error "There should be at least one partition pruning predicate on partitioned table XX.YYY" Reported When a Query Statement Is Executed?
Why Is Error "IllegalArgumentException: Buffer size too small. size" Reported When Data Is Loaded to an OBS Foreign Table?
Why Is Error "DLI.0002 FileNotFoundException" Reported During SQL Job Running?
Why Is a Schema Parsing Error Reported When I Create a Hive Table Using CTAS?
Why Is Error "org.apache.hadoop.fs.obs.OBSIOException" Reported When I Run DLI SQL Scripts on DataArts Studio?
Why Is Error "UQUERY_CONNECTOR_0001:Invoke DLI service api failed" Reported in the Job Log When I Use CDM to Migrate Data to DLI?
Why Is Error "File not Found" Reported When I Access a SQL Job?
Why Is Error "DLI.0003: AccessControlException XXX" Reported When I Access a SQL Job?
Why Is Error "DLI.0001: org.apache.hadoop.security.AccessControlException: verifyBucketExists on {{bucket name}}: status [403]" Reported When I Access a SQL Job?
Why Am I Seeing the Error Message "The current account does not have permission to perform this operation,the current account was restricted. Restricted for no budget" When Executing a SQL Statement?
Flink Jobs
Flink Job Consulting
What Data Formats and Sources Are Supported by DLI Flink Jobs?
How Do I Authorize a Subuser to View Flink Jobs?
How Do I Configure Auto Restart upon Exception for a Flink Job?
How Do I Save Logs for Flink Jobs?
Why Is Error "No such user. userName:xxxx." Reported on the Flink Job Management Page When I Grant Permission to a User?
How Do I Restore a Flink Job from a Specific Checkpoint After Manually Stopping the Job?
Why Is a Message Displayed Indicating That the SMN Topic Does Not Exist When I Use the SMN Topic in DLI?
Flink SQL Jobs
How Do I Map an OBS Table to a DLI Partitioned Table?
How Do I Change the Number of Kafka Partitions in a Flink SQL Job Without Stopping It?
How Do I Fix the DLI.0005 Error When Using EL Expressions to Create a Table in a Flink SQL Job?
Why Is No Data Queried in the DLI Table Created Using the OBS File Path When Data Is Written to OBS by a Flink Job Output Stream?
Why Does a Flink SQL Job Fail to Be Executed, and Why Is "connect to DIS failed java.lang.IllegalArgumentException: Access key cannot be null" Displayed in the Log?
Data Writing Fails After a Flink SQL Job Consumes Kafka Data and Sinks It to an Elasticsearch Cluster
How Does Flink OpenSource SQL Parse Nested JSON?
Why Is the Time Read by a Flink OpenSource SQL Job from the RDS Database Different from the RDS Database Time?
Why Does Job Submission Fail When the failure-handler Parameter of the Elasticsearch Result Table for a Flink OpenSource SQL Job Is Set to retry_rejected?
How Do I Configure Connection Retries for a Kafka Sink If It Is Disconnected?
How Do I Write Data to Different Elasticsearch Clusters in a Flink Job?
Why Does DIS Stream Not Exist During Job Semantic Check?
Why Is Error "Timeout expired while fetching topic metadata" Repeatedly Reported in Flink JobManager Logs?
Flink Jar Jobs
Can I Upload Configuration Files for Flink Jar Jobs?
Why Does a Flink Jar Package Conflict Result in Job Submission Failure?
Why Does a Flink Jar Job Fail to Access GaussDB(DWS) and a Message Is Displayed Indicating Too Many Client Connections?
Why Is Error Message "Authentication failed" Displayed During Flink Jar Job Running?
Why Is the Error "Invalid OBS Bucket Name" Reported After a Flink Job Submission Fails?
Why Does the Flink Submission Fail Due to Hadoop JAR File Conflict?
How Do I Locate a Flink Job Submission Error?
Flink Job Performance Tuning
What Is the Recommended Configuration for a Flink Job?
Flink Job Performance Tuning
How Do I Prevent Data Loss After Flink Job Restart?
How Do I Locate a Flink Job Running Error?
How Can I Check If a Flink Job Can Be Restored From a Checkpoint After Restarting It?
Why Are Logs Not Written to the OBS Bucket After a DLI Flink Job Fails to Be Submitted for Running?
Why Is the Flink Job Abnormal Due to Heartbeat Timeout Between JobManager and TaskManager?
Spark Jobs
Spark Job Development
Spark Jobs
How Do I Use Spark to Write Data into a DLI Table?
How Do I Set the AK/SK for a Queue to Operate an OBS Table?
How Do I View the Resource Usage of DLI Spark Jobs?
How Do I Use Python Scripts to Access the MySQL Database If the pymysql Module Is Missing from the Spark Job Results Stored in MySQL?
How Do I Run a Complex PySpark Program in DLI?
How Do I Use JDBC to Set the spark.sql.shuffle.partitions Parameter to Improve the Task Concurrency?
How Do I Read Uploaded Files for a Spark Jar Job?
Why Can't I Find the Specified Python Environment After Adding the Python Package?
Why Is a Spark Jar Job Stuck in the Submitting State?
Spark Job O&M
What Can I Do When Receiving java.lang.AbstractMethodError in the Spark Job?
Why Do I Get "ResponseCode: 403" and "ResponseStatus: Forbidden" Errors When a Spark Job Accesses OBS Data?
Why Do I Encounter the Error "verifyBucketExists on XXXX: status [403]" When Using a Spark Job to Access an OBS Bucket That I Have Permission to Access?
Why Does a Job Running Timeout Occur When Processing a Large Amount of Data with a Spark Job?
Why Does a Spark Job Fail to Execute with an Abnormal Access Directory Error When Accessing Files in SFTP?
Why Does the Job Fail to Be Executed Due to Insufficient Database and Table Permissions?
Why Is the global_temp Database Missing in the Job Log of Spark 3.x?
Why Does Using DataSource Syntax to Create an OBS Table of Avro Type Fail When Accessing Metadata With Spark 2.3.x?
DLI Resource Quotas
What Is User Quota?
How Do I View My Quotas?
How Do I Apply for a Higher Quota?
DLI Permissions Management
What Do I Do If I Receive an Error Message Stating That I Do Not Have Sufficient Permissions When Creating a Table After Upgrading the Engine Version of a Queue?
What Is Column-Level Authorization for DLI Partitioned Tables?
What Do I Do If I Encounter Insufficient Permissions While Updating Packages?
Why Is Error "DLI.0003: Permission denied for resource..." Reported When I Run a SQL Statement?
What Do I Do If I Can't Query Table Data After Being Granted Table Permissions?
Will Granting Duplicate Permissions to a Table After Inheriting Database Permissions Cause an Error?
Why Can't I Query a View After I'm Granted the Select Table Permission on the View?
What Do I Do If I Receive a Message Saying I Don't Have Sufficient Permissions to Submit My Jobs to the Job Bucket?
How Do I Resolve an Unauthorized OBS Bucket Error?
DLI APIs
How Do I Obtain the AK/SK Pair?
How Do I Obtain the Project ID?
Why Is Error "unsupported media Type" Reported When I Subimt a SQL Job?
What Can I Do If an Error Is Reported When the Execution of the API for Creating a SQL Job Times Out?
How Can I Fix Garbled Chinese Characters Returned by an API?
More Documents
User Guide (ME-Abu Dhabi Region)
DLI Introduction
Getting Started
Creating and Submitting a Spark SQL Job
Developing and Submitting a Spark SQL Job Using the TPC-H Sample Template
Creating and Submitting a Spark Jar Job
Creating and Submitting a Flink SQL Job
DLI Console Overview
SQL Editor
Job Management
SQL Job Management
Flink Job Management
Overview
Managing Flink Job Permissions
Preparing Flink Job Data
Creating a Flink SQL Job
Creating a Flink Jar Job
Debugging a Flink Job
Performing Operations on a Flink Job
Flink Job Details
Spark Job Management
Spark Job Management
Creating a Spark Job
Queue Management
Overview
Queue Permission Management
Creating a Queue
Deleting a Queue
Modifying the CIDR Block
Elastic Scaling
Scheduling CU Changes
Testing Address Connectivity
Creating a Message Notification Topic
Data Management
Databases and Tables
Overview
Database Permission Management
Table Permission Management
Creating a Database or a Table
Deleting a Database or a Table
Modifying the Owners of Databases and Tables
Importing Data to the Table
Exporting Data from DLI to OBS
Viewing Metadata
Previewing Data
Package Management
Overview
Managing Permissions on Packages and Package Groups
Creating a Package
Deleting a Package
Modifying the Owner
Built-in Dependencies
Job Templates
SQL Template Management
Flink Template Management
Appendix
TPC-H Sample Data in the SQL Template
Datasource Connections
Datasource Connection and Cross-Source Analysis
Enhanced Datasource Connections
Overview
Creating, Querying, and Deleting an Enhanced Datasource Connection
Binding and Unbinding a Queue
Modifying Host Information
Custom Route Information
Enhanced Datasource Connection Permission Management
Managing Datasource Connection Permissions
Creating and Managing Datasource Authentication
Global Configuration
Global Variables
Service Authorization
UDFs
Permissions Management
Overview
Creating an IAM User and Granting Permissions
Creating a Custom Policy
DLI Resources
DLI Request Conditions
Common Operations Supported by DLI System Policy
Change History
API Reference (ME-Abu Dhabi Region)
Before You Start
Overview
API Calling
Endpoints
Constraints
Basic Concepts
Overview
Calling APIs
Making an API Request
Authentication
Returned Values
Getting Started
Creating a Queue
Creating and Submitting a SQL Job
Creating and Submitting a Spark Job
Creating and Submitting a Flink Job
Creating and Using a Datasource Connection
Permission-related APIs
Granting Users the Queue Usage Permission
Querying Queue Users
Granting Data Permission to Users
Querying Database Users
Querying Table Users
Querying a User's Table Permissions
Viewing the Granted Permissions of a User
Queue-related APIs (Recommended)
Creating a Queue
Deleting a Queue
Querying All Queues
Viewing Details of a Queue
Restarting, Scaling Out, and Scaling In Queues
Creating a Scheduled CU Change
Viewing a Scheduled CU Change
Deleting Scheduled CU Changes in Batches
Deleting a Scheduled CU Change
Modifying a Scheduled CU Change
APIs Related to SQL Jobs
Database-related APIs
Creating a Database
Deleting a Database
Querying All Databases
Modifying a Database Owner
Table-related APIs
Creating a Table
Deleting a Table
Querying All Tables (Recommended)
Describing the Table Information
Previewing Table Content
Obtaining the Partition List
Job-related APIs
Importing Data
Exporting Data
Submitting a SQL Job (Recommended)
Canceling a Job (Recommended)
Querying All Jobs
Previewing SQL Job Query Results
Querying Job Status
Querying Job Details
Checking SQL Syntax
Exporting Query Results
Querying the Job Execution Progress
Package Group-related APIs
Uploading a Package Group
Querying Package Group List
Uploading a JAR Package Group
Uploading a PyFile Package Group
Uploading a File Package Group
Querying Resource Packages in a Group
Deleting a Resource Package from a Group
Changing the Owner of a Group or Resource Package
APIs Related to Flink Jobs
Granting OBS Permissions to DLI
Creating a SQL Job
Updating a SQL Job
Creating a Flink Jar Job
Updating a Flink Jar Job
Running Jobs in Batches
Querying the Job List
Querying Job Details
Querying the Job Execution Plan
Stopping Jobs in Batches
Deleting a Job
Deleting Jobs in Batches
Exporting a Flink Job
Importing a Flink Job
APIs Related to Spark Jobs
Batch Processing-related APIs
Creating a Batch Processing Job
Canceling a Batch Processing Job
Obtaining the List of Batch Processing Jobs
Querying Batch Job Details
Querying a Batch Job Status
Querying Batch Job Logs
APIs Related to Flink Job Templates
Creating a Template
Updating a Template
Deleting a Template
Querying the Template List
APIs Related to Enhanced Datasource Connections
Creating an Enhanced Datasource Connection
Deleting an Enhanced Datasource Connection
Querying an Enhanced Datasource Connection List
Querying an Enhanced Datasource Connection
Binding a Queue
Unbinding a Queue
Modifying the Host Information
Querying Authorization of an Enhanced Datasource Connection
Global Variable-related APIs
Creating a Global Variable
Deleting a Global Variable
Modifying a Global Variable
Querying All Global Variables
Public Parameters
Status Codes
Error Code
Obtaining a Project ID
Obtaining an Account ID
Change History
SQL Syntax Reference (ME-Abu Dhabi Region)
Spark SQL Syntax Reference
Common Configuration Items of Batch SQL Jobs
SQL Syntax Overview of Batch Jobs
Databases
Creating a Database
Deleting a Database
Viewing a Specified Database
Viewing All Databases
Creating an OBS Table
Creating an OBS Table Using the DataSource Syntax
Creating an OBS Table Using the Hive Syntax
Creating a DLI Table
Creating a DLI Table Using the DataSource Syntax
Creating a DLI Table Using the Hive Syntax
Deleting a Table
Viewing Tables
Viewing All Tables
Viewing Table Creation Statements
Viewing Table Properties
Viewing All Columns in a Specified Table
Viewing All Partitions in a Specified Table
Viewing Table Statistics
Modifying a Table
Adding a Column
Enabling or Disabling Multiversion Backup
Syntax for Partitioning a Table
Adding Partition Data (Only OBS Tables Supported)
Renaming a Partition (Only OBS Tables Supported)
Deleting a Partition
Deleting Partitions by Specifying Filter Criteria (Only OBS Tables Supported)
Altering the Partition Location of a Table (Only OBS Tables Supported)
Updating Partitioned Table Data (Only OBS Tables Supported)
Updating Table Metadata with REFRESH TABLE
Importing Data to the Table
Inserting Data
Clearing Data
Exporting Search Results
Backing Up and Restoring Data of Multiple Versions
Setting the Retention Period for Multiversion Backup Data
Viewing Multiversion Backup Data
Restoring Multiversion Backup Data
Configuring the Trash Bin for Expired Multiversion Data
Deleting Multiversion Backup Data
Creating a Datasource Connection with an HBase Table
Creating a Table and Associating It with HBase
Inserting Data to an HBase Table
Querying an HBase Table
Creating a Datasource Connection with an OpenTSDB Table
Creating a Table and Associating It with OpenTSDB
Inserting Data to the OpenTSDB Table
Querying an OpenTSDB Table
Creating a Datasource Connection with a DWS Table
Creating a Table and Associating It with DWS
Inserting Data to the DWS Table
Querying the DWS Table
Creating a Datasource Connection with an RDS Table
Creating a Table and Associating It with RDS
Inserting Data to the RDS Table
Querying the RDS Table
Creating a Datasource Connection with a CSS Table
Creating a Table and Associating It with CSS
Inserting Data to the CSS Table
Querying the CSS Table
Creating a Datasource Connection with a DCS Table
Creating a Table and Associating It with DCS
Inserting Data to a DCS Table
Querying the DCS Table
Creating a Datasource Connection with a DDS Table
Creating a Table and Associating It with DDS
Inserting Data to the DDS Table
Querying the DDS Table
Views
Creating a View
Deleting a View
Viewing the Execution Plan
Data Permissions Management
Data Permissions List
Creating a Role
Deleting a Role
Binding a Role
Unbinding a Role
Displaying a Role
Granting a Permission
Revoking a Permission
Displaying the Granted Permissions
Displaying the Binding Relationship Between All Roles and Users
Data Types
Overview
Primitive Data Types
Complex Data Types
User-Defined Functions
Creating a Function
Deleting a Function
Displaying Function Details
Displaying All Functions
Built-in Functions
Mathematical Functions
Date Functions
String Functions
Aggregate Functions
Window Functions
Basic SELECT Statements
Filtering
WHERE Filtering Clause
HAVING Filtering Clause
Sorting
ORDER BY
SORT BY
CLUSTER BY
DISTRIBUTE BY
Grouping
Column-Based GROUP BY
Expression-Based GROUP BY
GROUP BY Using HAVING
ROLLUP
GROUPING SETS
JOIN
INNER JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
FULL OUTER JOIN
IMPLICIT JOIN
Cartesian JOIN
LEFT SEMI JOIN
NON-EQUIJOIN
Subquery
Subquery Nested by WHERE
Subquery Nested by FROM
Subquery Nested by HAVING
Multi-Layer Nested Subquery
Alias
AS for Table
AS for Column
Set Operations
UNION
INTERSECT
EXCEPT
WITH...AS
CASE...WHEN
Basic CASE Statement
CASE Query Statement
OVER Clause
Flink SQL Syntax
SQL Syntax Constraints and Definitions
SQL Syntax Overview of Stream Jobs
Creating a Source Stream
DIS Source Stream
DMS Source Stream
MRS Kafka Source Stream
Open-Source Kafka Source Stream
OBS Source Stream
Creating a Sink Stream
MRS OpenTSDB Sink Stream
CSS Elasticsearch Sink Stream
DCS Sink Stream
DDS Sink Stream
DIS Sink Stream
DMS Sink Stream
DWS Sink Stream (JDBC Mode)
DWS Sink Stream (OBS-based Dumping)
MRS HBase Sink Stream
MRS Kafka Sink Stream
Open-Source Kafka Sink Stream
File System Sink Stream (Recommended)
OBS Sink Stream
RDS Sink Stream
SMN Sink Stream
Creating a Temporary Stream
Creating a Dimension Table
Creating a Redis Table
Creating an RDS Table
Custom Stream Ecosystem
Custom Source Stream
Custom Sink Stream
Data Type
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Type Conversion Functions
Aggregate Functions
Table-Valued Functions
Other Functions
User-Defined Functions
Geographical Functions
SELECT
Condition Expression
Window
JOIN Between Stream Data and Table Data
Configuring Time Models
Pattern Matching
StreamingML
Anomaly Detection
Time Series Forecasting
Real-Time Clustering
Deep Learning Model Prediction
Reserved Keywords
Identifiers
aggregate_func
alias
attr_expr
attr_expr_list
attrs_value_set_expr
boolean_expression
col
col_comment
col_name
col_name_list
condition
condition_list
cte_name
data_type
db_comment
db_name
else_result_expression
file_format
file_path
function_name
groupby_expression
having_condition
input_expression
join_condition
non_equi_join_condition
number
partition_col_name
partition_col_value
partition_specs
property_name
property_value
regex_expression
result_expression
select_statement
separator
sql_containing_cte_name
sub_query
table_comment
table_name
table_properties
table_reference
when_expression
where_condition
window_function
Operators
Relational Operators
Arithmetic Operators
Logical Operators
User Guide (Paris Region)
Overview
Quick Start
Submitting a SQL Job
Submitting a Spark Job
Submitting a Flink SQL Job
DLI Console Overview
SQL Editor
Job Management
SQL Job Management
Flink Job Management
Flink Job Management
Managing Flink Job Permissions
Preparing Data
Creating a Flink SQL Job
Creating a Custom Flink Job
Debugging a Job
Performing Operations on a Job
Job Details
Spark Job Management
Spark Job Management
Creating a Spark Job
Queue Management
Overview
Queue Permission Management
Creating a Queue
Deleting a Queue
Modifying the CIDR Block
Modifying Queue Specifications (Manual Scale-out/Scale-in)
Scheduling CU Changes (Periodic Scale-out/Scale-in)
Testing Address Connectivity
Creating a Message Notification Topic
Data Management
Databases and Tables
Overview
Database Permission Management
Table Permission Management
Creating a Database or a Table
Deleting a Database or a Table
Modifying the Owners of Databases and Tables
Importing Data to the Table
Exporting Data from DLI to OBS
Viewing Metadata
Previewing Data
Package Management
Overview
Managing Permissions on Packages and Package Groups
Creating a Package
Deleting a Package
Modifying the Owner
Built-in Dependency Packages
Job Templates
SQL Template Management
Flink Template Management
Datasource Connections
Datasource Connection and Cross-Source Analysis
Enhanced Datasource Connections
Managing Datasource Connection Permissions
Datasource Authentication
Global Configuration
Global Variables
Service Authorization
Using UDFs
API Reference (Paris Region)
Before You Start
Overview
API Calling
Endpoints
Constraints
Basic Concepts
Overview
Calling APIs
Making an API Request
Authentication
Returned Values
Getting Started
Creating a Queue
Creating and Submitting a SQL Job
Creating and Submitting a Spark Job
Creating and Submitting a Flink Job
Creating and Using a Datasource Connection
Permission-related APIs
Granting Users the Queue Usage Permission
Querying Queue Users
Granting Data Permission to Users
Querying Database Users
Querying Table Users
Querying a User's Table Permissions
Viewing the Granted Permissions of a User
Queue-related APIs (Recommended)
Creating a Queue
Deleting a Queue
Querying All Queues
Viewing Details of a Queue
Restarting, Scaling Out, and Scaling In Queues
Creating a Scheduled CU Change
Viewing a Scheduled CU Change
Deleting Scheduled CU Changes in Batches
Deleting a Scheduled CU Change
Modifying a Scheduled CU Change
APIs Related to SQL Jobs
Database-related APIs
Creating a Database
Deleting a Database
Querying All Databases
Modifying a Database Owner
Table-related APIs
Creating a Table
Deleting a Table
Querying All Tables (Recommended)
Describing the Table Information
Previewing Table Content
Obtaining the Partition List
Job-related APIs
Importing Data
Exporting Data
Submitting a SQL Job (Recommended)
Canceling a Job (Recommended)
Querying All Jobs
Previewing SQL Job Query Results
Querying Job Status
Querying Job Details
Checking SQL Syntax
Exporting Query Results
Querying the Job Execution Progress
Package Group-related APIs
Uploading a Package Group
Querying Package Group List
Uploading a JAR Package Group
Uploading a PyFile Package Group
Uploading a File Package Group
Querying Resource Packages in a Group
Deleting a Resource Package from a Group
Changing the Owner of a Group or Resource Package
APIs Related to Flink Jobs
Granting OBS Permissions to DLI
Creating a SQL Job
Updating a SQL Job
Creating a Flink Jar Job
Updating a Flink Jar Job
Running Jobs in Batches
Querying the Job List
Querying Job Details
Querying the Job Execution Plan
Stopping Jobs in Batches
Deleting a Job
Deleting Jobs in Batches
Exporting a Flink Job
Importing a Flink Job
APIs Related to Spark Jobs
Batch Processing-related APIs
Creating a Batch Processing Job
Canceling a Batch Processing Job
Obtaining the List of Batch Processing Jobs
Querying Batch Job Details
Querying a Batch Job Status
Querying Batch Job Logs
APIs Related to Flink Job Templates
Creating a Template
Updating a Template
Deleting a Template
Querying the Template List
APIs Related to Enhanced Datasource Connections
Creating an Enhanced Datasource Connection
Deleting an Enhanced Datasource Connection
Querying an Enhanced Datasource Connection List
Querying an Enhanced Datasource Connection
Binding a Queue
Unbinding a Queue
Modifying the Host Information
Querying Authorization of an Enhanced Datasource Connection
Global Variable-related APIs
Creating a Global Variable
Deleting a Global Variable
Modifying a Global Variable
Querying All Global Variables
Public Parameters
Status Codes
Error Codes
Obtaining a Project ID
Obtaining an Account ID
Change History
SQL Syntax Reference (Paris Region)
Spark SQL Syntax Reference
Common Configuration Items of Batch SQL Jobs
SQL Syntax Overview of Batch Jobs
Databases
Creating a Database
Deleting a Database
Viewing a Specified Database
Viewing All Databases
Creating an OBS Table
Creating an OBS Table Using the DataSource Syntax
Creating an OBS Table Using the Hive Syntax
Creating a DLI Table
Creating a DLI Table Using the DataSource Syntax
Creating a DLI Table Using the Hive Syntax
Deleting a Table
Viewing Tables
Viewing All Tables
Viewing Table Creation Statements
Viewing Table Properties
Viewing All Columns in a Specified Table
Viewing All Partitions in a Specified Table
Viewing Table Statistics
Modifying a Table
Adding a Column
Enabling or Disabling Multiversion Backup
Syntax for Partitioning a Table
Adding Partition Data (Only OBS Tables Supported)
Renaming a Partition (Only OBS Tables Supported)
Deleting a Partition
Deleting Partitions by Specifying Filter Criteria (Only OBS Tables Supported)
Altering the Partition Location of a Table (Only OBS Tables Supported)
Updating Partitioned Table Data (Only OBS Tables Supported)
Updating Table Metadata with REFRESH TABLE
Importing Data to the Table
Inserting Data
Clearing Data
Exporting Search Results
Backing Up and Restoring Data of Multiple Versions
Setting the Retention Period for Multiversion Backup Data
Viewing Multiversion Backup Data
Restoring Multiversion Backup Data
Configuring the Trash Bin for Expired Multiversion Data
Deleting Multiversion Backup Data
Creating a Datasource Connection with an HBase Table
Creating a Table and Associating It with HBase
Inserting Data to an HBase Table
Querying an HBase Table
Creating a Datasource Connection with an OpenTSDB Table
Creating a Table and Associating It with OpenTSDB
Inserting Data to the OpenTSDB Table
Querying an OpenTSDB Table
Creating a Datasource Connection with a DWS Table
Creating a Table and Associating It with DWS
Inserting Data to the DWS Table
Querying the DWS Table
Creating a Datasource Connection with an RDS Table
Creating a Table and Associating It with RDS
Inserting Data to the RDS Table
Querying the RDS Table
Creating a Datasource Connection with a CSS Table
Creating a Table and Associating It with CSS
Inserting Data to the CSS Table
Querying the CSS Table
Creating a Datasource Connection with a DCS Table
Creating a Table and Associating It with DCS
Inserting Data to a DCS Table
Querying the DCS Table
Creating a Datasource Connection with a DDS Table
Creating a Table and Associating It with DDS
Inserting Data to the DDS Table
Querying the DDS Table
Views
Creating a View
Deleting a View
Viewing the Execution Plan
Data Permissions Management
Data Permissions List
Creating a Role
Deleting a Role
Binding a Role
Unbinding a Role
Displaying a Role
Granting a Permission
Revoking a Permission
Displaying the Granted Permissions
Displaying the Binding Relationship Between All Roles and Users
Data Types
Overview
Primitive Data Types
Complex Data Types
User-Defined Functions
Creating a Function
Deleting a Function
Displaying Function Details
Displaying All Functions
Built-in Functions
Mathematical Functions
Date Functions
String Functions
Aggregate Functions
Window Functions
Basic SELECT Statements
Filtering
WHERE Filtering Clause
HAVING Filtering Clause
Sorting
ORDER BY
SORT BY
CLUSTER BY
DISTRIBUTE BY
Grouping
Column-Based GROUP BY
Expression-Based GROUP BY
GROUP BY Using HAVING
ROLLUP
GROUPING SETS
JOIN
INNER JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
FULL OUTER JOIN
IMPLICIT JOIN
Cartesian JOIN
LEFT SEMI JOIN
NON-EQUIJOIN
Subquery
Subquery Nested by WHERE
Subquery Nested by FROM
Subquery Nested by HAVING
Multi-Layer Nested Subquery
Alias
AS for Table
AS for Column
Set Operations
UNION
INTERSECT
EXCEPT
WITH...AS
CASE...WHEN
Basic CASE Statement
CASE Query Statement
OVER Clause
Flink SQL Syntax
SQL Syntax Constraints and Definitions
SQL Syntax Overview of Stream Jobs
Creating a Source Stream
DIS Source Stream
DMS Source Stream
MRS Kafka Source Stream
Open-Source Kafka Source Stream
OBS Source Stream
Creating a Sink Stream
MRS OpenTSDB Sink Stream
CSS Elasticsearch Sink Stream
DCS Sink Stream
DDS Sink Stream
DIS Sink Stream
DMS Sink Stream
DWS Sink Stream (JDBC Mode)
DWS Sink Stream (OBS-based Dumping)
MRS HBase Sink Stream
MRS Kafka Sink Stream
Open-Source Kafka Sink Stream
File System Sink Stream (Recommended)
OBS Sink Stream
RDS Sink Stream
SMN Sink Stream
Creating a Temporary Stream
Creating a Dimension Table
Creating a Redis Table
Creating an RDS Table
Custom Stream Ecosystem
Custom Source Stream
Custom Sink Stream
Data Type
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Type Conversion Functions
Aggregate Functions
Table-Valued Functions
Other Functions
User-Defined Functions
Geographical Functions
SELECT
Condition Expression
Window
JOIN Between Stream Data and Table Data
Configuring Time Models
Pattern Matching
StreamingML
Anomaly Detection
Time Series Forecasting
Real-Time Clustering
Deep Learning Model Prediction
Reserved Keywords
Identifiers
aggregate_func
alias
attr_expr
attr_expr_list
attrs_value_set_expr
boolean_expression
col
col_comment
col_name
col_name_list
condition
condition_list
cte_name
data_type
db_comment
db_name
else_result_expression
file_format
file_path
function_name
groupby_expression
having_condition
input_expression
join_condition
non_equi_join_condition
number
partition_col_name
partition_col_value
partition_specs
property_name
property_value
regex_expression
result_expression
select_statement
separator
sql_containing_cte_name
sub_query
table_comment
table_name
table_properties
table_reference
when_expression
where_condition
window_function
Operators
Relational Operators
Arithmetic Operators
Logical Operators
User Guide (Kuala Lumpur Region)
DLI Introduction
Getting Started
Creating and Submitting a Spark SQL Job
Developing and Submitting a Spark SQL Job Using the TPC-H Sample Template
Creating and Submitting a Spark Jar Job
Creating and Submitting a Flink SQL Job
DLI Console Overview
SQL Editor
Job Management
SQL Job Management
Flink Job Management
Overview
Managing Flink Job Permissions
Preparing Flink Job Data
Creating a Flink SQL Job
Creating a Flink Jar Job
Debugging a Flink Job
Performing Operations on a Flink Job
Flink Job Details
Spark Job Management
Spark Job Management
Creating a Spark Job
Queue Management
Overview
Queue Permission Management
Creating a Queue
Deleting a Queue
Modifying the CIDR Block
Elastic Scaling
Scheduling CU Changes
Testing Address Connectivity
Creating a Message Notification Topic
Data Management
Databases and Tables
Overview
Database Permission Management
Table Permission Management
Creating a Database or a Table
Deleting a Database or a Table
Modifying the Owners of Databases and Tables
Importing Data to the Table
Exporting Data from DLI to OBS
Viewing Metadata
Previewing Data
Package Management
Overview
Managing Permissions on Packages and Package Groups
Creating a Package
Deleting a Package
Modifying the Owner
Built-in Dependencies
Job Templates
SQL Template Management
Flink Template Management
Appendix
TPC-H Sample Data in the SQL Template
Datasource Connections
Datasource Connection and Cross-Source Analysis
Enhanced Datasource Connections
Overview
Creating, Querying, and Deleting an Enhanced Datasource Connection
Binding and Unbinding a Queue
Modifying Host Information
Custom Route Information
Enhanced Datasource Connection Permission Management
Managing Datasource Connection Permissions
Creating and Managing Datasource Authentication
Global Configuration
Global Variables
Service Authorization
UDFs
Permissions Management
Overview
Creating an IAM User and Granting Permissions
Creating a Custom Policy
DLI Resources
DLI Request Conditions
Common Operations Supported by DLI System Policy
Change History
API Reference (Kuala Lumpur Region)
Before You Start
Overview
API Calling
Endpoints
Constraints
Basic Concepts
Overview
Calling APIs
Making an API Request
Authentication
Returned Values
Getting Started
Creating a Queue
Creating and Submitting a SQL Job
Creating and Submitting a Spark Job
Creating and Submitting a Flink Job
Creating and Using a Datasource Connection
Permission-related APIs
Granting Users the Queue Usage Permission
Querying Queue Users
Granting Data Permission to Users
Querying Database Users
Querying Table Users
Querying a User's Table Permissions
Viewing the Granted Permissions of a User
Queue-related APIs (Recommended)
Creating a Queue
Deleting a Queue
Querying All Queues
Viewing Details of a Queue
Restarting, Scaling Out, and Scaling In Queues
Creating a Scheduled CU Change
Viewing a Scheduled CU Change
Deleting Scheduled CU Changes in Batches
Deleting a Scheduled CU Change
Modifying a Scheduled CU Change
APIs Related to SQL Jobs
Database-related APIs
Creating a Database
Deleting a Database
Querying All Databases
Modifying a Database Owner
Table-related APIs
Creating a Table
Deleting a Table
Querying All Tables (Recommended)
Describing the Table Information
Previewing Table Content
Obtaining the Partition List
Job-related APIs
Importing Data
Exporting Data
Submitting a SQL Job (Recommended)
Canceling a Job (Recommended)
Querying All Jobs
Previewing SQL Job Query Results
Querying Job Status
Querying Job Details
Checking SQL Syntax
Exporting Query Results
Querying the Job Execution Progress
Package Group-related APIs
Uploading a Package Group
Querying the Package Group List
Uploading a JAR Package Group
Uploading a PyFile Package Group
Uploading a File Package Group
Querying Resource Packages in a Group
Deleting a Resource Package from a Group
Changing the Owner of a Group or Resource Package
APIs Related to Flink Jobs
Granting OBS Permissions to DLI
Creating a SQL Job
Updating a SQL Job
Creating a Flink Jar Job
Updating a Flink Jar Job
Running Jobs in Batches
Querying the Job List
Querying Job Details
Querying the Job Execution Plan
Stopping Jobs in Batches
Deleting a Job
Deleting Jobs in Batches
Exporting a Flink Job
Importing a Flink Job
APIs Related to Spark Jobs
Batch Processing-related APIs
Creating a Batch Processing Job
Canceling a Batch Processing Job
Obtaining the List of Batch Processing Jobs
Querying Batch Job Details
Querying a Batch Job Status
Querying Batch Job Logs
APIs Related to Flink Job Templates
Creating a Template
Updating a Template
Deleting a Template
Querying the Template List
APIs Related to Enhanced Datasource Connections
Creating an Enhanced Datasource Connection
Deleting an Enhanced Datasource Connection
Querying an Enhanced Datasource Connection List
Querying an Enhanced Datasource Connection
Binding a Queue
Unbinding a Queue
Modifying the Host Information
Querying Authorization of an Enhanced Datasource Connection
Global Variable-related APIs
Creating a Global Variable
Deleting a Global Variable
Modifying a Global Variable
Querying All Global Variables
Permissions Policies and Supported Actions
Public Parameters
Status Codes
Error Codes
Obtaining a Project ID
Obtaining an Account ID
Change History
SQL Syntax Reference (Kuala Lumpur Region)
Spark SQL Syntax Reference
Common Configuration Items of Batch SQL Jobs
SQL Syntax Overview of Batch Jobs
Databases
Creating a Database
Deleting a Database
Viewing a Specified Database
Viewing All Databases
Creating an OBS Table
Creating an OBS Table Using the DataSource Syntax
Creating an OBS Table Using the Hive Syntax
Creating a DLI Table
Creating a DLI Table Using the DataSource Syntax
Creating a DLI Table Using the Hive Syntax
Deleting a Table
Viewing Tables
Viewing All Tables
Viewing Table Creation Statements
Viewing Table Properties
Viewing All Columns in a Specified Table
Viewing All Partitions in a Specified Table
Viewing Table Statistics
Modifying a Table
Adding a Column
Enabling or Disabling Multiversion Backup
Syntax for Partitioning a Table
Adding Partition Data (Only OBS Tables Supported)
Renaming a Partition (Only OBS Tables Supported)
Deleting a Partition
Deleting Partitions by Specifying Filter Criteria (Only OBS Tables Supported)
Altering the Partition Location of a Table (Only OBS Tables Supported)
Updating Partitioned Table Data (Only OBS Tables Supported)
Updating Table Metadata with REFRESH TABLE
Importing Data to the Table
Inserting Data
Clearing Data
Exporting Search Results
Backing Up and Restoring Data of Multiple Versions
Setting the Retention Period for Multiversion Backup Data
Viewing Multiversion Backup Data
Restoring Multiversion Backup Data
Configuring the Trash Bin for Expired Multiversion Data
Deleting Multiversion Backup Data
Creating a Datasource Connection with an HBase Table
Creating a DLI Table and Associating It with HBase
Inserting Data to an HBase Table
Querying an HBase Table
Creating a Datasource Connection with an OpenTSDB Table
Creating a DLI Table and Associating It with OpenTSDB
Inserting Data to the OpenTSDB Table
Querying an OpenTSDB Table
Creating a Datasource Connection with a DWS Table
Creating a DLI Table and Associating It with DWS
Inserting Data to the DWS Table
Querying the DWS Table
Creating a Datasource Connection with an RDS Table
Creating a DLI Table and Associating It with RDS
Inserting Data to the RDS Table
Querying the RDS Table
Creating a Datasource Connection with a CSS Table
Creating a DLI Table and Associating It with CSS
Inserting Data to the CSS Table
Querying the CSS Table
Creating a Datasource Connection with a DCS Table
Creating a DLI Table and Associating It with DCS
Inserting Data to a DCS Table
Querying the DCS Table
Creating a Datasource Connection with a DDS Table
Creating a DLI Table and Associating It with DDS
Inserting Data to the DDS Table
Querying the DDS Table
Views
Creating a View
Deleting a View
Viewing the Execution Plan
Data Permissions Management
Data Permissions List
Creating a Role
Deleting a Role
Binding a Role
Unbinding a Role
Displaying a Role
Granting a Permission
Revoking a Permission
Displaying the Granted Permissions
Displaying the Binding Relationship Between All Roles and Users
Data Types
Overview
Primitive Data Types
Complex Data Types
User-Defined Functions
Creating a Function
Deleting a Function
Displaying Function Details
Displaying All Functions
Built-in Functions
Mathematical Functions
Date Functions
String Functions
Aggregate Functions
Window Functions
Basic SELECT Statements
Filtering
WHERE Filtering Clause
HAVING Filtering Clause
Sorting
ORDER BY
SORT BY
CLUSTER BY
DISTRIBUTE BY
Grouping
Column-Based GROUP BY
Expression-Based GROUP BY
GROUP BY Using HAVING
ROLLUP
GROUPING SETS
JOIN
INNER JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
FULL OUTER JOIN
IMPLICIT JOIN
Cartesian JOIN
LEFT SEMI JOIN
NON-EQUIJOIN
Subquery
Subquery Nested by WHERE
Subquery Nested by FROM
Subquery Nested by HAVING
Multi-Layer Nested Subquery
Alias
AS for Table
AS for Column
Set Operations
UNION
INTERSECT
EXCEPT
WITH...AS
CASE...WHEN
Basic CASE Statement
CASE Query Statement
OVER Clause
Flink SQL Syntax
SQL Syntax Constraints and Definitions
SQL Syntax Overview of Stream Jobs
Creating a Source Stream
CloudTable HBase Source Stream
DIS Source Stream
DMS Source Stream
MRS Kafka Source Stream
Open-Source Kafka Source Stream
OBS Source Stream
Creating a Sink Stream
CloudTable HBase Sink Stream
CloudTable OpenTSDB Sink Stream
MRS OpenTSDB Sink Stream
CSS Elasticsearch Sink Stream
DCS Sink Stream
DDS Sink Stream
DIS Sink Stream
DMS Sink Stream
DWS Sink Stream (JDBC Mode)
DWS Sink Stream (OBS-based Dumping)
MRS HBase Sink Stream
MRS Kafka Sink Stream
Open-Source Kafka Sink Stream
File System Sink Stream (Recommended)
OBS Sink Stream
RDS Sink Stream
SMN Sink Stream
Creating a Temporary Stream
Creating a Dimension Table
Creating a Redis Table
Creating an RDS Table
Custom Stream Ecosystem
Custom Source Stream
Custom Sink Stream
Data Type
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Type Conversion Functions
Aggregate Functions
Table-Valued Functions
Other Functions
User-Defined Functions
Geographical Functions
SELECT
Condition Expression
Window
JOIN Between Stream Data and Table Data
Configuring Time Models
Pattern Matching
StreamingML
Anomaly Detection
Time Series Forecasting
Real-Time Clustering
Deep Learning Model Prediction
Reserved Keywords
Identifiers
aggregate_func
alias
attr_expr
attr_expr_list
attrs_value_set_expr
boolean_expression
col
col_comment
col_name
col_name_list
condition
condition_list
cte_name
data_type
db_comment
db_name
else_result_expression
file_format
file_path
function_name
groupby_expression
having_condition
input_expression
join_condition
non_equi_join_condition
number
partition_col_name
partition_col_value
partition_specs
property_name
property_value
regex_expression
result_expression
select_statement
separator
sql_containing_cte_name
sub_query
table_comment
table_name
table_properties
table_reference
when_expression
where_condition
window_function
Operators
Relational Operators
Arithmetic Operators
Logical Operators
Videos
SQL Syntax Reference (To Be Offline)
Notice on Taking This Syntax Reference Offline
Spark SQL Syntax Reference (Unavailable Soon)
Common Configuration Items of Batch SQL Jobs
SQL Syntax Overview of Batch Jobs
Spark Open Source Commands
Databases
Creating a Database
Deleting a Database
Checking a Specified Database
Checking All Databases
Creating an OBS Table
Creating an OBS Table Using the DataSource Syntax
Creating an OBS Table Using the Hive Syntax
Creating a DLI Table
Creating a DLI Table Using the DataSource Syntax
Creating a DLI Table Using the Hive Syntax
Deleting a Table
Checking Tables
Checking All Tables
Checking Table Creation Statements
Checking Table Properties
Checking All Columns in a Specified Table
Checking All Partitions in a Specified Table
Checking Table Statistics
Modifying a Table
Adding a Column
Modifying Column Comments
Enabling or Disabling Multiversion Backup
Syntax for Partitioning a Table
Adding Partition Data (Only OBS Tables Supported)
Renaming a Partition (Only OBS Tables Supported)
Deleting a Partition
Deleting Partitions by Specifying Filter Criteria (Only OBS Tables Supported)
Altering the Partition Location of a Table (Only OBS Tables Supported)
Updating Partitioned Table Data (Only OBS Tables Supported)
Updating Table Metadata with REFRESH TABLE
Importing Data to the Table
Inserting Data
Clearing Data
Exporting Search Results
Backing Up and Restoring Data of Multiple Versions
Setting the Retention Period for Multiversion Backup Data
Checking Multiversion Backup Data
Restoring Multiversion Backup Data
Configuring the Trash Bin for Expired Multiversion Data
Deleting Multiversion Backup Data
Table Lifecycle Management
Specifying the Lifecycle of a Table When Creating the Table
Modifying the Lifecycle of a Table
Disabling or Restoring the Lifecycle of a Table
Creating a Datasource Connection with an HBase Table
Creating a DLI Table and Associating It with HBase
Inserting Data to an HBase Table
Querying an HBase Table
Creating a Datasource Connection with an OpenTSDB Table
Creating a DLI Table and Associating It with OpenTSDB
Inserting Data to the OpenTSDB Table
Querying an OpenTSDB Table
Creating a Datasource Connection with a DWS Table
Creating a DLI Table and Associating It with GaussDB(DWS)
Inserting Data to the DWS Table
Querying the DWS Table
Creating a Datasource Connection with an RDS Table
Creating a DLI Table and Associating It with RDS
Inserting Data to the RDS Table
Querying the RDS Table
Creating a Datasource Connection with a CSS Table
Creating a DLI Table and Associating It with CSS
Inserting Data to the CSS Table
Querying the CSS Table
Creating a Datasource Connection with a DCS Table
Creating a DLI Table and Associating It with DCS
Inserting Data to a DCS Table
Querying the DCS Table
Creating a Datasource Connection with a DDS Table
Creating a DLI Table and Associating It with DDS
Inserting Data to the DDS Table
Querying the DDS Table
Creating a Datasource Connection with an Oracle Table
Creating a DLI Table and Associating It with Oracle
Inserting Data to an Oracle Table
Querying an Oracle Table
Views
Creating a View
Deleting a View
Checking the Execution Plan
Data Permissions Management
Data Permissions List
Creating a Role
Deleting a Role
Binding a Role
Unbinding a Role
Displaying a Role
Granting a Permission
Revoking a Permission
Showing Granted Permissions
Displaying the Binding Relationship Between All Roles and Users
Data Types
Overview
Primitive Data Types
Complex Data Types
User-Defined Functions
Creating a Function
Deleting a Function
Displaying Function Details
Displaying All Functions
Built-in Functions
Date Functions
Overview
add_months
current_date
current_timestamp
date_add
dateadd
date_sub
date_format
datediff
datediff1
datepart
datetrunc
day/dayofmonth
from_unixtime
from_utc_timestamp
getdate
hour
isdate
last_day
lastday
minute
month
months_between
next_day
quarter
second
to_char
to_date
to_date1
to_utc_timestamp
trunc
unix_timestamp
weekday
weekofyear
year
String Functions
Overview
ascii
concat
concat_ws
char_matchcount
encode
find_in_set
get_json_object
instr
instr1
initcap
keyvalue
length
lengthb
levenshtein
locate
lower/lcase
lpad
ltrim
parse_url
printf
regexp_count
regexp_extract
replace
regexp_replace
regexp_replace1
regexp_instr
regexp_substr
repeat
reverse
rpad
rtrim
soundex
space
substr/substring
substring_index
split_part
translate
trim
upper/ucase
Mathematical Functions
Overview
abs
acos
asin
atan
bin
bround
cbrt
ceil
conv
cos
cot1
degrees
e
exp
factorial
floor
greatest
hex
least
ln
log
log10
log2
median
negative
percentile
percentile_approx
pi
pmod
positive
pow
radians
rand
round
shiftleft
shiftright
shiftrightunsigned
sign
sin
sqrt
tan
Aggregate Functions
Overview
avg
corr
count
covar_pop
covar_samp
max
min
percentile
percentile_approx
stddev_pop
stddev_samp
sum
variance/var_pop
var_samp
Window Functions
Overview
cume_dist
first_value
last_value
lag
lead
percent_rank
rank
row_number
Other Functions
Overview
decode1
javahash
max_pt
ordinal
trans_array
trunc_numeric
url_decode
url_encode
Basic SELECT Statements
Filtering
WHERE Filtering Clause
HAVING Filtering Clause
Sorting
ORDER BY
SORT BY
CLUSTER BY
DISTRIBUTE BY
Grouping
Column-Based GROUP BY
Expression-Based GROUP BY
GROUP BY Using HAVING
ROLLUP
GROUPING SETS
JOIN
INNER JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
FULL OUTER JOIN
IMPLICIT JOIN
Cartesian JOIN
LEFT SEMI JOIN
NON-EQUIJOIN
Subquery
Subquery Nested by WHERE
Subquery Nested by FROM
Subquery Nested by HAVING
Multi-Layer Nested Subquery
Alias
AS for Table
AS for Column
Set Operations
UNION
INTERSECT
EXCEPT
WITH...AS
CASE...WHEN
Basic CASE Statement
CASE Query Statement
OVER Clause
Flink OpenSource SQL 1.12 Syntax Reference
Constraints and Definitions
Supported Data Types
Syntax
Data Definition Language (DDL)
CREATE TABLE
CREATE VIEW
CREATE FUNCTION
Data Manipulation Language (DML)
Overview
DDL Syntax
Creating Source Tables
DataGen Source Table
GaussDB(DWS) Source Table
HBase Source Table
JDBC Source Table
Kafka Source Table
MySQL CDC Source Table
Postgres CDC Source Table
Redis Source Table
Upsert Kafka Source Table
Creating Result Tables
BlackHole Result Table
ClickHouse Result Table
GaussDB(DWS) Result Table
Elasticsearch Result Table
HBase Result Table
JDBC Result Table
Kafka Result Table
Print Result Table
Redis Result Table
Upsert Kafka Result Table
FileSystem Result Table
Creating Dimension Tables
GaussDB(DWS) Dimension Table
HBase Dimension Table
JDBC Dimension Table
Redis Dimension Table
Format
Avro
Canal
Confluent Avro
CSV
Debezium
JSON
Maxwell
Raw
DML Syntax
SELECT
Set Operations
Window
JOIN
OrderBy & Limit
Top-N
Deduplication
Functions
User-Defined Functions (UDFs)
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Conditional Functions
Type Conversion Functions
Collection Functions
Value Construction Functions
Value Access Functions
Hash Functions
Aggregate Functions
Table-Valued Functions
string_split
Flink OpenSource SQL 1.10 Syntax Reference
Constraints and Definitions
Supported Data Types
Syntax Definition
Data Definition Language (DDL)
CREATE TABLE
CREATE VIEW
CREATE FUNCTION
Data Manipulation Language (DML)
Flink OpenSource SQL 1.10 Syntax
Data Definition Language (DDL)
Creating a Source Table
Kafka Source Table
DIS Source Table
JDBC Source Table
GaussDB(DWS) Source Table
Redis Source Table
HBase Source Table
User-Defined Source Table
Creating a Result Table
ClickHouse Result Table
Kafka Result Table
Upsert Kafka Result Table
DIS Result Table
JDBC Result Table
GaussDB(DWS) Result Table
Redis Result Table
SMN Result Table
HBase Result Table
Elasticsearch Result Table
OpenTSDB Result Table
User-Defined Result Table
Print Result Table
File System Result Table
Creating a Dimension Table
JDBC Dimension Table
GaussDB(DWS) Dimension Table
HBase Dimension Table
Data Manipulation Language (DML)
SELECT
Set Operations
Window
JOIN
OrderBy & Limit
Top-N
Deduplication
Functions
User-Defined Functions
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Conditional Functions
Type Conversion Functions
Collection Functions
Value Construction Functions
Value Access Functions
Hash Functions
Aggregate Functions
Table-Valued Functions
split_cursor
string_split
Historical Versions (Unavailable Soon)
Flink SQL Syntax (This Syntax Will Not Evolve. Use Flink OpenSource SQL Instead.)
SQL Syntax Constraints and Definitions
SQL Syntax Overview of Stream Jobs
Creating a Source Stream
CloudTable HBase Source Stream
DIS Source Stream
DMS Source Stream
MRS Kafka Source Stream
Open-Source Kafka Source Stream
OBS Source Stream
Creating a Sink Stream
CloudTable HBase Sink Stream
CloudTable OpenTSDB Sink Stream
MRS OpenTSDB Sink Stream
CSS Elasticsearch Sink Stream
DCS Sink Stream
DDS Sink Stream
DIS Sink Stream
DMS Sink Stream
DWS Sink Stream (JDBC Mode)
DWS Sink Stream (OBS-based Dumping)
MRS HBase Sink Stream
MRS Kafka Sink Stream
Open-Source Kafka Sink Stream
File System Sink Stream (Recommended)
OBS Sink Stream
RDS Sink Stream
SMN Sink Stream
Creating a Temporary Stream
Creating a Dimension Table
Creating a Redis Table
Creating an RDS Table
Custom Stream Ecosystem
Custom Source Stream
Custom Sink Stream
Data Type
Built-In Functions
Mathematical Operation Functions
String Functions
Temporal Functions
Type Conversion Functions
Aggregate Functions
Table-Valued Functions
Other Functions
User-Defined Functions
Geographical Functions
SELECT
Condition Expression
Window
JOIN Between Stream Data and Table Data
Configuring Time Models
Pattern Matching
StreamingML
Anomaly Detection
Time Series Forecasting
Real-Time Clustering
Deep Learning Model Prediction
Reserved Keywords
Identifiers
aggregate_func
alias
attr_expr
attr_expr_list
attrs_value_set_expr
boolean_expression
col
col_comment
col_name
col_name_list
condition
condition_list
cte_name
data_type
db_comment
db_name
else_result_expression
file_format
file_path
function_name
groupby_expression
having_condition
input_expression
join_condition
non_equi_join_condition
number
partition_col_name
partition_col_value
partition_specs
property_name
property_value
regex_expression
result_expression
select_statement
separator
sql_containing_cte_name
sub_query
table_comment
table_name
table_properties
table_reference
when_expression
where_condition
window_function
Operators
Relational Operators
Arithmetic Operators
Logical Operators
General Reference
Glossary
Service Level Agreement
White Papers
Endpoints
Permissions