Data Lake Insight
      • What's New
      • Function Overview
      • Product Bulletin
          • EOS Announcement for DLI Spark 3.1.1
          • EOL Announcement for DLI Yearly/Monthly and Pay-per-Use Queues as Well as Queue CUH Packages
          • EOS Announcement for DLI Flink 1.10 and Flink 1.11
          • EOS Announcement for DLI Spark 2.3.2
          • EOS Announcement for DLI Flink 1.7
        • Version Support Bulletin
          • Lifecycle of DLI Compute Engine Versions
          • What's New in Flink 1.15
          • What's New in Flink 1.12
          • What's New in Spark 3.3.1
          • What's New in Spark 3.1.1
          • What's New in Spark 2.4.5
          • Differences Between Spark 2.4.x and Spark 3.3.x
            • Differences in SQL Queues Between Spark 2.4.x and Spark 3.3.x
            • Differences in General-Purpose Queues Between Spark 2.4.x and Spark 3.3.x
            • DLI Datasource V1 Table and Datasource V2 Table
      • Service Overview
        • Infographics
        • What Is Data Lake Insight
        • Advantages
        • Application Scenarios
        • Notes and Constraints
        • Specifications
        • Billing
        • Permissions Management
        • Quotas
        • Related Services
        • Basic Concepts
      • Billing
        • DLI Billing Overview
        • Elastic Resource Pools
        • Billing for Storage Resources
        • Billing for Scanned Data
        • Package Billing
        • Billing Examples
        • Renewing Subscriptions
        • Bills
        • Arrears
        • Billing Termination
        • Billing FAQ
          • What Billing Modes Does DLI Offer?
          • When Is a Data Lake Queue Idle?
          • How Do I Troubleshoot DLI Billing Issues?
          • Why Am I Still Being Billed on a Pay-per-Use Basis After I Purchased a Package?
          • How Do I View the Usage of a Package?
          • How Do I View a Job's Scanned Data Volume?
          • Would a Pay-per-Use Elastic Resource Pool Not Be Billed if No Job Is Submitted for Execution?
          • Do I Need to Pay Extra Fees for Purchasing a Queue Billed Based on the Scanned Data Volume?
          • How Is the Usage Beyond the Package Limit Billed?
          • What Are the Actual CUs, CU Range, and Specifications of an Elastic Resource Pool?
      • Getting Started
        • Using DLI to Submit a SQL Job to Query OBS Data
        • Using DLI to Submit a SQL Job to Query RDS for MySQL Data
        • Using DLI to Submit a Flink OpenSource SQL Job to Query RDS for MySQL Data
        • Using DLI to Submit a Flink Jar Job
        • Using DLI to Submit a Spark Jar Job
        • Practices
      • User Guide
        • DLI Job Development Process
        • Preparations
          • Configuring DLI Agency Permissions
          • Creating an IAM User and Granting Permissions
          • Configuring a DLI Job Bucket
        • Creating an Elastic Resource Pool and Queues Within It
          • Overview of DLI Elastic Resource Pools and Queues
          • Creating an Elastic Resource Pool and Creating Queues Within It
          • Managing Elastic Resource Pools
            • Viewing Basic Information
            • Managing Permissions
            • Binding a Queue
            • Setting CUs
            • Modifying Specifications
            • Managing Tags
            • Adjusting Scaling Policies for Queues in an Elastic Resource Pool
            • Viewing Scaling History
            • Allocating to an Enterprise Project
          • Managing Queues
            • Viewing Basic Information About a Queue
            • Queue Permission Management
            • Allocating a Queue to an Enterprise Project
            • Creating an SMN Topic
            • Managing Queue Tags
            • Setting Queue Properties
            • Testing Address Connectivity
            • Deleting a Queue
            • Changing the Specifications for a Standard Queue
            • Auto Scaling of Standard Queues
            • Setting a Scheduled Auto Scaling Task for a Standard Queue
            • Changing the CIDR Block for a Standard Queue
          • Example Use Case: Creating an Elastic Resource Pool and Running Jobs
          • Example Use Case: Configuring Scaling Policies for Queues in an Elastic Resource Pool
          • Creating a Non-Elastic Resource Pool Queue (Discarded and Not Recommended)
        • Creating Databases and Tables
          • Understanding Data Catalogs, Databases, and Tables
          • Creating a Database and Table on the DLI Console
          • Viewing Table Metadata
          • Managing Database Resources on the DLI Console
            • Configuring Database Permissions on the DLI Console
            • Deleting a Database on the DLI Console
            • Changing the Database Owner on the DLI Console
            • Managing Tags
          • Managing Table Resources on the DLI Console
            • Configuring Table Permissions on the DLI Console
            • Deleting a Table on the DLI Console
            • Changing the Table Owner on the DLI Console
            • Importing OBS Data to DLI
            • Exporting DLI Table Data to OBS
            • Previewing Table Data on the DLI Console
          • Creating and Using LakeFormation Metadata
            • Connecting DLI to LakeFormation
            • Permission Policies and Supported Actions for LakeFormation Resources
        • Data Migration and Transmission
          • Overview
          • Migrating Data from External Data Sources to DLI
            • Overview of Data Migration Scenarios
            • Using CDM to Migrate Data to DLI
            • Example Typical Scenario: Migrating Data from Hive to DLI
            • Example Typical Scenario: Migrating Data from Kafka to DLI
            • Example Typical Scenario: Migrating Data from Elasticsearch to DLI
            • Example Typical Scenario: Migrating Data from RDS to DLI
            • Example Typical Scenario: Migrating Data from GaussDB(DWS) to DLI
          • Configuring DLI to Read and Write Data from and to External Data Sources
            • Configuring DLI to Read and Write External Data Sources
            • Configuring the Network Connection Between DLI and Data Sources (Enhanced Datasource Connection)
              • Overview of Enhanced Datasource Connections
              • Creating an Enhanced Datasource Connection
              • Common Development Methods for DLI Cross-Source Analysis
            • Using DEW to Manage Access Credentials for Data Sources
            • Using DLI Datasource Authentication to Manage Access Credentials for Data Sources
              • Overview
              • Creating a CSS Datasource Authentication
              • Creating a Kerberos Datasource Authentication
              • Creating a Kafka_SSL Datasource Authentication
              • Creating a Password Datasource Authentication
              • Datasource Authentication Permission Management
            • Managing Enhanced Datasource Connections
              • Viewing Basic Information About an Enhanced Datasource Connection
              • Enhanced Connection Permission Management
              • Binding or Unbinding an Enhanced Datasource Connection to a Queue
              • Adding a Route for an Enhanced Datasource Connection
              • Deleting the Route for an Enhanced Datasource Connection
              • Modifying Host Information in an Elastic Resource Pool
              • Enhanced Datasource Connection Tag Management
              • Deleting an Enhanced Datasource Connection
            • Example Typical Scenario: Connecting DLI to a Data Source on a Private Network
            • Example Typical Scenario: Connecting DLI to a Data Source on a Public Network
        • Configuring an Agency to Allow DLI to Access Other Cloud Services
          • DLI Agency Overview
          • Creating a Custom DLI Agency
          • Agency Permission Policies in Common Scenarios
          • Example of Configuring DLI Agency Permissions in Typical Scenarios
        • Submitting a SQL Job Using DLI
          • Creating and Submitting a SQL Job
          • Exporting SQL Job Results
          • Creating a SQL Inspection Rule
          • Setting the Priority for a SQL Job
          • Querying Logs for SQL Jobs
          • Managing SQL Jobs
          • Viewing a SQL Execution Plan
          • Creating and Managing SQL Job Templates
            • Creating a SQL Job Template
            • Developing and Submitting a SQL Job Using a SQL Job Template
            • TPC-H Sample Data in the SQL Templates Preset on DLI
        • Submitting a Flink Job Using DLI
          • Flink Job Overview
          • Creating a Flink OpenSource SQL Job
          • Creating a Flink Jar Job
          • Configuring Flink Job Permissions
          • Managing Flink Jobs
            • Viewing Flink Job Details
            • Setting the Priority for a Flink Job
            • Enabling Dynamic Scaling for Flink Jobs
            • Querying Logs for Flink Jobs
            • Common Operations of Flink Jobs
          • Managing Flink Job Templates
          • Adding Tags to a Flink Job
        • Submitting a Spark Job Using DLI
          • Creating a Spark Job
          • Setting the Priority for a Spark Job
          • Querying Logs for Spark Jobs
          • Managing Spark Jobs
          • Managing Spark Job Templates
        • Submitting a DLI Job Using a Notebook Instance
        • Using Cloud Eye to Monitor DLI
        • Using CTS to Audit DLI
        • Permissions Management
          • Overview
          • Creating a Custom Policy
          • DLI Resources
          • DLI Request Conditions
          • Common Operations Supported by DLI System Policy
        • Common DLI Management Operations
          • Using a Custom Image to Enhance the Job Running Environment
          • Managing DLI Global Variables
          • Managing Program Packages of Jar Jobs
            • Package Management Overview
            • Creating a DLI Package
            • Configuring DLI Package Permissions
            • Changing the DLI Package Owner
            • Managing DLI Package Tags
            • DLI Built-in Dependencies
          • Managing DLI Resource Quotas
      • Best Practices
        • Overview
        • Analyzing Driving Behavior Data in IoV Scenarios Using DLI
        • Converting Data Format from CSV to Parquet
        • Analyzing E-Commerce BI Reports Using DLI
        • Analyzing Billing Consumption Data Using DLI
        • Analyzing Real-time E-Commerce Business Data Using DLI
        • Connecting BI Tools to DLI for Data Analysis
          • Overview
          • Configuring DBeaver to Connect to DLI for Data Query and Analysis
          • Configuring DBT to Connect to DLI for Data Scheduling and Analysis
          • Configuring Grafana to Connect to DLI for Data Query and Analysis
          • Configuring Yonghong BI to Connect to DLI for Data Query and Analysis
          • Configuring Superset to Connect to DLI for Data Query and Analysis
          • Configuring Power BI to Connect to DLI for Data Query and Analysis
          • Configuring FineBI to Connect to DLI Using Kyuubi for Data Query and Analysis
          • Configuring Tableau to Connect to DLI Using Kyuubi for Data Query and Analysis
          • Configuring Beeline to Connect to DLI Using Kyuubi for Data Query and Analysis
        • Configuring DLI Queue Network Connectivity
          • Configuring the Connection Between a DLI Queue and a Data Source in a Private Network
          • Configuring the Connection Between a DLI Queue and a Data Source in the Internet
      • Developer Guide
        • Connecting to DLI Using a Client
          • Submitting a SQL Job Using JDBC
            • Downloading and Installing the JDBC Driver Package
            • Connecting to DLI and Submitting SQL Jobs Using JDBC
            • APIs Supported by the DLI JDBC Driver
          • Submitting a Spark Jar Job Using Livy
        • SQL Jobs
          • Using Spark SQL Jobs to Analyze OBS Data
          • Developing a DLI SQL Job in DataArts Studio
          • Calling UDFs in Spark SQL Jobs
          • Calling UDTFs in Spark SQL Jobs
          • Calling UDAFs in Spark SQL Jobs
        • Flink Jobs
          • Stream Ecosystem
          • Flink OpenSource SQL Jobs
            • Reading Data from Kafka and Writing Data to RDS
            • Reading Data from Kafka and Writing Data to GaussDB(DWS)
            • Reading Data from Kafka and Writing Data to Elasticsearch
            • Reading Data from MySQL CDC and Writing Data to GaussDB(DWS)
            • Reading Data from PostgreSQL CDC and Writing Data to GaussDB(DWS)
            • Configuring High-Reliability Flink Jobs (Automatic Restart upon Exceptions)
          • Flink Jar Job Examples
          • Writing Data to OBS Using Flink Jar
          • Using Flink Jar to Connect to Kafka that Uses SASL_SSL Authentication
          • Using Flink Jar to Read and Write Data from and to DIS
          • Flink Job Agencies
            • Flink OpenSource SQL Jobs Using DEW to Manage Access Credentials
            • Flink Jar Jobs Using DEW to Acquire Access Credentials for Reading and Writing Data from and to OBS
            • Obtaining Temporary Credentials from a Flink Job's Agency for Accessing Other Cloud Services
        • Spark Jar Jobs
          • Using Spark Jar Jobs to Read and Query OBS Data
          • Using the Spark Job to Access DLI Metadata
          • Using Spark Jobs to Access Data Sources of Datasource Connections
            • Overview
            • Connecting to CSS
              • CSS Security Cluster Configuration
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
            • Connecting to GaussDB(DWS)
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
            • Connecting to HBase
              • MRS Configuration
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
              • Troubleshooting
            • Connecting to OpenTSDB
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
              • Troubleshooting
            • Connecting to RDS
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
            • Connecting to Redis
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
              • Troubleshooting
            • Connecting to Mongo
              • Scala Example Code
              • PySpark Example Code
              • Java Example Code
          • Spark Jar Jobs Using DEW to Acquire Access Credentials for Reading and Writing Data from and to OBS
          • Obtaining Temporary Credentials from a Spark Job's Agency for Accessing Other Cloud Services
      • SQL Syntax Reference
        • Spark SQL Syntax Reference
          • Common Configuration Items
          • Spark SQL Syntax
          • Spark Open Source Commands
          • Databases
            • Creating a Database
            • Deleting a Database
            • Viewing a Specified Database
            • Viewing All Databases
          • Tables
            • Creating an OBS Table
              • Creating an OBS Table Using the DataSource Syntax
              • Creating an OBS Table Using the Hive Syntax
            • Creating a DLI Table
              • Creating a DLI Table Using the DataSource Syntax
              • Creating a DLI Table Using the Hive Syntax
            • Deleting a Table
            • Viewing a Table
              • Viewing All Tables
              • Viewing Table Creation Statements
              • Viewing Table Properties
              • Viewing All Columns in a Specified Table
              • Viewing All Partitions in a Specified Table
              • Viewing Table Statistics
            • Modifying a Table
              • Adding a Column
              • Modifying Column Comments
              • Enabling or Disabling Data Multi-Versioning (Deprecated, Not Recommended)
            • Partition-related Syntax
              • Adding Partition Data (Only OBS Tables Supported)
              • Renaming a Partition (Only OBS Tables Supported)
              • Deleting a Partition
              • Deleting Partitions by Specifying Filter Criteria (Only OBS Tables Supported)
              • Altering the Partition Location of a Table (Only OBS Tables Supported)
              • Updating Partitioned Table Data (Only OBS Tables Supported)
              • Updating Table Metadata with REFRESH TABLE
            • Backing Up and Restoring Multi-Versioning Data (Deprecated, Not Recommended)
              • Setting the Retention Period of Multi-Versioning Backup Data (Deprecated, Not Recommended)
              • Viewing Multi-Versioning Backup Data (Deprecated, Not Recommended)
              • Restoring Multi-Versioning Backup Data (Deprecated, Not Recommended)
              • Configuring the Recycle Bin for Expired Multi-Versioning Data (Deprecated, Not Recommended)
              • Clearing Multi-Versioning Data (Deprecated, Not Recommended)
            • Table Lifecycle Management
              • Specifying the Lifecycle of a Table When Creating the Table
              • Modifying the Lifecycle of a Table
              • Disabling or Restoring the Lifecycle of a Table
          • Data
            • Importing Data
            • Inserting Data
            • Reusing Results of Subqueries
            • Clearing Data
          • Exporting Query Results
          • Datasource Connections
            • Creating a Datasource Connection with an HBase Table
              • Creating a DLI Table and Associating It with HBase
              • Inserting Data to an HBase Table
              • Querying an HBase Table
            • Creating a Datasource Connection with an OpenTSDB Table
              • Creating a DLI Table and Associating It with OpenTSDB
              • Inserting Data to the OpenTSDB Table
              • Querying an OpenTSDB Table
            • Creating a Datasource Connection with a DWS Table
              • Creating a DLI Table and Associating It with DWS
              • Inserting Data to the DWS Table
              • Querying the DWS Table
            • Creating a Datasource Connection with an RDS Table
              • Creating a DLI Table and Associating It with RDS
              • Inserting Data to the RDS Table
              • Querying the RDS Table
            • Creating a Datasource Connection with a CSS Table
              • Creating a DLI Table and Associating It with CSS
              • Inserting Data to the CSS Table
              • Querying the CSS Table
            • Creating a Datasource Connection with a DCS Table
              • Creating a DLI Table and Associating It with DCS
              • Inserting Data to a DCS Table
              • Querying the DCS Table
            • Creating a Datasource Connection with a DDS Table
              • Creating a DLI Table and Associating It with DDS
              • Inserting Data to the DDS Table
              • Querying the DDS Table
            • Creating a Datasource Connection with an Oracle Table
              • Creating a DLI Table and Associating It with Oracle
              • Inserting Data to an Oracle Table
              • Querying an Oracle Table
          • Views
            • Creating a View
            • Deleting a View
          • Viewing the Execution Plan
          • Data Permissions
            • Data Permissions List
            • Creating a Role
            • Deleting a Role
            • Binding a Role
            • Unbinding a Role
            • Displaying a Role
            • Granting a Permission
            • Revoking a Permission
            • Displaying the Granted Permissions
            • Displaying the Binding Relationship Between All Roles and Users
          • Data Types
            • Overview
            • Primitive Data Types
            • Complex Data Types
          • User-Defined Functions
            • Creating a Function
            • Deleting a Function
            • Displaying Function Details
            • Displaying All Functions
          • Built-In Functions
            • Date Functions
              • Overview
              • add_months
              • current_date
              • current_timestamp
              • date_add
              • dateadd
              • date_sub
              • date_format
              • datediff
              • datediff1
              • datepart
              • datetrunc
              • day/dayofmonth
              • from_unixtime
              • from_utc_timestamp
              • getdate
              • hour
              • isdate
              • last_day
              • lastday
              • minute
              • month
              • months_between
              • next_day
              • quarter
              • second
              • to_char
              • to_date
              • to_date1
              • to_utc_timestamp
              • trunc
              • unix_timestamp
              • weekday
              • weekofyear
              • year
            • String Functions
              • Overview
              • ascii
              • concat
              • concat_ws
              • char_matchcount
              • encode
              • find_in_set
              • get_json_object
              • instr
              • instr1
              • initcap
              • keyvalue
              • length
              • lengthb
              • levenshtein
              • locate
              • lower/lcase
              • lpad
              • ltrim
              • parse_url
              • printf
              • regexp_count
              • regexp_extract
              • replace
              • regexp_replace
              • regexp_replace1
              • regexp_instr
              • regexp_substr
              • repeat
              • reverse
              • rpad
              • rtrim
              • soundex
              • space
              • substr/substring
              • substring_index
              • split_part
              • translate
              • trim
              • upper/ucase
            • Mathematical Functions
              • Overview
              • abs
              • acos
              • asin
              • atan
              • bin
              • bround
              • cbrt
              • ceil
              • conv
              • cos
              • cot1
              • degrees
              • e
              • exp
              • factorial
              • floor
              • greatest
              • hex
              • least
              • ln
              • log
              • log10
              • log2
              • median
              • negative
              • percentile
              • percentile_approx
              • pi
              • pmod
              • positive
              • pow
              • radians
              • rand
              • round
              • shiftleft
              • shiftright
              • shiftrightunsigned
              • sign
              • sin
              • sqrt
              • tan
            • Aggregate Functions
              • Overview
              • avg
              • corr
              • count
              • covar_pop
              • covar_samp
              • max
              • min
              • percentile
              • percentile_approx
              • stddev_pop
              • stddev_samp
              • sum
              • variance/var_pop
              • var_samp
            • Window Functions
              • Overview
              • cume_dist
              • first_value
              • last_value
              • lag
              • lead
              • percent_rank
              • rank
              • row_number
            • Other Functions
              • Overview
              • decode1
              • javahash
              • max_pt
              • ordinal
              • trans_array
              • trunc_numeric
              • url_decode
              • url_encode
          • SELECT
            • Basic Statements
            • Sort
              • ORDER BY
              • SORT BY
              • CLUSTER BY
              • DISTRIBUTE BY
            • Grouping
              • Column-Based GROUP BY
              • Expression-Based GROUP BY
              • Using HAVING in GROUP BY
              • ROLLUP
              • GROUPING SETS
            • Joins
              • INNER JOIN
              • LEFT OUTER JOIN
              • RIGHT OUTER JOIN
              • FULL OUTER JOIN
              • IMPLICIT JOIN
              • Cartesian JOIN
              • LEFT SEMI JOIN
              • NON-EQUIJOIN
            • Clauses
              • FROM
              • OVER
              • WHERE
              • HAVING
              • Multi-Layer Nested Subquery
            • Alias
              • Table Alias
              • Column Alias
            • Set Operations
              • UNION
              • INTERSECT
              • EXCEPT
            • WITH...AS
            • CASE...WHEN
              • Basic CASE Statement
              • CASE Query Statement
          • Identifiers
            • aggregate_func
            • alias
            • attr_expr
            • attr_expr_list
            • attrs_value_set_expr
            • boolean_expression
            • class_name
            • col
            • col_comment
            • col_name
            • col_name_list
            • condition
            • condition_list
            • cte_name
            • data_type
            • db_comment
            • db_name
            • else_result_expression
            • file_format
            • file_path
            • function_name
            • groupby_expression
            • having_condition
            • hdfs_path
            • input_expression
            • input_format_classname
            • jar_path
            • join_condition
            • non_equi_join_condition
            • number
            • num_buckets
            • output_format_classname
            • partition_col_name
            • partition_col_value
            • partition_specs
            • property_name
            • property_value
            • regex_expression
            • result_expression
            • row_format
            • select_statement
            • separator
            • serde_name
            • sql_containing_cte_name
            • sub_query
            • table_comment
            • table_name
            • table_properties
            • table_reference
            • view_name
            • view_properties
            • when_expression
            • where_condition
            • window_function
          • Operators
            • Relational Operators
            • Arithmetic Operators
            • Logical Operators
        • Flink SQL Syntax Reference
          • Flink OpenSource SQL Syntax Reference
          • Flink OpenSource SQL 1.15 Syntax Reference
            • Constraints and Definitions
              • Supported Data Types
              • Reserved Keywords
              • Data Definition Language (DDL)
                • CREATE TABLE
                • CREATE CATALOG
                • CREATE DATABASE
                • CREATE VIEW
                • CREATE FUNCTION
              • Data Manipulation Language (DML)
            • Overview
            • Flink OpenSource SQL 1.15 Usage
            • Formats
              • Overview
              • Avro
              • Canal
              • Confluent Avro
              • CSV
              • Debezium
              • JSON
              • Maxwell
              • Ogg
              • ORC
              • Parquet
              • Raw
            • Connectors
              • Overview
              • BlackHole
              • ClickHouse
              • DataGen
              • Doris
                • Overview
                • Source Table
                • Result Table
                • Dimension Table
              • GaussDB(DWS)
                • Overview
                • GaussDB(DWS) Source Table (Not Recommended)
                • GaussDB(DWS) Result Table (Not Recommended)
                • GaussDB(DWS) Dimension Table (Not Recommended)
              • Elasticsearch
              • OBS
                • OBS Source Table
                • OBS Result Table
              • HBase
                • Source Table
                • HBase Result Table
                • Dimension Table
              • Hive
                • Creating a Hive Catalog
                • Hive Dialect
                • Hive Source Table
                • Result Table
                • Hive Dimension Table
                • Using Temporal Join to Associate the Latest Partition of a Dimension Table
                • Using Temporal Join to Associate the Latest Version of a Dimension Table
              • Hudi
                • Hudi Source Table
                • Hudi Result Table
              • JDBC
              • Kafka
              • MySQL CDC
              • Print
              • Redis
                • Source Table
                • Result Table
                • Dimension Table
              • Upsert Kafka
            • DML Syntax
              • SELECT
              • INSERT INTO
              • Set Operations
              • Window
                • Window Functions
                • Window Aggregation
                • Window Top-N
                • Window Deduplication
                • Window Join
              • Group Aggregation
              • Over Aggregation
              • JOIN
              • OrderBy & Limit
              • Top-N
              • Deduplication
            • Functions
              • UDFs
              • Type Inference
              • Parameter Transfer
              • Built-In Functions
                • Comparison Functions
                • Logical Functions
                • Arithmetic Functions
                • String Functions
                • Temporal Functions
                • Conditional Functions
                • Type Conversion Functions
                • Collection Functions
                • JSON Functions
                • Value Construction Functions
                • Value Retrieval Functions
                • Grouping Functions
                • Hash Functions
                • Aggregate Functions
                • Table-Valued Functions
                  • string_split
          • Flink OpenSource SQL 1.12 Syntax Reference
            • Constraints and Definitions
              • Supported Data Types
              • Syntax
                • Data Definition Language (DDL)
                  • CREATE TABLE
                  • CREATE VIEW
                  • CREATE FUNCTION
                • Data Manipulation Language (DML)
            • Overview
            • DDL Syntax
              • Creating Source Tables
                • DataGen Source Table
                • GaussDB(DWS) Source Table
                • HBase Source Table
                • JDBC Source Table
                • Kafka Source Table
                • MySQL CDC Source Table
                • Postgres CDC Source Table
                • Redis Source Table
                • Upsert Kafka Source Table
                • FileSystem Source Table
              • Creating Result Tables
                • BlackHole Result Table
                • ClickHouse Result Table
                • GaussDB(DWS) Result Table
                • Elasticsearch Result Table
                • HBase Result Table
                • JDBC Result Table
                • Kafka Result Table
                • Print Result Table
                • Redis Result Table
                • Upsert Kafka Result Table
                • FileSystem Result Table
              • Creating Dimension Tables
                • GaussDB(DWS) Dimension Table
                • HBase Dimension Table
                • JDBC Dimension Table
                • Redis Dimension Table
              • Format
                • Avro
                • Canal
                • Confluent Avro
                • CSV
                • Debezium
                • JSON
                • Maxwell
                • Raw
            • DML Syntax
              • SELECT
              • Set Operations
              • Window
              • JOIN
              • OrderBy & Limit
              • Top-N
              • Deduplication
            • Functions
              • User-Defined Functions (UDFs)
              • Type Inference
              • Parameter Transfer
              • Built-In Functions
                • Mathematical Operation Functions
                • String Functions
                • Temporal Functions
                • Conditional Functions
                • Type Conversion Functions
                • Collection Functions
                • Value Construction Functions
                • Value Access Functions
                • Hash Functions
                • Aggregate Functions
                • Table-Valued Functions
                  • string_split
          • Flink OpenSource SQL 1.10 Syntax Reference
            • Constraints and Definitions
              • Supported Data Types
              • Syntax Definition
                • Data Definition Language (DDL)
                  • CREATE TABLE
                  • CREATE VIEW
                  • CREATE FUNCTION
                • Data Manipulation Language (DML)
            • Flink OpenSource SQL 1.10 Syntax
            • Data Definition Language (DDL)
              • Creating a Source Table
                • Kafka Source Table
                • DIS Source Table
                • JDBC Source Table
                • GaussDB(DWS) Source Table
                • Redis Source Table
                • HBase Source Table
                • userDefined Source Table
              • Creating a Result Table
                • ClickHouse Result Table
                • Kafka Result Table
                • Upsert Kafka Result Table
                • DIS Result Table
                • JDBC Result Table
                • GaussDB(DWS) Result Table
                • Redis Result Table
                • SMN Result Table
                • HBase Result Table
                • Elasticsearch Result Table
                • OpenTSDB Result Table
                • User-Defined Result Table
                • Print Result Table
                • File System Result Table
              • Creating a Dimension Table
                • JDBC Dimension Table
                • GaussDB(DWS) Dimension Table
                • HBase Dimension Table
            • Data Manipulation Language (DML)
              • SELECT
              • Set Operations
              • Window
              • JOIN
              • OrderBy & Limit
              • Top-N
              • Deduplication
            • Functions
              • User-Defined Functions
              • Built-In Functions
                • Mathematical Operation Functions
                • String Functions
                • Temporal Functions
                • Conditional Functions
                • Type Conversion Functions
                • Collection Functions
                • Value Construction Functions
                • Value Access Functions
                • Hash Functions
                • Aggregate Functions
                • Table-Valued Functions
                  • split_cursor
                  • string_split
        • Hudi SQL Syntax Reference
          • Hudi Table Overview
            • Constraints on Hudi Tables
            • Query Type
            • Storage Structure
          • DLI Hudi Metadata
          • DLI Hudi Development Specifications
            • Overview
            • Hudi Data Table Design Specifications
              • Hudi Table Model Design Specifications
              • Hudi Table Index Design Specifications
              • Hudi Table Partition Design Specifications
            • Hudi Data Table Management Operation Specifications
              • Hudi Data Table Compaction Specifications
              • Hudi Data Table Clean Specifications
              • Hudi Data Table Archive Specifications
            • Spark on Hudi Development Specifications
              • Parameter Specifications for Creating a Hudi Table with SparkSQL
              • Parameter Specifications for Incremental Reading of Hudi Table Data with Spark
              • Parameter Specifications for Spark Asynchronous Task Execution Table Compaction
              • Spark Table Data Maintenance Specifications
            • Bucket Tuning
              • Tuning a Bucket Index Table
              • Hudi Table Initialization
              • Real-Time Job Ingestion
              • Offline Compaction Configuration
          • Using Hudi to Develop Jobs in DLI
            • Submitting a Spark SQL Job in DLI Using Hudi
            • Submitting a Spark Jar Job in DLI Using Hudi
            • Submitting a Flink SQL Job in DLI Using Hudi
            • Using HetuEngine on Hudi
          • DLI Hudi SQL Syntax Reference
            • Hudi DDL Syntax
              • CREATE TABLE
              • DROP TABLE
              • SHOW TABLE
              • TRUNCATE TABLE
            • Hudi DML Syntax
              • CREATE TABLE AS SELECT
              • INSERT INTO
              • MERGE INTO
              • UPDATE
              • DELETE
              • COMPACTION
              • ARCHIVELOG
              • CLEAN
              • CLEANARCHIVE
            • Hudi CALL COMMAND Syntax
              • CLEAN_FILE
              • SHOW_TIME_LINE
              • SHOW_HOODIE_PROPERTIES
              • ROLL_BACK
              • CLUSTERING
              • CLEANING
              • COMPACTION
              • SHOW_COMMIT_FILES
              • SHOW_FS_PATH_DETAIL
              • SHOW_LOG_FILE
              • SHOW_INVALID_PARQUET
            • Schema Evolution Syntax
              • ALTER COLUMN
              • ADD COLUMNS
              • RENAME COLUMN
              • RENAME TABLE
              • SET
              • DROP COLUMN
            • Configuring Default Values for Hudi Data Columns
          • Spark DataSource API Syntax Reference
            • API Syntax Description
            • Hudi Lock Configuration
          • Data Management and Maintenance
            • Hudi Compaction
            • Hudi Clean
            • Hudi Archive
            • Hudi Clustering
          • Typical Hudi Configuration Parameters
        • Delta SQL Syntax Reference
          • DLI Delta Table Overview
          • Using Delta to Develop Jobs in DLI
            • DLI Delta Metadata
            • Using Delta to Submit a Spark Jar Job in DLI
          • Delta Time Travel
            • Viewing History Operation Records of a Delta Table
            • Querying History Version Data of a Delta Table
            • Restoring a Delta Table to an Earlier State
          • Delta Cleansing and Optimization
          • Delta SQL Syntax Reference
            • Delta DDL Syntax
              • CREATE TABLE
              • DROP TABLE
              • DESCRIBE
              • ADD CONSTRAINT
              • DROP CONSTRAINT
              • CONVERT TO DELTA
              • SHALLOW CLONE
            • Delta DML Syntax
              • INSERT
              • CREATE TABLE AS SELECT
              • MERGE INTO
              • UPDATE
              • DELETE
              • VACUUM
              • RESTORE
              • OPTIMIZE
            • Schema Evolution Syntax
              • ALTER COLUMN
              • ADD COLUMNS
              • RENAME COLUMN
              • RENAME TABLE
              • DROP COLUMN
          • Typical Delta Configurations
          • DLI Delta FAQ
      • HetuEngine SQL Syntax Reference
        • HetuEngine SQL Syntax
          • Before You Start
          • Data Type
            • Data Types
            • Boolean
            • Integer
            • Fixed Precision
            • Float
            • Character
            • Time and Date Type
            • Complex Type
          • DDL Syntax
            • CREATE SCHEMA
            • CREATE TABLE
            • CREATE TABLE AS
            • CREATE TABLE LIKE
            • CREATE VIEW
            • ALTER TABLE
            • ALTER VIEW
            • ALTER SCHEMA
            • DROP SCHEMA
            • DROP TABLE
            • DROP VIEW
            • TRUNCATE TABLE
            • COMMENT
            • VALUES
            • SHOW Syntax Overview
            • SHOW SCHEMAS (DATABASES)
            • SHOW TABLES
            • SHOW TBLPROPERTIES TABLE|VIEW
            • SHOW TABLE/PARTITION EXTENDED
            • SHOW STATS
            • SHOW FUNCTIONS
            • SHOW PARTITIONS
            • SHOW COLUMNS
            • SHOW CREATE TABLE
            • SHOW VIEWS
            • SHOW CREATE VIEW
          • DML Syntax
            • INSERT
          • DQL Syntax
            • SELECT
            • WITH
            • GROUP BY
            • HAVING
            • UNION | INTERSECT | EXCEPT
            • ORDER BY
            • OFFSET
            • LIMIT | FETCH FIRST
            • TABLESAMPLE
            • UNNEST
            • JOINS
            • Subqueries
            • SELECT VIEW CONTENT
          • Auxiliary Command Syntax
            • DESCRIBE
            • DESCRIBE FORMATTED COLUMNS
            • DESCRIBE DATABASE|SCHEMA
            • EXPLAIN
            • ANALYZE
          • Reserved Keywords
          • SQL Functions and Operators
            • Logical Operators
            • Comparison Functions and Operators
            • Condition Expression
            • Lambda Expression
            • Conversion Functions
            • Mathematical Functions and Operators
            • Bitwise Functions
            • Decimal Functions and Operators
            • String Functions and Operators
            • Regular Expressions
            • Binary Functions and Operators
            • JSON Functions and Operators
            • Date and Time Functions and Operators
            • Aggregate Functions
            • Window Functions
            • Array Functions and Operators
            • Map Functions and Operators
            • URL Function
            • UUID Function
            • Color Function
            • Teradata Function
            • Data Masking Functions
            • IP Address Functions
            • Quantile Digest Functions
            • T-Digest Functions
        • Implicit Data Type Conversion
          • Introduction
          • Implicit Conversion Table
        • Appendix
          • Data Preparation for the Sample Table in This Document
          • Syntax Compatibility of Common Data Sources
      • API Reference
        • Before You Start
          • Overview
          • API Calling
          • Endpoints
          • Notes and Constraints
          • Basic Concepts
        • Overview
        • Calling APIs
          • Making an API Request
          • Authentication
          • Returned Values
        • Getting Started
          • Creating and Submitting a SQL Job
          • Creating and Submitting a Spark Job
          • Creating and Submitting a Flink Job
          • Creating and Using a Datasource Connection
        • Permission-related APIs
          • Granting Data Access Control to Users or Projects
          • Checking the Permissions Granted to a User
        • Global Variable-related APIs
          • Creating a Global Variable
          • Deleting a Global Variable
          • Modifying a Global Variable
          • Querying All Global Variables
        • APIs Related to Resource Tags
          • Batch Adding Resource Tags
          • Batch Deleting Resource Tags
          • Querying the Number of Resource Instances
          • Listing Resource Instances
          • Querying Tags of a Specified Resource Type
          • Querying Tags of a Specified Resource Instance
        • APIs Related to Enhanced Datasource Connections
          • Creating an Enhanced Datasource Connection
          • Deleting an Enhanced Datasource Connection
          • Listing Enhanced Datasource Connections
          • Querying an Enhanced Datasource Connection
          • Binding a Queue
          • Unbinding a Queue
          • Modifying Host Information
          • Querying Authorization of an Enhanced Datasource Connection
        • APIs Related to Elastic Resource Pools
          • Creating an Elastic Resource Pool
          • Querying All Elastic Resource Pools
          • Deleting an Elastic Resource Pool
          • Modifying Elastic Resource Pool Information
          • Querying All Queues in an Elastic Resource Pool
          • Associating a Queue with an Elastic Resource Pool
          • Viewing Scaling History of an Elastic Resource Pool
          • Modifying the Scaling Policy of a Queue Associated with an Elastic Resource Pool
        • Queue-related APIs (Recommended)
          • Creating a Queue
          • Deleting a Queue
          • Querying All Queues
          • Querying Queue Details
          • Restarting, Scaling Out, and Scaling In Queues
          • Creating an Address Connectivity Test Request
          • Querying Connectivity Test Details of a Specified Address
        • SQL Job-related APIs
          • Submitting a SQL Job (Recommended)
          • Canceling a Job (Recommended)
          • Querying All Jobs
          • Previewing SQL Job Query Results
          • Exporting Query Results
          • Querying Job Status
          • Querying Job Details
          • Checking SQL Syntax
          • Querying the Job Execution Progress
        • SQL Template-related APIs
          • Saving a SQL Template
          • Checking All SQL Templates
          • Updating a SQL Template
          • Deleting a SQL Template
        • Flink Job-related APIs
          • Creating a SQL Job
          • Updating a SQL Job
          • Creating a Flink Jar Job
          • Updating a Flink Jar Job
          • Running Jobs in Batches
          • Listing Jobs
          • Querying Job Details
          • Querying the Job Execution Plan
          • Batch Stopping Jobs
          • Deleting a Job
          • Batch Deleting Jobs
          • Exporting a Flink Job
          • Importing a Flink Job
          • Generating a Static Stream Graph for a Flink SQL Job
        • APIs Related to Flink Job Templates
          • Creating a Template
          • Updating a Template
          • Deleting a Template
          • Querying the Template List
        • Spark Job-related APIs
          • Creating a Batch Processing Job
          • Listing Batch Processing Jobs
          • Querying Batch Job Details
          • Querying a Batch Job Status
          • Canceling a Batch Processing Job
        • APIs Related to Spark Job Templates
          • Creating a Job Template
          • Listing Job Templates
          • Modifying a Job Template
          • Obtaining a Job Template
        • Permissions Policies and Supported Actions
        • Out-of-Date APIs
          • Agency-related APIs (Deprecated)
            • Obtaining DLI Agency Information (Deprecated)
            • Creating a DLI Agency (Deprecated)
          • Package Group-related APIs (Deprecated)
            • Uploading a Package Group (Deprecated)
            • Listing Package Groups (Deprecated)
            • Uploading a JAR Package Group (Deprecated)
            • Uploading a PyFile Package Group (Deprecated)
            • Uploading a File Package Group (Deprecated)
            • Querying Resource Packages in a Group (Deprecated)
            • Deleting a Resource Package from a Group (Deprecated)
            • Changing the Owner of a Group or Resource Package (Deprecated)
          • Spark Batch Processing-related APIs (Deprecated)
            • Querying Batch Job Logs (Deprecated)
          • SQL Job-related APIs (Deprecated)
            • Importing Data (Deprecated)
            • Exporting Data (Deprecated)
          • Resource-related APIs (Deprecated)
            • Database-related APIs (Deprecated)
              • Creating a Database (Deprecated)
              • Deleting a Database (Deprecated)
              • Querying All Databases (Deprecated)
              • Modifying a Database Owner (Deprecated)
            • Table-related APIs (Deprecated)
              • Creating a Table (Deprecated)
              • Deleting a Table (Deprecated)
              • Querying All Tables (Deprecated)
              • Describing Table Information (Deprecated)
              • Previewing Table Content (Deprecated)
              • Listing Partitions (Deprecated)
          • Permission-related APIs (Deprecated)
            • Granting Queue Permissions to a User (Deprecated)
            • Querying Queue Users (Deprecated)
            • Granting Data Permission to Users (Deprecated)
            • Querying Database Users (Deprecated)
            • Querying Table Users (Deprecated)
            • Querying a User's Table Permissions (Deprecated)
          • Queue-related APIs (Deprecated)
            • Creating a Scheduled CU Change (Deprecated)
            • Viewing a Scheduled CU Change (Deprecated)
            • Deleting Scheduled CU Changes in Batches (Deprecated)
            • Deleting a Scheduled CU Change (Deprecated)
            • Modifying a Scheduled CU Change (Deprecated)
          • Datasource Authentication-related APIs (Deprecated)
            • Creating Datasource Authentication (Deprecated)
            • Listing Datasource Authentication Information (Deprecated)
            • Updating Datasource Authentication (Deprecated)
            • Deleting Datasource Authentication (Deprecated)
          • Enhanced Datasource Connection-related APIs (Deprecated)
            • Creating a Route (Deprecated)
            • Deleting a Route (Deprecated)
          • Template-related APIs (Deprecated)
            • Querying All Sample SQL Templates (Deprecated)
          • Flink Job-related APIs (Deprecated)
            • Querying Job Monitoring Information (Deprecated)
            • Authorizing DLI to Access OBS (Deprecated)
        • Public Parameters
          • Status Codes
          • Error Codes
          • Obtaining a Project ID
          • Obtaining an Account ID
      • SDK Reference
        • Introduction to DLI SDKs
          • What Is DLI SDK
          • Content Navigation
        • Preparing the SDK Environment
          • Prerequisites
          • Configuring the Java Environment
          • Configuring the Python Environment
        • Mapping Between DLI SDKs and APIs
        • Java SDK
          • Instructions
          • Initializing the DLI Client
          • OBS Authorization
          • Queue-Related SDKs
          • Resource-Related SDKs
          • SDKs Related to SQL Jobs
            • Database-Related SDKs
            • Table-Related SDKs
            • Job-Related SDKs
          • SDKs Related to Flink Jobs
          • SDKs Related to Spark Jobs
          • SDKs Related to Flink Job Templates
        • Python SDK
          • Instructions
          • Initializing the DLI Client
          • Queue-Related SDKs
          • Resource-Related SDKs
          • SDKs Related to SQL Jobs
            • Database-Related SDKs
            • Table-Related SDKs
            • Job-Related SDKs
          • SDKs Related to Spark Jobs
        • Change History
      • FAQs
        • DLI Basics
          1. What Are the Differences Between DLI Flink and MRS Flink?
          2. What Are the Differences Between MRS Spark and DLI Spark?
          3. How Do I Upgrade the Engine Version of a DLI Job?
          4. Where Can Data Be Stored in DLI?
          5. Can I Import OBS Bucket Data Shared by Other Tenants into DLI?
          6. Can a Member Account Use Global Variables Created by Other Member Accounts?
          7. Is DLI Affected by the Apache Spark Command Injection Vulnerability (CVE-2022-33891)?
          8. How Do I Manage Jobs Running on DLI?
          9. How Do I Change the Field Names of an Existing Table on DLI?
        • DLI Elastic Resource Pools and Queues
          1. How Can I Check the Actual and Used CUs for an Elastic Resource Pool as Well as the Required CUs for a Job?
          2. How Do I Check for a Backlog of Jobs in the Current DLI Queue?
          3. How Do I View the Load of a DLI Queue?
          4. How Do I Monitor Job Exceptions on a DLI Queue?
          5. How Do I Migrate an Old Version Spark Queue to a General-Purpose Queue?
          6. How Do I Do If I Encounter a Timeout Exception When Executing DLI SQL Statements on the default Queue?
        • DLI Databases and Tables
          1. Why Am I Unable to Query a Table on the DLI Console?
          2. How Do I Do If the Compression Rate of an OBS Table Is High?
          3. How Do I Do If Inconsistent Character Encoding Leads to Garbled Characters?
          4. Do I Need to Regrant Permissions to Users and Projects After Deleting and Recreating a Table With the Same Name?
          5. How Do I Do If Files Imported Into a DLI Partitioned Table Lack Data for the Partition Columns, Causing Query Failures After the Import Is Completed?
          6. How Do I Fix Incorrect Data in an OBS Foreign Table Caused by Newline Characters in OBS File Fields?
          7. How Do I Prevent a Cartesian Product Query and Resource Overload Due to Missing "ON" Conditions in Table Joins?
          8. How Do I Do If I Can't Query Data After Manually Adding It to the Partition Directory of an OBS Table?
          9. Why Does the "insert overwrite" Operation Affect All Data in a Partitioned Table Instead of Just the Targeted Partition?
          10. Why Does the "create_date" Field in an RDS Table (Datetime Data Type) Appear as a Timestamp in DLI Queries?
          11. How Do I Do If Renaming a Table After a SQL Job Causes Incorrect Data Size?
          12. How Can I Resolve Data Inconsistencies When Importing Data from DLI to OBS?
          13. Why Is a Hudi Table Not Displayed on the DLI Console?
        • Enhanced Datasource Connections
          1. How Do I Do If I Can't Bind an Enhanced Datasource Connection to a Queue?
          2. How Do I Resolve a Failure in Connecting DLI to GaussDB(DWS) Through an Enhanced Datasource Connection?
          3. How Do I Do If the Datasource Connection Is Successfully Created but the Network Connectivity Test Fails?
          4. How Do I Configure Network Connectivity Between a DLI Queue and a Data Source?
          5. Why Is Creating a VPC Peering Connection Necessary for Enhanced Datasource Connections in DLI?
          6. How Do I Do If Creating a Datasource Connection in DLI Gets Stuck in the "Creating" State When Binding It to a Queue?
          7. How Do I Resolve the "communication link failure" Error When Using a Newly Created Datasource Connection That Appears to Be Activated?
          8. How Do I Troubleshoot a Connection Timeout Issue That Isn't Recorded in Logs When Accessing MRS HBase Through a Datasource Connection?
          9. How Do I Fix the "Failed to get subnet" Error When Creating a Datasource Connection in DLI?
          10. How Do I Do If I Encounter the "Incorrect string value" Error When Executing insert overwrite on a Datasource RDS Table?
          11. How Do I Resolve the Null Pointer Error When Creating an RDS Datasource Table?
          12. Error Message "org.postgresql.util.PSQLException: ERROR: tuple concurrently updated" Is Displayed When the System Executes insert overwrite on a Datasource GaussDB(DWS) Table
          13. RegionTooBusyException Is Reported When Data Is Imported to a CloudTable HBase Table Through a Datasource Table
          14. How Do I Do If A Null Value Is Written Into a Non-Null Field When Using a DLI Datasource Connection to Connect to a GaussDB(DWS) Table?
          15. How Do I Do If an Insert Operation Failed After the Schema of the GaussDB(DWS) Source Table Is Updated?
          16. How Do I Insert Data into an RDS Table with an Auto-Increment Primary Key Using DLI?
        • SQL Jobs
          • SQL Job Development
            1. SQL Jobs
            2. How Do I Merge Small Files?
            3. How Do I Use DLI to Access Data in an OBS Bucket?
            4. How Do I Specify an OBS Path When Creating an OBS Table?
            5. How Do I Create a Table Using JSON Data in an OBS Bucket?
            6. How Can I Use the count Function to Perform Aggregation?
            7. How Do I Synchronize DLI Table Data Across Regions?
            8. How Do I Insert Table Data into Specific Fields of a Table Using a SQL Job?
            9. How Do I Troubleshoot Slow SQL Jobs?
            10. How Do I View DLI SQL Logs?
            11. How Do I View SQL Execution Records in DLI?
            12. How Do I Do When Data Skew Occurs During the Execution of a SQL Job?
            13. Why Does a SQL Job That Has Join Operations Stay in the Running State?
            14. Why Is a SQL Job Stuck in the Submitting State?
          • SQL Job O&M
            1. Why Is Error "path obs://xxx already exists" Reported When Data Is Exported to OBS?
            2. Why Is Error "SQL_ANALYSIS_ERROR: Reference 't.id' is ambiguous, could be: t.id, t.id.;" Displayed When Two Tables Are Joined?
            3. Why Is Error "The current account does not have permission to perform this operation,the current account was restricted. Restricted for no budget." Reported when a SQL Statement Is Executed?
            4. Why Is Error "There should be at least one partition pruning predicate on partitioned table XX.YYY" Reported When a Query Statement Is Executed?
            5. Why Is Error "IllegalArgumentException: Buffer size too small. size" Reported When Data Is Loaded to an OBS Foreign Table?
            6. Why Is Error "DLI.0002 FileNotFoundException" Reported During SQL Job Running?
            7. Why Is a Schema Parsing Error Reported When I Create a Hive Table Using CTAS?
            8. Why Is Error "org.apache.hadoop.fs.obs.OBSIOException" Reported When I Run DLI SQL Scripts on DataArts Studio?
            9. Why Is Error "File not Found" Reported When I Access a SQL Job?
            10. Why Is Error "DLI.0003: AccessControlException XXX" Reported When I Access a SQL Job?
            11. Why Is Error "DLI.0001: org.apache.hadoop.security.AccessControlException: verifyBucketExists on {{bucket name}}: status [403]" Reported When I Access a SQL Job?
            12. Why Am I Seeing the Error Message "The current account does not have permission to perform this operation,the current account was restricted. Restricted for no budget" When Executing a SQL Statement?
        • Flink Jobs
          • Flink Job Consulting
            1. How Do I Authorize a Subuser to View Flink Jobs?
            2. How Do I Configure Auto Restart upon Exception for a Flink Job?
            3. How Do I Save Logs for Flink Jobs?
            4. Why Is Error "No such user. userName:xxxx." Reported on the Flink Job Management Page When I Grant Permission to a User?
            5. How Do I Restore a Flink Job from a Specific Checkpoint After Manually Stopping the Job?
            6. Why Is a Message Displayed Indicating That the SMN Topic Does Not Exist When I Use the SMN Topic in DLI?
          • Flink SQL Jobs
            1. How Do I Map an OBS Table to a DLI Partitioned Table?
            2. How Do I Change the Number of Kafka Partitions in a Flink SQL Job Without Stopping It?
            3. How Do I Fix the DLI.0005 Error When Using EL Expressions to Create a Table in a Flink SQL Job?
            4. Why Is No Data Queried in the DLI Table Created Using the OBS File Path When Data Is Written to OBS by a Flink Job Output Stream?
            5. Why Does a Flink SQL Job Fail to Be Executed, and Is "connect to DIS failed java.lang.IllegalArgumentException: Access key cannot be null" Displayed in the Log?
            6. Data Writing Fails After a Flink SQL Job Consumed Kafka and Sank Data to the Elasticsearch Cluster
            7. How Does Flink OpenSource SQL Parse Nested JSON?
            8. Why Is the Time Read by a Flink OpenSource SQL Job from the RDS Database Different from the RDS Database Time?
            9. Why Does Job Submission Fail When the failure-handler Parameter of the Elasticsearch Result Table for a Flink OpenSource SQL Job Is Set to retry_rejected?
            10. How Do I Configure Connection Retries for Kafka Sink If It Is Disconnected?
            11. How Do I Write Data to Different Elasticsearch Clusters in a Flink Job?
            12. Why Does DIS Stream Not Exist During Job Semantic Check?
            13. Why Is Error "Timeout expired while fetching topic metadata" Repeatedly Reported in Flink JobManager Logs?
          • Flink Jar Jobs
            1. Can I Upload Configuration Files for Flink Jar Jobs?
            2. Why Does a Flink Jar Package Conflict Result in Job Submission Failure?
            3. Why Does a Flink Jar Job Fail to Access GaussDB(DWS) and a Message Is Displayed Indicating Too Many Client Connections?
            4. Why Is Error Message "Authentication failed" Displayed During Flink Jar Job Running?
            5. Why Is Error Invalid OBS Bucket Name Reported After a Flink Job Submission Failed?
            6. Why Does the Flink Submission Fail Due to Hadoop JAR File Conflict?
            7. How Do I Locate a Flink Job Submission Error?
          • Flink Job Performance Tuning
            1. What Is the Recommended Configuration for a Flink Job?
            2. Flink Job Performance Tuning
            3. How Do I Prevent Data Loss After Flink Job Restart?
            4. How Do I Locate a Flink Job Running Error?
            5. How Can I Check if a Flink Job Can Be Restored From a Checkpoint After Restarting It?
            6. Why Are Logs Not Written to the OBS Bucket After a DLI Flink Job Fails to Be Submitted for Running?
            7. Why Is the Flink Job Abnormal Due to Heartbeat Timeout Between JobManager and TaskManager?
        • Spark Jobs
          • Spark Job Development
            1. Spark Jobs
            2. How Do I Use Spark to Write Data into a DLI Table?
            3. How Do I Set Up AK/SK So That a General Queue Can Access Tables Stored in OBS?
            4. How Do I View the Resource Usage of DLI Spark Jobs?
            5. How Do I Use Python Scripts to Access the MySQL Database If the pymysql Module Is Missing from the Spark Job Results Stored in MySQL?
            6. How Do I Run a Complex PySpark Program in DLI?
            7. How Do I Use JDBC to Set the spark.sql.shuffle.partitions Parameter to Improve the Task Concurrency?
            8. How Do I Read Uploaded Files for a Spark Jar Job?
            9. Why Are View Attributes Empty in Spark 3.3.1 Client?
            10. Why Can't I Find the Specified Python Environment After Adding the Python Package?
            11. Why Is a Spark Jar Job Stuck in the Submitting State?
          • Spark Job O&M
            1. What Can I Do When Receiving java.lang.AbstractMethodError in the Spark Job?
            2. Why Do I Get "ResponseCode: 403" and "ResponseStatus: Forbidden" Errors When a Spark Job Accesses OBS Data?
            3. Why Do I Encounter the Error "verifyBucketExists on XXXX: status [403]" When Using a Spark Job to Access an OBS Bucket That I Have Permission to Access?
            4. Why Does a Job Running Timeout Occur When Processing a Large Amount of Data with a Spark Job?
            5. Why Does a Spark Job Fail to Execute with an Abnormal Access Directory Error When Accessing Files in SFTP?
            6. Why Does the Job Fail to Be Executed Due to Insufficient Database and Table Permissions?
            7. Why Is the global_temp Database Missing in the Job Log of Spark 3.x?
            8. Why Does Using DataSource Syntax to Create an OBS Table of Avro Type Fail When Accessing Metadata With Spark 2.3.x?
            9. How Do I Resolve the "Input argument to rand must be a constant" Error in Spark 3.3.1 SQL Statements?
            10. When Using Spark 3.3.1 Client to Create a View and Perform a Join Query, an Error Occurs Stating "Not allowed to create a permanent view" When Attempting to Write the Data
        • DLI Resource Quotas
          1. What Is User Quota?
          2. How Do I View My Quotas?
          3. How Do I Apply for a Higher Quota?
        • DLI Permissions Management
          1. How Do I Do If I Receive an Error Message Stating That I Do Not Have Sufficient Permissions When Creating a Table After Upgrading the Engine Version of a Queue?
          2. What Is Column-Level Authorization for DLI Partitioned Tables?
          3. How Do I Do If I Encounter Insufficient Permissions While Updating Packages?
          4. Why Is Error "DLI.0003: Permission denied for resource..." Reported When I Run a SQL Statement?
          5. How Do I Do If I Can't Query Table Data After Being Granted Table Permissions?
          6. Will Granting Duplicate Permissions to a Table After Inheriting Database Permissions Cause an Error?
          7. Why Can't I Query a View After I'm Granted the Select Table Permission on the View?
          8. How Do I Do If I Receive a Message Saying I Don't Have Sufficient Permissions to Submit My Jobs to the Job Bucket?
          9. How Do I Resolve an Unauthorized OBS Bucket Error?
        • DLI APIs
          1. How Do I Obtain the AK/SK Pair?
          2. How Do I Obtain the Project ID?
          3. Why Is Error "unsupported media Type" Reported When I Submit a SQL Job?
          4. What Can I Do If an Error Is Reported When the Execution of the API for Creating a SQL Job Times Out?
          5. How Can I Fix Garbled Chinese Characters Returned by an API?
      • Videos