Exporting SQL Job Results

Updated on 2024-09-30 GMT+08:00

You can store the results of analyzed SQL jobs in a specified location, in the format of your choice.

By default, DLI stores SQL job results in its job bucket. You can also download job results to a local host or export job results to a specified OBS bucket.

Exporting Job Results to the DLI Job Bucket

DLI specifies a default OBS bucket for storing job results. You can configure the bucket information on the Global Configuration > Project page of the DLI management console. Once a job is complete, the system automatically stores its results to this bucket.

The following conditions must be met if you want to read job results from the DLI job bucket:
  • You have configured the job bucket on the Global Configuration > Project page of the DLI management console by referring to Configuring a DLI Job Bucket.
  • You have submitted a service ticket to request the whitelisting of the feature that allows writing job results to buckets.
  • The user who executes jobs has been granted read and write permissions either on the job bucket or on the jobs/result path of the job bucket.

    For details, see Creating a Custom Bucket Policy.

For how to obtain job results from the DLI job bucket, see "Object Management" > Downloading Objects in Object Storage Service User Guide.
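Besides the console, job results can be fetched programmatically. The sketch below assumes the OBS Python SDK (esdk-obs-python), whose ObsClient.getObject call writes an object to a local file when downloadPath is given; the bucket name and object key are placeholders you would replace with your job bucket and the jobs/result path mentioned above.

```python
def download_job_result(obs_client, bucket, object_key, local_path):
    """Download a single job-result object from the DLI job bucket.

    obs_client is assumed to be an esdk-obs-python ObsClient instance.
    getObject writes the object to local_path when downloadPath is set.
    """
    resp = obs_client.getObject(bucket, object_key, downloadPath=local_path)
    # esdk-obs-python responses carry an HTTP-style status code
    if resp.status >= 300:
        raise RuntimeError(f"download failed with status {resp.status}")
    return local_path
```

In practice you would construct the client with your access key, secret key, and regional OBS endpoint before calling this helper.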

Exporting Job Results to a Specified Location in Another Bucket

In addition to storing job results in the default bucket, you can export them to a specified location in another bucket, giving you more flexibility in organizing and managing them.
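The two export modes differ only in whether the results land in a newly created directory or directly in an existing one. As a sketch (the helper name is illustrative, and test.csv mirrors the result-file name used in the examples on this page), the resulting object path can be built like this:

```python
def export_path(base_path, result_file, new_directory=None):
    """Build the OBS path a job result is exported to.

    base_path     -- OBS directory chosen as Storage Path, e.g. obs://bucket/src1/
    result_file   -- name of the exported result file, e.g. test.csv
    new_directory -- set for the "New OBS directory" mode; this directory
                     must not already exist under base_path
    """
    path = base_path if base_path.endswith("/") else base_path + "/"
    if new_directory:
        path += new_directory.rstrip("/") + "/"
    return path + result_file
```

For example, `export_path("obs://bucket/src1/", "test.csv", new_directory="src2")` yields the new-directory path, while omitting `new_directory` yields the overwrite-mode path.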

On the console, you can only view a maximum of 1,000 job results. To view additional results, you can export them to an OBS path. The procedure is as follows:

You can export job results on either the SQL Jobs or the SQL Editor page.

  • SQL Jobs page: In the navigation pane on the left, choose Job Management > SQL Jobs. On the displayed page, locate the row containing a desired job, click More in the Operation column, and select Export Result.
  • SQL Editor page: In the navigation pane on the left, choose SQL Editor. On the displayed page, once query statements are successfully executed, click the export icon next to the View Result tab to export job results.
NOTE:
  • If there are no numerical columns in the query results, job results cannot be exported.
  • Ensure that the user who exports job results has the read and write permissions on the OBS bucket.
Table 1 Parameters

  • Data Format (mandatory)

    Data format of the job results to be exported. The options are json and csv.

  • Queues (mandatory)

    Queue on which the job is executed. SQL jobs can only be executed on SQL queues.

  • Compression Format (optional)

    Compression format of the data to be exported. The options are none, bzip2, deflate, and gzip.

  • Storage Path (mandatory)

    Path in an OBS bucket to which the job results are exported.

    • If Export Mode is set to New OBS directory, you need to manually enter a directory name and ensure that the directory does not already exist. Otherwise, the system returns an error message and the export cannot be performed.

      NOTE: The directory name cannot contain special characters (\/:*?"<>|) and cannot start or end with a period (.).

      For example, after selecting the storage path obs://bucket/src1/, manually enter a directory name to change the path to obs://bucket/src1/src2/ and ensure that the src2 directory does not already exist under src1. The job results are then exported to obs://bucket/src1/src2/test.csv.

    • If Export Mode is set to Existing OBS directory (Overwritten), the job results are exported to the selected bucket path, and any files with the same name are automatically overwritten.

      For example, if you select obs://bucket/src1/ as the bucket path, the job results are exported to obs://bucket/src1/test.csv.

  • Export Mode (mandatory)

    • New OBS directory: A new folder path is created and the job results are saved to it. Use this mode to save exported results to a new location, making them easier to manage and track. You must manually enter an export directory in Storage Path and ensure that the directory does not already exist; otherwise, the system displays an error message and the export cannot be performed.

    • Existing OBS directory (Overwritten): An existing file path is used as the output directory. If a file with the same name exists in that path, it is automatically overwritten by the newly exported result file. Use this mode when you only need to keep a single result file in the path and do not need old job results.

  • Number of Results (optional)

    Number of results to export. If you leave this blank or set it to 0, all results are exported.

  • Table Header (optional)

    Whether the exported job results contain table headers.
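The directory-name constraint from the Storage Path note above can be checked before submitting the export. A minimal sketch (the helper name is illustrative):

```python
def is_valid_export_directory(name):
    """Check a new export directory name against the documented rules:
    it cannot contain the characters \\/:*?"<>| and cannot start or
    end with a period (.)."""
    forbidden = set('\\/:*?"<>|')
    if not name or name.startswith(".") or name.endswith("."):
        return False
    return not any(ch in forbidden for ch in name)
```

Note that this only validates the name itself; whether the directory already exists under the chosen path still has to be checked against OBS.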

Exporting Job Results to a Local Host

You can download the results of asynchronous DDL and QUERY statements to a local directory. By default, you can download a maximum of 1,000 data records to a local host.

The procedure is as follows:

  1. Locate the row containing a desired job whose asynchronous DDL or QUERY statement has been successfully executed, click More in the Operation column, and select Submit Download Request. In the displayed dialog box, click OK. After a few seconds, the Submit Download Request button changes to Download.
    Figure 1 Selecting Submit Download Request
  2. Click Download to download the results to your local host.

