Dumping Data to MRS
Prerequisites
DIS cannot dump data to MRS 3.x or later. Kerberos authentication must be disabled for the MRS cluster to which data is to be dumped.
Source Data Type: JSON, BLOB, and CSV; Dump File Format: Text
| Parameter | Description | Value |
|---|---|---|
| Task Name | Name of the dump task. The names of dump tasks created for the same stream must be unique. A dump task name is 1 to 64 characters long and can contain only letters, digits, hyphens (-), and underscores (_). | - |
| MRS Cluster | Click Select. In the Select MRS Cluster dialog box, select an MRS cluster. Data can be dumped only to an MRS cluster that is not authenticated by Kerberos. You can only select, not enter, a value in this field. | - |
| HDFS Path | Click Select. In the Select HDFS Path dialog box, select an HDFS path. You can only select, not enter, a value in this field. | This parameter is available only after you select an MRS cluster. |
| File Directory | Directory created in MRS to store files from the DIS stream. The directory name contains 0 to 50 characters. By default, this parameter is left unspecified. | - |
| Offset | | Latest |
| Dump Interval (s) | Interval at which data from the DIS stream is imported into the dump destination, such as OBS, MRS, DLI, or DWS. If no data is pushed to the DIS stream during the specified interval, no dump file is generated. Value range: 30 to 900. Unit: second. Default value: 300. | - |
| Temporary Bucket | OBS bucket in which a directory is created for temporarily storing user data. The data in the directory is deleted after being dumped to the specified destination. | - |
| Temporary Directory | Directory in the chosen Temporary Bucket for temporarily storing data. The data in the directory is deleted after being dumped to the specified destination. If this field is left blank, the data is stored directly in the Temporary Bucket. | - |
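The constraints in this table (task name pattern, directory length, interval range) are easy to get wrong when creating dump tasks programmatically. The sketch below is illustrative only and does not use the official DIS SDK; the dictionary field names are hypothetical, but the validation rules mirror the table above.

```python
import re

# Hypothetical helper (not the official DIS SDK): validate the console
# parameters from the table above before submitting an MRS dump task.
TASK_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def build_mrs_dump_task(task_name: str, hdfs_path: str,
                        file_directory: str = "",
                        dump_interval_s: int = 300) -> dict:
    """Assemble an MRS dump-task definition, enforcing the documented limits."""
    if not TASK_NAME_RE.match(task_name):
        raise ValueError("Task Name must be 1-64 chars: letters, digits, '-', '_'")
    if not 30 <= dump_interval_s <= 900:
        raise ValueError("Dump Interval must be 30-900 seconds (default 300)")
    if len(file_directory) > 50:
        raise ValueError("File Directory must be at most 50 characters")
    return {
        "task_name": task_name,                   # field names are illustrative,
        "mrs_hdfs_path": hdfs_path,               # not the real API schema
        "file_directory": file_directory,
        "deliver_time_interval": dump_interval_s,
    }

# Example: a task that dumps every 5 minutes (the default interval).
task = build_mrs_dump_task("dis-to-mrs_01", "/user/dis", "stream-files")
```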
Source Data Type: JSON and CSV; Dump File Format: Parquet
| Parameter | Description | Value |
|---|---|---|
| Source Data Schema | JSON or CSV data sample used to describe the source data format. DIS generates an Avro schema based on the sample and converts the uploaded JSON or CSV data to the Parquet format. | - |
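For orientation, the sketch below shows a JSON sample record and the kind of Avro schema that could be derived from it. This is an illustration only: DIS's actual schema inference rules and record naming are not documented on this page, but the output follows standard Avro schema syntax.

```python
import json

# Illustration only: a JSON sample record and a plausible Avro schema
# derived from it (the exact inference rules are an assumption).
sample_record = {"device_id": "d-001", "temperature": 21.5, "ok": True}

avro_schema = {
    "type": "record",
    "name": "DisRecord",          # record name chosen for illustration
    "fields": [
        {"name": "device_id",   "type": "string"},
        {"name": "temperature", "type": "double"},
        {"name": "ok",          "type": "boolean"},
    ],
}
print(json.dumps(avro_schema, indent=2))
```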
Source Data Type: JSON and CSV; Dump File Format: CarbonData
| Parameter | Description | Value |
|---|---|---|
| Source Data Schema | JSON or CSV data sample used to describe the source data format. DIS generates an Avro schema based on the sample and converts the uploaded JSON or CSV data to the CarbonData format. | - |
| CarbonData Retrieval Attribute | Attribute of the carbon table, used to create a carbon writer. The following keys are supported: | - |
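The list of supported keys is not reproduced on this page. Purely as an illustration of the shape such an attribute might take, the sketch below uses SORT_COLUMNS, a standard CarbonData table property; whether DIS accepts this exact key is an assumption.

```python
import json

# Illustration only: the supported key set is not listed on this page.
# SORT_COLUMNS is a standard CarbonData table property, used here as a
# stand-in; treat the exact keys DIS accepts as an assumption.
carbon_retrieval_attribute = {"SORT_COLUMNS": "device_id"}
print(json.dumps(carbon_retrieval_attribute))
```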