
Migrating Full Data

Updated on 2024-12-20 GMT+08:00

Migrate all data from source databases to Huawei Cloud DLI.

Prerequisites

Procedure

  1. Sign in to the MgC console.
  2. In the navigation pane on the left, choose Migrate > Big Data Migration. In the upper left corner of the page, select the migration project created in Preparations.
  3. In the upper right corner of the page, click Create Migration Task.

  4. Select MaxCompute for Source Component, Data Lake Insight (DLI) for Target Component, Full data migration for Task Type, and click Next.

  5. Configure parameters required for creating a full data migration task based on Table 1.

    Table 1 Parameters required for creating a full data migration task

    Basic Settings

    • Task Name: The default name is Full-data-migration-from-MaxCompute-to-DLI- followed by four random characters (letters and digits). You can also enter a custom name.

    • Edge Device: Select the Edge device you connected to MgC in Making Preparations.

    Source Settings

    • Source Connection: Select the source connection you created.

    • Estimated Project Period (Day) (Optional): If this parameter is set, the system checks each table's lifecycle during the migration. If a table's lifecycle ends before the expected end time of the project, the table is skipped. If this parameter is not set, all tables are migrated by default.

    • MaxCompute Parameters (Optional): These parameters are usually left blank. If needed, configure them by referring to the MaxCompute documentation.
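The lifecycle check described for Estimated Project Period (Day) can be sketched as follows. This is a simplified illustration of the selection rule, not MgC's actual implementation; the table names and lifecycle dates are hypothetical.

```python
from datetime import date, timedelta

def tables_to_migrate(tables, project_period_days=None, today=None):
    """Keep only tables whose lifecycle does not end before the expected
    project end date. If no period is set, all tables are migrated."""
    today = today or date.today()
    if project_period_days is None:
        return list(tables)
    project_end = today + timedelta(days=project_period_days)
    return [t for t in tables if t["lifecycle_end"] >= project_end]

# Hypothetical tables: one outlives the project, one expires before it ends.
tables = [
    {"name": "t_keep", "lifecycle_end": date(2030, 1, 1)},
    {"name": "t_skip", "lifecycle_end": date(2024, 1, 1)},
]
```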

    Data Scope

    • By database: Enter the names of the databases to be migrated in the Include Databases text box. Click Add to add more entries. A maximum of 10 databases can be added. If there are tables you do not want to migrate, download the template in CSV format, add information about these tables to the template, and upload the template to MgC. For details, see steps 2 to 5 under By table.

    • By table:

      1. Download the template in CSV format.
      2. Open the downloaded CSV template file with Notepad.
        CAUTION: Do not use Excel to edit the CSV template file. A template file edited and saved in Excel cannot be identified by MgC.
      3. Retain the first line in the CSV template file. From the second line onwards, enter the information about the tables to be migrated in the format {MaxCompute project name},{Table name}, where MaxCompute project name is the name of the MaxCompute project to be migrated and Table name is the name of the data table to be migrated.
        NOTICE:
        • Use a comma (,) to separate the MaxCompute project name from the table name in each line. Do not use spaces or other separators.
        • After adding the information about a table, press Enter to start a new line.
      4. After all table information is added, save the changes to the CSV file.
      5. Upload the edited and saved CSV file to MgC.
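As an illustration, a table list prepared according to the steps above could be generated like this. The header text, project names, and table names are made-up placeholders; keep the header line from the downloaded template instead.

```python
# Hypothetical (project, table) pairs to migrate; replace with your own.
tables = [
    ("mc_project_a", "orders"),
    ("mc_project_a", "customers"),
    ("mc_project_b", "events_2024"),
]

with open("table_list.csv", "w") as f:
    f.write("project_name,table_name\n")  # placeholder for the template's first line
    for project, table in tables:
        # One entry per line: comma only, no spaces or other separators.
        f.write(f"{project},{table}\n")
```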

    Target Settings

    • Target Connection: Select the DLI connection with a general queue created in Creating a Target Connection.

      CAUTION: Do not select a DLI connection with a SQL queue configured.

    • Custom Parameters (Optional): Configure the parameters as needed. For details, see Configuration parameter description and Custom Parameters.

      • If the migration is performed over the Internet, set the following four parameters:

        • spark.dli.metaAccess.enable: Enter true.
        • spark.dli.job.agency.name: Enter the name of the DLI agency you configured.
        • mgc.mc2dli.data.migration.dli.file.path: Enter the OBS path for storing the migration-dli-spark-1.0.0.jar package. For example, obs://mgc-test/data/migration-dli-spark-1.0.0.jar
        • mgc.mc2dli.data.migration.dli.spark.jars: Enter the OBS paths for storing the fastjson-1.2.54.jar and datasource.jar packages. The value is transferred in array format: package paths must be enclosed in double quotation marks and separated with commas (,). For example: ["obs://mgc-test/data/datasource.jar","obs://mgc-test/data/fastjson-1.2.54.jar"]

      • If the migration is performed over a private network, set the four parameters above plus the following four:

        • spark.sql.catalog.mc_catalog.tableWriteProvider: Enter tunnel.
        • spark.sql.catalog.mc_catalog.tableReadProvider: Enter tunnel.
        • spark.hadoop.odps.end.point: Enter the VPC endpoint of the region where the source MaxCompute service is provisioned. For details about the MaxCompute VPC endpoint in each region, see Endpoints in different regions (VPC). For example, if the source MaxCompute service is located in Hong Kong, China, enter http://service.cn-hongkong.maxcompute.aliyun-inc.com/api.
        • spark.hadoop.odps.tunnel.end.point: Enter the VPC Tunnel endpoint of the region where the source MaxCompute service is located. For details about the MaxCompute VPC Tunnel endpoint in each region, see Endpoints in different regions (VPC). For example, if the source MaxCompute service is located in Hong Kong, China, enter http://dt.cn-hongkong.maxcompute.aliyun-inc.com.
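Assembled as a single configuration map, the custom parameters above might look like the sketch below. The agency name is a placeholder, and the OBS paths and endpoints are the example values from this section; substitute your own.

```python
import json

# Example OBS paths from this section; replace with your own bucket paths.
jars = [
    "obs://mgc-test/data/datasource.jar",
    "obs://mgc-test/data/fastjson-1.2.54.jar",
]

# Parameters for a migration over the Internet.
custom_params = {
    "spark.dli.metaAccess.enable": "true",
    "spark.dli.job.agency.name": "my-dli-agency",  # placeholder agency name
    "mgc.mc2dli.data.migration.dli.file.path":
        "obs://mgc-test/data/migration-dli-spark-1.0.0.jar",
    # Array format: paths in double quotes, separated by commas, no spaces.
    "mgc.mc2dli.data.migration.dli.spark.jars":
        json.dumps(jars, separators=(",", ":")),
}

# A private-network migration additionally needs the provider and endpoint
# settings (Hong Kong endpoints shown as the example region).
custom_params.update({
    "spark.sql.catalog.mc_catalog.tableWriteProvider": "tunnel",
    "spark.sql.catalog.mc_catalog.tableReadProvider": "tunnel",
    "spark.hadoop.odps.end.point":
        "http://service.cn-hongkong.maxcompute.aliyun-inc.com/api",
    "spark.hadoop.odps.tunnel.end.point":
        "http://dt.cn-hongkong.maxcompute.aliyun-inc.com",
})
```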

    Migration Settings

    • Large Table Migration Rules: Control how large a table must be before it is split into multiple migration subtasks. You are advised to retain the default settings, but you can change them as needed.

    • Small Table Migration Rules: Control how small a table must be before it is merged with other small tables into one migration subtask, which can accelerate the migration. You are advised to retain the default settings, but you can change them as needed.

    • Concurrency: Set the number of concurrent migration subtasks. The default value is 3. The value ranges from 1 to 10.

    • Max. SQL Statements Per File: Migration commands are run as generated SQL statements. This setting limits how many SQL statements can be stored in a single file. The default value is 3. The value ranges from 1 to 50.
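As a back-of-the-envelope illustration of Max. SQL Statements Per File (not MgC internals): the number of generated SQL files is the statement count divided by the per-file limit, rounded up.

```python
import math

def sql_file_count(num_statements, max_per_file=3):
    """Number of SQL files needed when each file holds at most
    max_per_file statements (default 3, valid range 1 to 50)."""
    if not 1 <= max_per_file <= 50:
        raise ValueError("Max. SQL Statements Per File must be 1-50")
    return math.ceil(num_statements / max_per_file)
```

With the default limit of 3, ten statements would span four files; raising the limit to 50 would fit them all in one.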

  6. After the configuration is complete, execute the task.

    NOTICE:
    • A migration task can be executed repeatedly. Each time a migration task is executed, a task execution is generated.
    • You can click the task name to modify the task configuration.
    • You can select Run immediately and click Save to create the task and execute it immediately. You can view the created task on the Tasks page.

    • You can also click Save to just create the task. You can view the created task on the Tasks page. To execute the task, click Execute in the Operation column.

  7. After the migration task is executed, click View Executions in the Operation column. On the Task Executions tab, you can view the details of the running task execution and all historical executions.

    Click Execute Again in the Status column to run the execution again.

    Click View in the Progress column. On the displayed Progress Details page, view and export the task execution results.

  8. (Optional) After the data migration is complete, verify data consistency between the source and the target databases. For details, see Verifying the Consistency of Data Migrated from MaxCompute to DLI.
