Migrating ClickHouse Data

Updated on 2024-12-24 GMT+08:00

This section describes how to migrate ClickHouse data between nodes of a CloudTable cluster using the console.

Application Scenarios

After you scale out a ClickHouse cluster by adding nodes, the new nodes do not contain any data. Migrate data from the existing nodes to the new nodes so that data is evenly distributed across the cluster.
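Before creating a migration task, you can check how data is currently spread across nodes. The following is a minimal sketch, not part of the console workflow; it assumes network access to a ClickHouse node, the third-party clickhouse-driver Python package, a ClickHouse version that supports the clusterAllReplicas table function, and a logical cluster named default_cluster (all placeholders to adjust for your environment).

    from clickhouse_driver import Client

    # Placeholder connection settings; replace with the access address and
    # credentials of your CloudTable ClickHouse cluster.
    client = Client(host="<clickhouse-node-ip>", port=9000,
                    user="<user>", password="<password>")

    # Sum the on-disk size of active data parts on every node in the logical
    # cluster. Nodes added by a scale-out typically report little or no data
    # until a migration task has been run.
    rows = client.execute(
        "SELECT hostName() AS host, formatReadableSize(sum(bytes_on_disk)) AS size "
        "FROM clusterAllReplicas('default_cluster', system.parts) "
        "WHERE active GROUP BY host ORDER BY host"
    )
    for host, size in rows:
        print(host, size)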

Precautions

  • Each data table can be included in only one migration task, and a cluster can run only one migration task at a time.
  • The database of the local table must use the Atomic (default) or Ordinary engine, and the table must use an engine from the MergeTree family, including replicated and non-replicated MergeTree engines. Materialized views are not supported. (A query sketch for checking these engine requirements follows this list.)
  • The replication relationships of local tables mirror those of the cluster. Across shards, data is accessed through distributed tables.
  • By default, the original table becomes read-only during data migration.
  • Data is first migrated to a temporary table, and then the original table is exchanged with the table that contains the migrated data. The exchange is completed within seconds, but queries executed during the exchange may read incorrect data.
  • If a cluster fault occurs, the data migration can be paused. After the error reported for the cluster is rectified, the migration task can be resumed.
  • The same tables must exist on both the source nodes and the redistribution nodes.
  • Data migration is not supported in single-node clusters.
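The engine requirements above can be checked against the ClickHouse system tables before you create a task. The following is a minimal sketch under the same assumptions as before (clickhouse-driver package, placeholder connection settings); the database name is a placeholder.

    from clickhouse_driver import Client

    # Placeholder connection settings and database name.
    client = Client(host="<clickhouse-node-ip>", port=9000,
                    user="<user>", password="<password>")
    database = "your_database"

    # The database engine must be Atomic (the default) or Ordinary.
    for (engine,) in client.execute(
        f"SELECT engine FROM system.databases WHERE name = '{database}'"
    ):
        print("database engine:", engine)

    # Candidate tables must use a MergeTree-family engine (replicated or not).
    # Materialized views (engine = 'MaterializedView') are excluded by the filter.
    for name, engine in client.execute(
        f"SELECT name, engine FROM system.tables "
        f"WHERE database = '{database}' AND engine LIKE '%MergeTree%'"
    ):
        print(name, engine)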

Creating a Data Migration Task

  1. Log in to the CloudTable console.
  2. In the upper left corner, select a region.
  3. Click Cluster Management and click a cluster name to go to the cluster details page.
  4. In the navigation pane, choose Data Migration.

    Table 1 Data migration parameters

    • Task ID/Name: ID or name of the new migration task.
    • Logical Cluster: Name of the selected logical cluster.
    • Source Nodes: Node where data is stored.
    • Distribution Nodes: Node where data is distributed.
    • Status/Progress: Status and progress of data distribution. The task can be in the initializing, running, or completed state.
    • Created: Task creation time.
    • Start Time: Task start time.
    • Update Time: Task modification time.
    • Operation:
      • Start: Start the task.
      • Edit: Edit task information.
      • Cancel: Cancel the task.
      • Details: View task details.
      • Delete: Delete the task.

  5. Click New Task in the upper left corner.

    1. Enter a task name (starting with a letter).
    2. Select a logical cluster.
    3. Select the migration percentage.
    4. Select the source node.
    5. Select a redistribution node.
    6. Select the data table to be migrated.

  6. Click OK to create the task.
  7. Click Start in the Operation column to start the created task.
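After the task reaches the completed state, you can optionally run a quick consistency check, for example by looking at how rows of the migrated table are now spread across nodes and by comparing the total row count with a value recorded before the migration. The sketch below makes the same assumptions as the earlier ones; the cluster, database, and table names are placeholders.

    from clickhouse_driver import Client

    # Placeholder connection settings and object names.
    client = Client(host="<clickhouse-node-ip>", port=9000,
                    user="<user>", password="<password>")
    cluster = "default_cluster"
    local_table = "your_database.your_local_table"

    # Per-node row count of the migrated local table; after redistribution the
    # rows should be spread across the source and redistribution nodes.
    for host, count in client.execute(
        f"SELECT hostName() AS host, count() AS rows "
        f"FROM clusterAllReplicas('{cluster}', {local_table}) "
        f"GROUP BY host ORDER BY host"
    ):
        print(host, count)

    # Total row count across shards (one replica per shard); compare it with
    # the value recorded before the migration task was started.
    total = client.execute(
        f"SELECT count() FROM cluster('{cluster}', {local_table})"
    )[0][0]
    print("total rows:", total)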

Modifying a Data Migration Task

  1. Log in to the CloudTable console.
  2. In the upper left corner, select a region.
  3. Click Cluster Management and click a cluster name to go to the cluster details page.
  4. Choose Data Migration.
  5. Click Edit in the Operation column.
  6. After modifying the parameters, click OK.

Viewing Migration Task Details

  1. Log in to the CloudTable console.
  2. In the upper left corner, select a region.
  3. Click Cluster Management and click a cluster name to go to the cluster details page.
  4. Choose Data Migration.
  5. Click Details in the Operation column to access the task details page.
  6. View task information.

Deleting a Migration Task

  1. Log in to the CloudTable console.
  2. In the upper left corner, select a region.
  3. Click Cluster Management and click a cluster name to go to the cluster details page.
  4. Choose Data Migration.
  5. Click Delete in the Operation column. In the displayed dialog box, click OK to delete the task.
