Migrating Kafka Services

Updated on 2024-11-27 GMT+08:00

Overview

Kafka service migration is the process of reconnecting the message production and consumption clients of another Kafka service to ROMA Connect and, where necessary, migrating persisted message data to ROMA Connect.

Services with high continuity requirements cannot tolerate long downtime, so they must be migrated to the cloud smoothly.

Preparations

  1. Ensure that the message production and consumption clients can reach the MQS connection address of the ROMA Connect instance. You can view the MQS connection address on the Instance Information page of the ROMA Connect console.
    • If a private IP address is used for the connection, the clients and the ROMA Connect instance must be in the same VPC. If the clients and the ROMA Connect instance are in different VPCs, you can create a VPC peering connection to enable communication between the two VPCs. For details, see VPC Peering Connection.
    • If a public network address is used for the connection, the clients must be able to access the public network.
  2. Ensure that the MQS specifications of the ROMA Connect instance are not lower than the Kafka specifications used by the original service. For details about MQS specifications, see MQS Specifications.
  3. On the ROMA Connect instance, create topics with the same configurations as those in the original Kafka instance, including the topic name, number of replicas, number of partitions, message aging time, and whether synchronous replication and flushing are enabled. (A sketch for reading these settings from the original instance follows this list.)
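
To confirm what needs to be replicated, the following Java sketch uses the Apache Kafka AdminClient to read the partition count, replica count, and message aging time (retention.ms) of a topic on the original Kafka instance; these values can then be applied when creating the matching topic on the ROMA Connect instance. The bootstrap address and topic name are placeholders, and error handling is omitted.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeOriginalTopic {
        public static void main(String[] args) throws Exception {
            String topic = "orders";                                   // placeholder topic name
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                      "original-kafka-address:9092");                  // original Kafka instance
            try (AdminClient admin = AdminClient.create(props)) {
                // Partition and replica layout of the original topic
                TopicDescription desc =
                    admin.describeTopics(Collections.singleton(topic)).all().get().get(topic);
                int partitions = desc.partitions().size();
                int replicas = desc.partitions().get(0).replicas().size();

                // Topic-level configuration, for example the message aging time (retention.ms)
                ConfigResource res = new ConfigResource(ConfigResource.Type.TOPIC, topic);
                Config cfg = admin.describeConfigs(Collections.singleton(res)).all().get().get(res);
                String retentionMs = cfg.get("retention.ms").value();

                System.out.printf("partitions=%d, replicas=%d, retention.ms=%s%n",
                                  partitions, replicas, retentionMs);
            }
        }
    }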

Migration Scheme 1: Migrating the Production First

  • Solution

    In this solution, the message production service is migrated to ROMA Connect first so that no new messages are produced to the original Kafka instance. After all remaining messages in the original Kafka instance are consumed, the message consumption service is migrated to ROMA Connect to consume new messages.

    This is a common migration solution in the industry because the procedure is simple and the migration process is fully controlled by the service side. Message order is preserved throughout. However, end-to-end latency may increase during the period in which you wait for all remaining data to be consumed.

    This scheme is applicable to services that require messages to be consumed in order but are insensitive to end-to-end latency.

  • Migration Process
    1. Change the Kafka connection address of the production client to the MQS connection address of the ROMA Connect instance.
    2. Restart the production service so that the producer can send new messages to the new ROMA Connect instance.
    3. Check the consumption progress of each consumer group in the original Kafka instance until all data in the original Kafka instance has been consumed (see the lag-check sketch after this list).
    4. Change the Kafka connection address of the consumer client to the MQS connection address of the ROMA Connect instance.
    5. Restart the consumption service so that consumers can consume messages from the ROMA Connect instance.
    6. Check whether consumers consume messages properly from the ROMA Connect instance.
    7. The migration is completed.
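
For step 3, the remaining backlog of a consumer group can be checked with the kafka-consumer-groups.sh tool or programmatically. The following Java sketch compares the group's committed offsets with the current end offsets on the original Kafka instance; once the total lag stays at 0, all data has been consumed and the consumption service can be switched over. The connection address and group ID are placeholders. For steps 1 and 4, the clients only need their bootstrap.servers setting repointed to the MQS connection address.

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CheckConsumerLag {
        public static void main(String[] args) throws Exception {
            String bootstrap = "original-kafka-address:9092";   // original Kafka instance
            String groupId = "order-service";                    // placeholder consumer group

            Properties adminProps = new Properties();
            adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);

            Properties consumerProps = new Properties();
            consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (AdminClient admin = AdminClient.create(adminProps);
                 KafkaConsumer<String, String> probe = new KafkaConsumer<>(consumerProps)) {
                // Offsets the group has committed so far
                Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();
                // Latest offsets of the same partitions
                Map<TopicPartition, Long> end = probe.endOffsets(committed.keySet());

                long totalLag = committed.entrySet().stream()
                    .filter(e -> e.getValue() != null)   // skip partitions without a committed offset
                    .mapToLong(e -> end.get(e.getKey()) - e.getValue().offset())
                    .sum();
                // Migrate the consumption service once the lag stays at 0
                System.out.println("Remaining lag for " + groupId + ": " + totalLag);
            }
        }
    }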

Migration Scheme 2: Migrating the Production Later

  • Solution

    In this solution, the consumption service runs multiple consumers: some consume messages from the original Kafka instance, and others consume messages from the ROMA Connect instance. The production service is then migrated to the ROMA Connect instance so that all messages continue to be consumed in time.

    In this scheme, the consumption service consumes messages from both the original Kafka instance and the ROMA Connect instance for a period of time. Because the consumption service is already running on ROMA Connect before the production service is migrated, there is no end-to-end latency problem. However, while data is being consumed from both instances, messages may not be consumed in the order in which they were produced.

    This scheme is suitable for services that require low latency but do not require strict message ordering.

  • Migration Process
    1. Start new consumer clients, set their Kafka connection address to the MQS connection address of the ROMA Connect instance, and consume data from the ROMA Connect instance (see the consumer sketch after this list).
      NOTE:

      Original consumer clients must continue running. Messages are consumed from both the original Kafka instance and ROMA Connect instance.

    2. Modify the production client and change the Kafka connection address to the MQS connection address of the ROMA Connect instance.
    3. Restart the producer client to migrate the production service to the ROMA Connect instance.
    4. After the production service is migrated, check whether the consumption service connected to the ROMA Connect instance is normal.
    5. After all data in the original Kafka is consumed, close the original consumption clients.
    6. The migration is completed.
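
A minimal sketch of a new consumer client started in step 1, assuming the Java Kafka client: only the connection address differs from the original consumers, while the topic, group ID, and processing logic stay the same. The MQS address, topic, and group ID below are placeholders; if SASL authentication is enabled for MQS, the corresponding security settings must be added as well.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RomaConnectConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // MQS connection address of the ROMA Connect instance (placeholder)
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mqs-connection-address:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-service");   // same group ID as before
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singleton("orders"));      // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Reuse the same business logic as the consumers still attached
                        // to the original Kafka instance.
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }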

Migrating Persistent Data

You can migrate persisted message data from the original Kafka instance to the ROMA Connect instance by using the open-source tool MirrorMaker. MirrorMaker consumes messages from the original Kafka instance and produces them to the ROMA Connect instance.

If the topic in the original Kafka instance has a single replica and the topic in the ROMA Connect instance has three replicas, it is recommended that the storage space of the ROMA Connect instance be three times that of the original Kafka instance.
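
For reference only, the following sketches a possible MirrorMaker (version 1) setup: its consumer reads from the original Kafka instance and its producer writes to the MQS connection address of the ROMA Connect instance. The addresses, group ID, and topic name are placeholders; if authentication is enabled on either side, the corresponding SASL settings must be added to the respective properties file.

    # consumer.properties: reads from the original Kafka instance
    bootstrap.servers=original-kafka-address:9092
    group.id=mirror-maker-group

    # producer.properties: writes to the ROMA Connect MQS address
    bootstrap.servers=mqs-connection-address:9092
    acks=all

    # Start MirrorMaker with the two configuration files above
    bin/kafka-mirror-maker.sh \
      --consumer.config consumer.properties \
      --producer.config producer.properties \
      --whitelist "orders"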
