Using MCI to Distribute Traffic Across Clusters

Updated on 2024-11-01 GMT+08:00

Application Scenarios

Distributed clusters are often deployed in the clouds or regions closest to users to reduce latency. However, if the cluster in one region becomes faulty, services in that region are affected. MCI can be used to distribute traffic across clusters in different regions, enabling cross-region failover.

Preparations

  • Prepare two CCE Turbo clusters of v1.21 or later (or Kubernetes clusters that use an underlay network model) and deploy them in different regions.
  • Plan the regions where the applications are to be deployed and purchase a load balancer in each of these regions so that cross-region DR is possible. Each load balancer must be a dedicated load balancer of the application type (HTTP/HTTPS) that supports private network access (with a private IP address) and has the cross-VPC backend function enabled. For details, see Creating a Dedicated Load Balancer.
  • Connect the VPCs where the load balancers reside to the Kubernetes clusters so that the load balancers can communicate with pods, and ensure that the CIDR blocks of the member clusters do not conflict with each other.
  • Prepare Deployments and Services in the federation. If none are available, create them by referring to Deployments and ClusterIP.

Cross-Region Failover Through MCI

This section uses CCE Turbo clusters cce-cluster01 and cce-cluster02 as an example to describe how to enable public network access to services across regions and how to verify cross-region DR for applications. This is achieved by associating MCI objects with load balancers in different regions and using DNS resolution provided by Huawei Cloud.

  1. Register clusters with UCS, connect them to the network, and add them to a fleet. For details, see Registering a Cluster.
  2. Enable cluster federation for the fleet and ensure that the clusters have been connected to a federation. For details, see Cluster Federation.
  3. Create workloads and configure Services.

    The following uses the nginx image as an example to describe how to deploy nginx workloads in clusters cce-cluster01 and cce-cluster02 and configure Services, as shown in the sketch below.
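
    A minimal sketch of such a Deployment and ClusterIP Service is shown below; the image tag, replica count, and labels are illustrative, while the Service name (nginx) and port (8080) must match the backend referenced by the MCI objects created later. Distribute the resources to both clusters by referring to Deployments and ClusterIP.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      namespace: default
    spec:
      replicas: 2                    # Illustrative replica count
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest      # Illustrative image tag
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx                    # Must match the service name in the MCI objects
      namespace: default
    spec:
      type: ClusterIP
      selector:
        app: nginx
      ports:
      - port: 8080                   # Must match the port number in the MCI objects
        targetPort: 80
        protocol: TCP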

  4. Create a load balancer in each region.

    In the network configuration of each load balancer, enable the IP backend (cross-VPC backend) function, select the VPC where the cluster in that region resides (for example, the VPC of cce-cluster01 for region 1), and create an EIP. Record the ID of each load balancer.

  5. Obtain the project ID of each region.

    On the Huawei Cloud console, click the account name in the upper right corner, choose My Credentials, and query the project ID of each region.

  6. Use kubectl to connect to the federation. For details, see Using kubectl to Connect to a Federation.
  7. Create and edit the mci.yaml file, which defines one MCI object for each region.

    Define the MCI objects as follows (for details about the parameters, see Using MCI) and then create them:

    kubectl apply -f mci.yaml

    apiVersion: networking.karmada.io/v1alpha1 
    kind: MultiClusterIngress
    metadata:
      name: nginx-ingress-region1
      namespace: default
      annotations:
        karmada.io/elb.id: xxxxxxx # ID of the load balancer in region 1
        karmada.io/elb.port: "80" # Listener port of the load balancer in region 1
        karmada.io/elb.projectid: xxxxxxx # Project ID of the tenant in region 1
        karmada.io/elb.health-check-flag: "on" # Health check is enabled for traffic switchover.
    spec:
      ingressClassName: public-elb
      rules:
      - host: demo.localdev.me
        http:
          paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 8080
            path: /
            pathType: Prefix
    ---
    apiVersion: networking.karmada.io/v1alpha1 
    kind: MultiClusterIngress
    metadata:
      name: nginx-ingress-region2
      namespace: default
      annotations:
        karmada.io/elb.id: xxxxxxx # ID of the load balancer in region 2
        karmada.io/elb.port: "801" # Listener port of the load balancer in region 2
        karmada.io/elb.projectid: xxxxxxx # Project ID of the tenant in region 2
        karmada.io/elb.health-check-flag: "on" # Health check is enabled for traffic switchover.
    spec:
      ingressClassName: public-elb
      rules:
      - host: demo.localdev.me
        http:
          paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 8080
            path: /
            pathType: Prefix

  8. Check whether the backend server group has been attached to the ELB listener, whether the backend instances are running, and whether the health check results are normal.

    CAUTION:

    Configure the container security group in advance. For a CCE Turbo cluster, for example, choose Overview > Network Configuration > Default Security Group and allow access from the CIDR block of the load balancer in the other region.
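
Before configuring DNS access, you can check the MCI objects from the federation. The following is a quick sketch, assuming kubectl is connected to the federation and that the multiclusteringress resource (short name mci) is available there:

    # List the MCI objects in the default namespace.
    kubectl get multiclusteringress -n default

    # Show the details, status, and events of one MCI object.
    kubectl describe multiclusteringress nginx-ingress-region1 -n default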

Configuring DNS Access

This section uses the private DNS service on Huawei Cloud as an example. You can also configure your own DNS server.

  1. Set up private DNS resolution and use an ECS to access the corresponding service over the public network. Associate an EIP or a NAT gateway with the ECS so that it can access the public network.

    • Create the private domain name in the same VPC as the ECS. The domain name is the one specified in the MCI objects (demo.localdev.me in this example).
    • Add the EIP of the load balancer in each region to the record set of the domain name.

  2. Log in to the ECS and run curl demo.localdev.me to access the corresponding service. If 200 is returned, the service can be accessed normally.
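
To print only the HTTP status code, a plain curl sketch (using the domain name defined in the MCI objects) is as follows:

    # Print only the HTTP status code; 200 indicates normal access.
    curl -s -o /dev/null -w "%{http_code}\n" http://demo.localdev.me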

Verifying the Cross-Region Failover

The example applications are deployed in clusters cce-cluster01 and cce-cluster02, and EIPs are provided for accessing the corresponding services.

Fault simulation

The following uses a fault in region 1 as an example. Perform the following operations to simulate a single-region fault:

  1. Hibernate the cce-cluster01 cluster in region 1 and stop the nodes in the cluster.
  2. Disassociate EIP 1 from the load balancer in region 1.

DR verification

  1. On the DNS resolution page, manually delete the IP address associated with the load balancer in region 1 from the record set.
  2. Check whether any backend servers with abnormal health check results are displayed.
  3. From the ECS, access the corresponding service again and check whether it can still be accessed and whether 200 is returned, for example by using the polling sketch below.
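
To observe the switchover from the ECS while the fault is being simulated, you can poll the service with a simple shell loop such as the following illustrative sketch (adjust the interval and timeout as needed):

    # Poll the service and print a timestamp with the HTTP status code.
    # A run of non-200 codes followed by 200 again indicates that traffic has
    # switched to the load balancer in region 2.
    while true; do
      code=$(curl -s -o /dev/null -m 5 -w "%{http_code}" http://demo.localdev.me)
      echo "$(date '+%H:%M:%S') ${code}"
      sleep 2
    done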
