Updated on 2024-11-01 GMT+08:00

Using MCI to Distribute Traffic Across Clusters

Application Scenarios

Distributed clusters are often deployed in the clouds or regions nearest to users for low latency. However, if a cluster in one region becomes faulty, services in that region are affected. MCI can distribute traffic across clusters in different regions to enable cross-region failovers.

Preparations

  • Prepare two CCE Turbo clusters (v1.21 or later) or Kubernetes clusters that use an underlay network model, and deploy them in different regions.
  • Plan the regions where applications are to be deployed and purchase a load balancer for each region. To ensure cross-region DR, the load balancers must be deployed in different regions. Each load balancer must be a dedicated load balancer of the application type (HTTP/HTTPS) with a private IP address, and the cross-VPC backend function must be enabled. For details, see Creating a Dedicated Load Balancer.
  • Connect the ELB VPCs to the Kubernetes clusters so that the load balancers can communicate with pods, and ensure that the CIDR blocks of member clusters do not conflict with each other.
  • Prepare Deployments and Services available in the federation. If none are available, create them by referring to Deployments and ClusterIP.

Cross-Region Failover Through MCI

This section uses CCE Turbo clusters cce-cluster01 and cce-cluster02 as an example to describe how to enable public network access to services across regions and to verify cross-region DR of applications. This is achieved using MCI objects associated with load balancers in different regions, together with DNS resolution provided by Huawei Cloud.

  1. Register clusters with UCS, connect them to the network, and add them to a fleet. For details, see Registering a Cluster.
  2. Enable cluster federation for the fleet and ensure that the clusters have been connected to a federation. For details, see Cluster Federation.
  3. Create workloads and configure Services.

    The following uses the nginx image as an example to describe how to deploy nginx workloads in clusters cce-cluster01 and cce-cluster02 and configure Services.
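A minimal sketch of such a workload and its Service is shown below. The image, replica count, and labels are illustrative assumptions; the Service port (8080) matches the backend port referenced by the MCI objects, and the target port (80) is the nginx container port.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest        # example image; replace as needed
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx                      # Service name referenced by the MCI objects
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 8080                     # Service port referenced by the MCI backend
    targetPort: 80                 # nginx container port
```

Apply the file in each cluster (or distribute it through the federation) with kubectl apply -f nginx.yaml.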

  4. Create a load balancer in each region.

    In the network configuration, enable the IP backend (cross-VPC backend) function, select the VPC where the cluster in that region resides (for example, the VPC of cce-cluster01 in region 1), and create an EIP. Record the ID of each load balancer.

  5. Obtain the project ID of each region.

    On the Huawei Cloud console, click the account name in the upper right corner, choose My Credentials, and query the project ID of each region.

  6. Use kubectl to connect to the federation. For details, see Using kubectl to Connect to a Federation.
  7. Create and edit the mci.yaml file of each region.

    Create MCI objects. The file content is defined as follows; for details about the parameters, see Using MCI. After editing the file, apply it:

    kubectl apply -f mci.yaml

    apiVersion: networking.karmada.io/v1alpha1 
    kind: MultiClusterIngress
    metadata:
      name: nginx-ingress-region1
      namespace: default
      annotations:
        karmada.io/elb.id: xxxxxxx # ID of the load balancer in region 1
        karmada.io/elb.port: "80" # Listener port of the load balancer in region 1
        karmada.io/elb.projectid: xxxxxxx # Project ID of the tenant in region 1
        karmada.io/elb.health-check-flag: "on" # Health check is enabled for traffic switchover.
    spec:
      ingressClassName: public-elb
      rules:
      - host: demo.localdev.me
        http:
          paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 8080
            path: /
            pathType: Prefix
    ---
    apiVersion: networking.karmada.io/v1alpha1 
    kind: MultiClusterIngress
    metadata:
      name: nginx-ingress-region2
      namespace: default
      annotations:
        karmada.io/elb.id: xxxxxxx # ID of the load balancer in region 2
        karmada.io/elb.port: "801" # Listener port of the load balancer in region 2
        karmada.io/elb.projectid: xxxxxxx # Project ID of the tenant in region 2
        karmada.io/elb.health-check-flag: "on" # Health check is enabled for traffic switchover.
    spec:
      ingressClassName: public-elb
      rules:
      - host: demo.localdev.me
        http:
          paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 8080
            path: /
            pathType: Prefix

  8. Check whether the backend server group is attached to the ELB listener, whether the backend instances are running, and whether the health check results are normal.

    Configure the container security group in advance. Take a CCE Turbo cluster as an example: choose Overview > Network Configuration > Default Security Group and allow access from the CIDR block of the load balancer in the other region.

Configuring DNS Access

This section uses a private DNS server on Huawei Cloud as an example. You can also configure your own DNS server.

  1. Create a private DNS server, and use an ECS to access the corresponding service over the public network. Associate an EIP or a NAT gateway with the ECS so that it can access the public network.

    • Create a private domain name in the same VPC as the ECS. This is the domain name specified in the MCI objects (demo.localdev.me in this example).
    • Add the EIP of each load balancer to the record set of the domain name.
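The resulting record set can be sketched as follows; the EIP values are placeholders for the addresses bound to the two load balancers:

```
demo.localdev.me  A  <EIP of the load balancer in region 1>
demo.localdev.me  A  <EIP of the load balancer in region 2>
```

When region 1 fails, removing the first record directs all traffic to the load balancer in region 2.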

  2. On the ECS, run curl demo.localdev.me to access the corresponding service. If status code 200 is returned, the service access is normal.

Verifying the Cross-Region Failover

The example applications are deployed in clusters cce-cluster01 and cce-cluster02, and EIPs are provided for accessing the corresponding services.

Fault simulation

The following uses a fault in region 1 as an example. Perform the following operations to simulate a single-region fault:

  1. Hibernate the cce-cluster01 cluster in region 1 and stop the nodes in the cluster.
  2. Disassociate EIP 1 from the load balancer in region 1.

DR verification

  1. On the DNS resolution page, manually delete the IP address of the load balancer in region 1 from the record set.
  2. On the ELB console, check whether backend servers with abnormal health check results are displayed.
  3. Access the corresponding service from the ECS again and check whether it is still accessible and whether status code 200 is returned.