Using North-South MCS

Updated on 2024-12-18 GMT+08:00

Constraints

  • Currently, MCS can be used only with CCE Turbo clusters of v1.21 or later and with other Kubernetes clusters whose network type is underlay.
  • To ensure that the container networks of member clusters do not conflict and that the load balancer can reach pod IP addresses, plan the network in advance. If the MCS load balancer and the target cluster are in different VPCs, enable network connectivity between the VPCs in advance.

Preparations

  • If no load balancer is available, create one first. For details, see Creating a Dedicated Load Balancer. The load balancer to be created must:
    • Be a dedicated load balancer.
    • Support TCP/UDP networking.
    • Have a private IP address associated.
    • Support cross-VPC access if the load balancer and the member cluster are not in the same VPC.
  • MCS provides a unified entry point and Layer-4 network access to backends across clusters. Deploy the required workloads (Deployments) and Services in the federation in advance. If no workload or Service is available, create them by referring to Deployments and ClusterIP. (A minimal example is sketched after this list.)
  • Set the cluster network type to underlay. For details about the cluster types that support the underlay network, see Configuring the Cluster Network.
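
If you do not yet have a workload and Service to expose, the following manifest is one possible minimal sketch. The workload name, image, and replica count are placeholder assumptions, not values required by MCS; adjust them to your environment and deploy the manifest to the federation.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx                 # placeholder workload name; must match the Service selector below
      namespace: default
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:latest   # placeholder image
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx                 # the MCS object created later must use this same name
      namespace: default
    spec:
      type: ClusterIP
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80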

Creating an MCS Object of the LoadBalancer Type

  1. Use kubectl to connect to the federation. For details, see Using kubectl to Connect to a Federation.
  2. Create and edit the mcs.yaml file. The file content is defined as follows. For details about the parameters, see Table 1.

    In this example, the MCS object is associated with a Service named nginx, and the Service is registered with the listener of a Huawei Cloud ELB load balancer.

    vi mcs.yaml

    apiVersion: networking.karmada.io/v1alpha1
    kind: MultiClusterService
    metadata:
      name: nginx
      namespace: default
      annotations:
        karmada.io/elb.id: 2050857a-45ff-4312-8fdb-4a4e2052e7dc
        karmada.io/elb.projectid: c6629a1623df4596a4e05bb6f0a2e166
        karmada.io/elb.port: "802"
        karmada.io/elb.health-check-flag: "on"
    spec:
      ports:
        - port: 80
      types:
        - LoadBalancer
    Table 1 Key parameters

    metadata.name
      Mandatory: Yes
      Type: String
      Description: Name of the MCS object, which must be the same as the name of the associated Service.

    metadata.namespace
      Mandatory: No
      Type: String
      Description: Namespace of the MCS object, which must be the same as the namespace of the associated Service. If this parameter is left blank, default is used.

    spec.types
      Mandatory: Yes
      Type: String array
      Description: Traffic direction. To enable service discovery between clusters, set this parameter to CrossCluster. To expose the Service to external systems through ELB, set this parameter to LoadBalancer.

    spec.ports.port
      Mandatory: No
      Type: Integer
      Description: Service port to be registered with the ELB listener.

    spec.consumerClusters.name
      Mandatory: No
      Type: String
      Description: Name of the cluster that accesses the Service. Set this parameter to the name of the cluster that is expected to access the Service across clusters through MCS. If this parameter is left blank, all clusters in the federation can access the Service by default.

    karmada.io/elb.id
      Mandatory: Yes
      Type: String
      Description: ID of the load balancer associated with MCS. This parameter cannot be left blank. The value contains 1 to 32 characters.

    karmada.io/elb.projectid
      Mandatory: Yes
      Type: String
      Description: Project ID of the load balancer associated with MCS. For details about how to obtain the project ID, see Obtaining a Project ID. The value contains 1 to 32 characters.

    karmada.io/elb.port
      Mandatory: No
      Type: String
      Description: Port number of the load balancer associated with MCS. If this parameter is not specified, 80 is used by default. The value ranges from 1 to 65535.

    karmada.io/elb.health-check-flag
      Mandatory: No
      Type: String
      Description: Whether to enable health check. The options are on (enable) and off (disable). If this parameter is not specified, off is used by default.

    karmada.io/elb.health-check-option
      Mandatory: No
      Type: HealthCheck object
      Description: Health check parameters. For details, see Table 2.
      NOTE:
      • Example of health check parameter settings:
        karmada.io/elb.health-check-option: '{"protocol":"TCP","delay":"5","connect_port":"80","timeout":"1","max_retries":"1","path":"/wd"}'
      • If health check is enabled in the annotations, the Service name can contain a maximum of 39 characters.

    karmada.io/elb.lb-algorithm
      Mandatory: No
      Type: String
      Description: Forwarding algorithm. The options are as follows:
      • ROUND_ROBIN: weighted round robin
      • LEAST_CONNECTIONS: weighted least connections
      • SOURCE_IP: source IP hash
      The default value is ROUND_ROBIN. A manifest that combines these optional annotations is sketched after Table 2.

    Table 2 HealthCheck parameters

    protocol
      Mandatory: No
      Type: String
      Description: Protocol for health checks. The value can be TCP or HTTP. The default value is TCP.

    connect_port
      Mandatory: No
      Type: Integer
      Description: Port used for health checks. The value ranges from 1 to 65535.
      NOTE: By default, the service port on each backend server is used. You can also specify a port for health checks.

    delay
      Mandatory: No
      Type: Integer
      Description: Interval between the time when the application is delivered and the time when a health check is started, in seconds. The value ranges from 1 to 50. The default value is 5.

    timeout
      Mandatory: No
      Type: Integer
      Description: Health check timeout duration, in seconds. The value ranges from 1 to 50. The default value is 10.

    path
      Mandatory: No
      Type: String
      Description: Health check request URL. This parameter is valid only when protocol is set to HTTP. The value must start with a slash (/), and the default value is /. Only letters, digits, hyphens (-), slashes (/), periods (.), percent signs (%), question marks (?), number signs (#), ampersands (&), and extended character sets are allowed. The value contains 1 to 80 characters.

    max_retries
      Mandatory: No
      Type: Integer
      Description: Maximum number of retries. The value ranges from 1 to 10. The default value is 3.
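
    The following manifest is a sketch that combines the optional annotations described in Table 1 and Table 2. The load balancer ID, project ID, listener port, and health check path are placeholder assumptions; replace them with your own values, and use a path that your workload actually serves.

    apiVersion: networking.karmada.io/v1alpha1
    kind: MultiClusterService
    metadata:
      name: nginx
      namespace: default
      annotations:
        karmada.io/elb.id: <your-load-balancer-id>            # placeholder
        karmada.io/elb.projectid: <your-project-id>           # placeholder
        karmada.io/elb.port: "802"                            # example listener port
        karmada.io/elb.health-check-flag: "on"
        # HTTP health check against /healthz (assumed path); see Table 2 for the fields
        karmada.io/elb.health-check-option: '{"protocol":"HTTP","delay":"5","connect_port":"80","timeout":"3","max_retries":"3","path":"/healthz"}'
        karmada.io/elb.lb-algorithm: LEAST_CONNECTIONS
    spec:
      ports:
        - port: 80
      types:
        - LoadBalancer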

  3. Create an MCS object.

    kubectl apply -f mcs.yaml

  4. Run the following commands to operate the MCS object (named nginx):

    • kubectl get mcs nginx: obtains the MCS object.
    • kubectl edit mcs nginx: updates the MCS object. (A non-interactive alternative using kubectl patch is sketched after this list.)
    • kubectl delete mcs nginx: deletes the MCS object.
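
    If you prefer to update the object without opening an editor, kubectl patch can modify individual fields. The port value below is only an illustrative assumption. If the mcs short name is not recognized in your environment, use the full resource name multiclusterservice instead.

    # Change the ELB listener port annotation to 803 (example value) with a merge patch.
    kubectl patch mcs nginx --type merge -p '{"metadata":{"annotations":{"karmada.io/elb.port":"803"}}}'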

Accessing Services Through MCS

After the MCS object is created, ELB automatically creates the listener and health check policy. You can then access the backend workload at http://{IP:port}, where {IP:port} is the IP address and port number of the load balancer associated with the MCS object.
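
For example, if the load balancer's IP address is 192.168.0.100 (a placeholder for your actual ELB address) and karmada.io/elb.port is set to 802 as in the earlier manifest, the workload can be reached as follows:

    curl http://192.168.0.100:802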

If access fails, run kubectl describe mcs nginx to query events and check whether the MCS object is running normally.
