
Application Scenarios

Traffic Sharing

In industries such as e-commerce, live video streaming, and education, traffic surges within a short period due to events such as flash sales, live interaction, exam score queries, or registration. To absorb these traffic spikes and improve service stability, MCP provides cross-cloud auto scaling based on traffic policies. By scaling application instances across clouds and clusters and managing application traffic in a unified manner, MCP balances load elastically across different clouds. You can configure a traffic policy when deploying an application across clouds. If the private cloud clusters or the clusters of one public cloud are overloaded by a traffic spike, MCP schedules services to other public cloud clusters based on the traffic policy, preventing the system from breaking down under bursty traffic.

  • Advantages
    • Scalability: Horizontal scaling across multiple clusters provides more capacity than any single cluster can offer.
    • Intelligent routing: Clusters are distributed across regions, and each request is served from a geographically closer cloud, reducing latency.
    • Auto scaling: Application instances are automatically scheduled to different clouds based on the configured scaling policies.
  • Recommended Services for Use

    Cloud Container Engine (CCE), Software Repository for Container (SWR), and Elastic Load Balance (ELB)

Figure 1 How traffic sharing works
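
As an illustration of such a traffic policy, the following Python sketch uses the Kubernetes client to create a KubeFed-style ReplicaSchedulingPreference that weights replicas across clusters. This is a minimal sketch assuming a KubeFed-compatible federation control plane; MCP's actual resource names and API versions, as well as the cluster names used here, are assumptions.

# Minimal sketch, assuming a KubeFed-compatible federation control plane.
# The API group/version and the cluster names below are assumptions.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig of the federation control plane

traffic_policy = {
    "apiVersion": "scheduling.kubefed.io/v1alpha1",
    "kind": "ReplicaSchedulingPreference",
    "metadata": {"name": "web-frontend", "namespace": "shop"},
    "spec": {
        "targetKind": "FederatedDeployment",
        "totalReplicas": 20,
        "clusters": {
            # Keep a baseline in the private cloud; let the public CCE
            # cluster absorb traffic bursts up to its maxReplicas.
            "private-cluster": {"minReplicas": 6, "maxReplicas": 10, "weight": 1},
            "public-cce-cluster": {"minReplicas": 2, "maxReplicas": 14, "weight": 2},
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="scheduling.kubefed.io",
    version="v1alpha1",
    namespace="shop",
    plural="replicaschedulingpreferences",
    body=traffic_policy,
)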

Cross-Cloud DR

To cope with the failure of a single cloud, MCP allows instances of an application to run on multiple clouds. When one cloud goes down, MCP migrates instances and switches traffic over to the other clouds within seconds, greatly improving service reliability. In addition, compared with traditional multi-cloud and hybrid cloud solutions, containers can be scaled within seconds, eliminating the need to maintain redundant DR resources and reducing the construction and O&M costs of infrastructure.

  • Advantages
    • Unified multi-cluster management to automatically monitor the health status of each cluster.
    • Unified application management, auto scaling, and automatic migration of application instances to other clouds during DR.
    • Unified traffic management and automatic traffic switchover.
  • Recommended Services for Use

    CCE, SWR, ELB, and Domain Name Service (DNS)

Figure 2 How cross-cloud DR works
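
The DR behavior described above boils down to continuous health checks plus a traffic switchover. The Python sketch below only illustrates that loop: the health-check endpoints are made-up examples, and update_dns_weights is a hypothetical stand-in for the DNS/ELB weight-update API that MCP drives automatically.

# Illustrative sketch of the failover loop that MCP automates.
# Endpoints are examples; update_dns_weights stands in for a real DNS/ELB API.
import requests

CLUSTERS = {
    "cloud-a": "https://app.cloud-a.example.com/healthz",
    "cloud-b": "https://app.cloud-b.example.com/healthz",
}

def healthy(url: str) -> bool:
    try:
        return requests.get(url, timeout=2).status_code == 200
    except requests.RequestException:
        return False

def update_dns_weights(weights: dict) -> None:
    # Hypothetical stand-in: print instead of calling a DNS/ELB API.
    print("setting resolution weights:", weights)

def rebalance() -> None:
    live = [name for name, url in CLUSTERS.items() if healthy(url)]
    if not live:
        return  # nothing healthy; keep the current weights
    # Spread traffic evenly over healthy clouds and drop failed ones to 0.
    update_dns_weights({n: 100 // len(live) if n in live else 0 for n in CLUSTERS})

if __name__ == "__main__":
    rebalance()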

Decoupling of Data Storage and Service Processing

For security purposes, users in industries such as finance and security require that core services run in their private cloud clusters. With HUAWEI CLOUD MCP, you can keep sensitive data in the private cloud clusters and run common services on the public cloud, ensuring data security.

  • Advantages
    • Sensitive data remains fully independent and under your control, minimizing information security risks.
    • Unified management of service resources reduces O&M workload.
    • The private and public clouds are connected through Direct Connect (DC), which provides high performance and reliability.
  • Recommended Services for Use

    CCE, SWR, ELB, and DC

Figure 3 Decoupling of data storage and service processing
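
The decoupling itself comes down to placement: the sensitive data service is pinned to the private cluster, while stateless services can also run on the public cloud. The sketch below again assumes a KubeFed-style FederatedDeployment; the service name, namespace, image, and cluster name are illustrative.

# Minimal sketch, assuming KubeFed-style placement; all names are examples.
from kubernetes import client, config

config.load_kube_config()

ledger_service = {
    "apiVersion": "types.kubefed.io/v1beta1",
    "kind": "FederatedDeployment",
    "metadata": {"name": "ledger", "namespace": "finance"},
    "spec": {
        "template": {  # an ordinary Deployment spec
            "spec": {
                "replicas": 2,
                "selector": {"matchLabels": {"app": "ledger"}},
                "template": {
                    "metadata": {"labels": {"app": "ledger"}},
                    "spec": {"containers": [
                        {"name": "ledger", "image": "swr.example.com/finance/ledger:1.0"},
                    ]},
                },
            },
        },
        # Pin the sensitive workload to the on-premises cluster only; the
        # public-cloud clusters are deliberately absent from the placement.
        "placement": {"clusters": [{"name": "private-cluster"}]},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="types.kubefed.io", version="v1beta1", namespace="finance",
    plural="federateddeployments", body=ledger_service,
)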

Decoupling of Development and Production Environments

For network security in continuous integration (CI) and continuous delivery (CD) scenarios, some users want to deploy the development and test environments in private cloud clusters and the production environment in public cloud clusters. With HUAWEI CLOUD MCP, you can manage the clusters that host the development, test, and production environments in a unified manner. MCP works with ContainerOps to roll out services through pipelines, improving code delivery and deployment efficiency.

  • Advantages
    • Multiple environments are decoupled.
    • Consistent runtime environments across stages remove environment-specific dependencies from service rollout.
    • ContainerOps implements process automation from code to service rollout.
  • Recommended Services for Use

    CCE and SWR

Figure 4 Decoupling of development and production environments
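
In this setup, a pipeline stage mainly needs to apply the same manifest to a different cluster. The Python sketch below shows that promotion step; the kubeconfig context names and manifest path are examples, and a ContainerOps pipeline would run the equivalent automatically.

# Sketch of a promotion step; context names and the manifest path are examples.
from kubernetes import config, utils

MANIFEST = "deploy/web.yaml"                       # built and tested once
STAGES = ["dev-private", "test-private", "prod-public-cce"]

def deploy(context: str) -> None:
    # Each stage is just a different cluster behind a kubeconfig context;
    # the manifest itself is identical, keeping the environments consistent.
    api_client = config.new_client_from_config(context=context)
    utils.create_from_yaml(api_client, MANIFEST)

for ctx in STAGES:
    deploy(ctx)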

Flexible Allocation of Computing Resources

Computing tasks in fields such as AI, genome sequencing, and video processing rely on GPUs, bare metal servers, and other special-purpose hardware. With MCP, you can deploy compute-intensive applications on the public cloud and common applications in the private cloud or on other clouds, avoiding the high costs of purchasing and operating special-purpose hardware at scale.

  • Advantages
    • Hardware leasing greatly reduces the O&M costs of physical resources.
    • Rapid scaling reduces idle resources and supports on-demand procurement.
    • Support for Huawei-developed elastic bare metal servers delivers optimal performance.
  • Recommended Services for Use

    CCE, Bare Metal Server (BMS), and Object Storage Service (OBS)

Figure 5 Flexible allocation of computing resources
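
For example, a GPU training job can be submitted only to the public-cloud cluster that has GPU or BMS nodes, while ordinary services stay in the private cloud. The sketch below uses the Kubernetes Python client; the kubeconfig context, namespace, and image are examples.

# Minimal sketch: send a GPU job to the public-cloud cluster only.
# The context name, namespace, and image are examples.
from kubernetes import client, config

api = client.BatchV1Api(config.new_client_from_config(context="public-cce-gpu"))

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="train-model"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="trainer",
                    image="swr.example.com/ai/trainer:latest",
                    # Requesting nvidia.com/gpu keeps the Pod on GPU nodes.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )],
            ),
        ),
    ),
)

api.create_namespaced_job(namespace="ai", body=job)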