
Planning Resources for the Target Cluster

CCE allows you to customize cluster resources to meet various service requirements. Table 1 lists the key performance parameters of a cluster and provides example planned values. Set these parameters based on your service requirements. It is recommended that the performance configuration of the target cluster be the same as that of the source cluster.

After a cluster is created, the resource parameters marked with asterisks (*) in Table 1 cannot be modified.

Table 1 CCE cluster planning

Resource: Cluster

Key Performance Parameter: *Cluster Type
Description:
  • CCE cluster: supports VM nodes. You can run your containers in a secure and stable container runtime environment based on a high-performance network model.
  • CCE Turbo cluster: runs on a cloud native infrastructure that features software-hardware synergy to support passthrough networking, high security and reliability, and intelligent scheduling. BMS nodes are supported.
Example Value: CCE cluster

Key Performance Parameter: *Network Model
Description:
  • VPC network: The container network uses VPC routing to integrate with the underlying network. This model is suitable for performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the VPC route quota (a CIDR planning sketch follows this row).
  • Tunnel network: The container network is an overlay tunnel network on top of a VPC network and uses VXLAN technology. This model is suitable for scenarios that do not have high performance requirements.
  • Cloud Native Network 2.0: The container network deeply integrates the elastic network interface (ENI) capability of VPC, uses the VPC CIDR block to allocate container addresses, and supports passthrough networking to containers through a load balancer.
Example Value: VPC network
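
When planning the VPC network model, the container CIDR block generally must not overlap with the VPC CIDR block or the Service CIDR block of the target cluster. The following is a minimal planning sketch that uses Python's standard ipaddress module to check a candidate container CIDR against the other blocks before the cluster is created; all CIDR values below are placeholders, not recommendations.

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """Return True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Placeholder values: replace them with the CIDR blocks planned for the target cluster.
vpc_cidr = "192.168.0.0/16"
service_cidr = "10.247.0.0/16"
container_cidr = "10.0.0.0/16"

for name, cidr in [("VPC", vpc_cidr), ("Service", service_cidr)]:
    status = "overlaps with" if overlaps(container_cidr, cidr) else "does not overlap with"
    print(f"Container CIDR {container_cidr} {status} the {name} CIDR {cidr}")
```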

Key Performance Parameter: *Number of master nodes
Description:
  • 3: Three master nodes will be created to deliver better DR performance. If one master node is faulty, the cluster remains available and service functions are not affected.
  • 1: A single master node will be created. This mode is not recommended for commercial scenarios.
Example Value: 3

Resource: Node

Key Performance Parameter: OS
Description:
  • EulerOS
  • CentOS
  • Ubuntu
Example Value: EulerOS

Key Performance Parameter: Node Specifications (vary depending on the region)
Description:
  • General-purpose: provides a balance of computing, memory, and network resources. It is a good choice for many applications, such as web servers, workload development, workload testing, and small-scale databases (a sketch for checking node resources follows this row).
  • Memory-optimized: provides higher memory capacity than general-purpose nodes and is suitable for relational databases, NoSQL, and other workloads that are both memory-intensive and data-intensive.
  • General computing-basic: provides a balance of computing, memory, and network resources and uses the vCPU credit mechanism to ensure baseline computing performance. Nodes of this type are suitable for applications requiring burstable high performance, such as light-load web servers, enterprise R&D and testing environments, and low- and medium-performance databases.
  • GPU-accelerated: provides powerful floating-point computing and is suitable for real-time, highly concurrent massive computing. Graphics processing units (GPUs) of the P series are suitable for deep learning, scientific computing, and CAE. GPUs of the G series are suitable for 3D animation rendering and CAD. GPU-accelerated nodes can be added only to clusters of v1.11 or later.
  • High-performance computing: provides stable and ultra-high computing performance and is suitable for scientific computing and workloads that demand ultra-high computing power and throughput.
  • General computing-plus: provides stable performance and exclusive resources and is suitable for enterprise-class workloads that require high and stable computing performance.
  • Disk-intensive: supports local disk storage and provides high networking performance. It is designed for workloads requiring high throughput and data switching, such as big data workloads.
  • Ultra-high I/O: delivers ultra-low SSD access latency and ultra-high IOPS. This type is ideal for high-performance relational databases, NoSQL databases (such as Cassandra and MongoDB), and Elasticsearch.
  • Ascend-accelerated: powered by HiSilicon Ascend 310 AI processors, these nodes are suitable for scenarios such as image recognition, video processing, inference computing, and machine learning.
Example Value: General-purpose (node specifications: 4 vCPUs and 8 GiB memory)
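
When matching the target node specifications to the source cluster, it can help to compare the allocatable CPU and memory that the nodes actually report rather than only the flavor names. The following is a minimal sketch that uses the official Kubernetes Python client (the kubernetes package) with a kubeconfig file, for example one downloaded from the CCE console, to list each node's allocatable resources; it is a generic Kubernetes check, not a CCE-specific API.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig file.
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    alloc = node.status.allocatable
    # Allocatable values are strings, for example "3920m" for CPU or "7000Mi" for memory.
    print(f"{node.metadata.name}: cpu={alloc['cpu']}, memory={alloc['memory']}")
```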

Key Performance Parameter: System Disk
Description:
  • High I/O: SAS disks are used as the backend storage media.
  • Ultra-high I/O: SSD disks are used as the backend storage media.
Example Value: High I/O

Key Performance Parameter: Storage Type
Description:
  • EVS volumes: Mount an EVS volume to a container path. When containers are migrated, the attached EVS volumes are migrated with them. This storage mode is suitable for data that needs to be stored permanently (a PVC example follows this table).
  • SFS volumes: Create SFS volumes and mount them to a container path. The file system volumes created by the underlying SFS service can also be used. SFS volumes are applicable to persistent storage for frequent read/write in multiple workload scenarios, including media processing, content management, big data analysis, and workload analysis.
  • OBS volumes: Create OBS volumes and mount them to a container path. OBS volumes are applicable to scenarios such as cloud workloads, data analysis, content analysis, and hotspot objects.
  • SFS Turbo volumes: Create SFS Turbo volumes and mount them to a container path. SFS Turbo volumes are fast, on-demand, and scalable, which makes them suitable for DevOps, containerized microservices, and enterprise office applications.
Example Value: EVS volumes
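
In the target cluster, these storage types are typically consumed through PersistentVolumeClaims bound to the corresponding CSI storage classes provided by the cluster's storage add-on. The following minimal sketch uses the Kubernetes Python client to create a PVC for an EVS-backed volume; the storage class name csi-disk, the namespace, and the 10 GiB size are assumptions, so check the storage classes actually available in the target cluster (kubectl get sc) before applying it. A workload then mounts the claim at the required container path through a volume and a volumeMount.

```python
from kubernetes import client, config

config.load_kube_config()

# PVC requesting an EVS-backed volume. "csi-disk" is an assumed StorageClass name;
# replace it with a storage class that exists in the target cluster.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "evs-data", "namespace": "default"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "csi-disk",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```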