
Explanation of Monitoring Metrics on the CCE Console

The CCE console monitors resource objects such as clusters, nodes, workloads, and pods. This section explains the monitoring metrics you may see on the CCE console.

Cluster/Master Node Monitoring Metrics

In the navigation tree on the left of the CCE console, choose Resource Management > Clusters. In the list on the right, click a cluster name to go to the cluster details page, as shown in Figure 1.
Figure 1 Cluster details - monitoring
Figure 2 Cluster monitoring

Explanation of monitoring metrics:

  • CPU allocation rate = Sum of CPU requests of all pods in the cluster/Sum of allocatable CPUs of all nodes (excluding master nodes) in the cluster
  • Memory allocation rate = Sum of memory requests of all pods in the cluster/Sum of allocatable memory of all nodes (excluding master nodes) in the cluster
  • CPU usage: Average CPU usage of all nodes (excluding master nodes) in the cluster
  • Memory usage: Average memory usage of all nodes (excluding master nodes) in the cluster

Allocatable node resources (CPU or memory) = Total amount – Reserved amount – Eviction thresholds (For details, see Formula for Calculating the Reserved Resources of a Node.)
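The formulas above can be sketched with a short calculation. This is an illustration only, with made-up numbers for a hypothetical two-node cluster (master nodes excluded) running three pods:

```python
# Cluster metric formulas, illustrated with hypothetical numbers.
# Allocatable values already have reservations and eviction thresholds subtracted.
node_allocatable_cpu = [3.5, 3.5]      # cores per node
node_allocatable_mem = [6.0, 6.0]      # GiB per node

pod_cpu_requests = [0.5, 1.0, 0.25]    # cores requested by each pod
pod_mem_requests = [1.0, 2.0, 0.5]     # GiB requested by each pod

# Allocation rate = sum of pod requests / sum of allocatable node resources
cpu_allocation_rate = sum(pod_cpu_requests) / sum(node_allocatable_cpu)
mem_allocation_rate = sum(pod_mem_requests) / sum(node_allocatable_mem)

print(f"CPU allocation rate: {cpu_allocation_rate:.0%}")    # 1.75 / 7.0 -> 25%
print(f"Memory allocation rate: {mem_allocation_rate:.0%}") # 3.5 / 12.0 -> 29%
```

Note that the allocation rate depends only on requests, so it can be high even while actual CPU and memory usage on the nodes is low.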

Worker Node Monitoring Metrics

When creating a container, you specify resource requests, that is, the amount of CPU and memory the container requires. The node reserves resources for the container based on these requests. The allocation rate is the ratio of requested resources to total allocatable resources; it indicates only how much has been pre-allocated, not the actual resource usage.

In the navigation tree on the left of the CCE console, choose Resource Management > Nodes. View the available resources in the list on the right, as shown in Figure 3.
Figure 3 Node management - allocatable resources

Explanation of monitoring metrics:

Allocatable resources indicate the upper limit of resources that pods on a node can still request. They are calculated from pod requests and do not reflect the node's actual available resources.

The calculation formula is as follows:

  • Allocatable CPUs = Total CPUs – Requested CPUs of all pods – CPUs reserved for other resources
  • Allocatable memory = Total memory – Requested memory of all pods – Memory reserved for other resources
  • Remaining CPUs = Total CPUs – Actually used CPUs
  • Remaining memory = Total memory – Actually used memory
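The difference between "allocatable" (based on requests) and "remaining" (based on actual usage) can be sketched for a single node. The numbers below are hypothetical:

```python
# Node formulas, illustrated for one hypothetical node.
total_cpu = 4.0                 # cores on the node
reserved_cpu = 0.5              # cores reserved for system components
pod_cpu_requests = [0.5, 1.0]   # requests of pods already scheduled on the node
used_cpu = 0.8                  # cores actually consumed right now

# Allocatable: how much new pods can still request
allocatable_cpu = total_cpu - sum(pod_cpu_requests) - reserved_cpu
# Remaining: how much is physically idle at this moment
remaining_cpu = total_cpu - used_cpu

print(allocatable_cpu)  # 2.0 cores can still be requested by new pods
print(remaining_cpu)    # 3.2 cores are actually idle
```

The two values usually differ: pods often request more than they currently use, so a node can look "full" for scheduling purposes while its actual utilization stays low.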

Workload Monitoring Metrics

In the navigation tree on the left of the CCE console, choose Workloads > Deployments or Workloads > StatefulSets. Click the workload name on the card view to go to the workload details page, as shown in Figure 4.

Figure 4 Clicking the workload name
Figure 5 Workload resources

Explanation of monitoring metrics:

  • CPU Request (cores): minimum number of CPU cores required by the container
  • Memory Request (GiB): minimum amount of memory required by the container
  • CPU Limit (cores): maximum number of CPU cores available to the container
  • Memory Limit (GiB): maximum amount of memory the container can use. If memory usage exceeds this value, the container is terminated.

If you set CPU and memory requests for a workload, the corresponding resources are reserved for it and cannot be requested by other workloads.
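In a Kubernetes pod spec, these four values map to the container's `resources` field. The fragment below is a hypothetical sketch (image and name are placeholders), assuming the standard Kubernetes request/limit syntax:

```yaml
# Hypothetical container spec fragment; names are placeholders.
containers:
  - name: nginx
    image: nginx:latest
    resources:
      requests:
        cpu: "500m"     # 0.5 core reserved at scheduling time
        memory: "1Gi"   # 1 GiB reserved
      limits:
        cpu: "1"        # hard cap: at most 1 core
        memory: "2Gi"   # container is terminated if usage exceeds 2 GiB
```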

You can check allocatable resources of a node on the node management page.

Example: In a CCE cluster with 5 cores of allocatable CPU and 10 GiB of allocatable memory in total, create an Nginx application with 10 pods, each requesting 0.5 cores of CPU and 1 GiB of memory. All 10 pods can be scheduled because they request exactly the available capacity (10 × 0.5 = 5 cores and 10 × 1 = 10 GiB). If one more pod is added, it fails to be scheduled due to insufficient resources.
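The scheduling example above can be checked with a few lines of arithmetic:

```python
# Scheduling example: 5 allocatable cores, 10 GiB allocatable memory.
allocatable_cpu, allocatable_mem = 5.0, 10.0
pod_cpu, pod_mem = 0.5, 1.0     # per-pod requests (cores, GiB)

def fits(n_pods):
    """True if n_pods can all be scheduled based on their requests."""
    return (n_pods * pod_cpu <= allocatable_cpu
            and n_pods * pod_mem <= allocatable_mem)

print(fits(10))  # True  - 5.0 cores and 10 GiB requested, exactly the capacity
print(fits(11))  # False - the 11th pod cannot be scheduled
```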