Monitoring Container Network Metrics of CCE Turbo Clusters
CCE Network Metrics Exporter is an add-on for monitoring and managing container network traffic. It collects traffic statistics of containers that do not use the host network in CCE Turbo clusters and performs node-wide container connectivity checks. The monitoring data is compatible with Prometheus, so you can call the Prometheus API to view it.
The following describes how to view the container network metrics of a CCE Turbo cluster using Prometheus.
Prerequisites
- A CCE Turbo cluster has been created.
- The cluster has the required node resources (at least 4 vCPUs and 8 GiB of memory) for installing the Cloud Native Cluster Monitoring and CCE Network Metrics Exporter add-ons.
- You can access the cluster using kubectl. For details, see Connecting to a Cluster Using kubectl.
Installing the Add-ons
- Log in to the CCE console and click the CCE Turbo cluster name to access the cluster console. In the navigation pane, choose Add-ons.
- Locate the Cloud Native Cluster Monitoring add-on and click Install.
When you use this add-on to monitor container network metrics in a CCE Turbo cluster, pay attention to the following parameters. Other parameters of this add-on can be configured as required. For details, see Cloud Native Cluster Monitoring.
- Local Data Storage: Enable this option. The monitoring data will be stored locally. You can determine whether to connect this add-on to AOM or a third-party monitoring platform.
- Custom Metric Collection: Enable this option. If it is not enabled, container network metrics cannot be collected.
- (Optional) Install Grafana: After installing Grafana, you can view metrics in graphs.
This parameter is only available for add-on versions earlier than 3.9.0. For version 3.9.0 or later, install Grafana separately if it is required.
- Locate the CCE Network Metrics Exporter add-on and click Install.
No parameters need to be configured for this add-on.
- (Optional) Locate the Grafana add-on and click Install. This step applies to Cloud Native Cluster Monitoring 3.9.0 or later, which does not provide Grafana by default.
If you enable Public Access, a LoadBalancer Service named grafana-oss will be created in the monitoring namespace. If the load balancer connected to the LoadBalancer Service is bound with an EIP, you can enter {EIP}:{Port} in the address box of a browser to access Grafana.
Enabling public access exposes open-source Grafana over a public network. Evaluate the security risks and create access control policies as needed.
Monitoring Container Network Metrics
- Add the port information to the DaemonSet configuration of the CCE Network Metrics Exporter add-on.
If the add-on version is earlier than 1.3.10, you need to manually add the port information. If the add-on version is 1.3.10 or later, the port information is automatically added so you can skip this step.
kubectl edit ds -n kube-system dolphin
Add the following content to the file:
...
spec:
  containers:
    - name: dolphin
      ports:
        - containerPort: 10001
          name: dolphin
          protocol: TCP
...
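To confirm that the port was added, you can check the DaemonSet. The following jsonpath expression is illustrative and not part of the original procedure:
kubectl get ds dolphin -n kube-system -o jsonpath='{.spec.template.spec.containers[0].ports}'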
- Configure the pod-monitor.yaml file so that Prometheus automatically collects container network metrics.
The following shows an example:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: dolphin
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
      - kube-system
  jobLabel: podmonitor-dolphin
  podMetricsEndpoints:
    - interval: 15s
      path: /metrics
      port: dolphin
      tlsConfig:
        insecureSkipVerify: true
  selector:
    matchLabels:
      app: dolphin
Create a PodMonitor.
kubectl apply -f pod-monitor.yaml
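You can verify that the PodMonitor was created before moving on. This check assumes the PodMonitor CRD is provided by Cloud Native Cluster Monitoring, as implied by the manifest above:
kubectl get podmonitor dolphin -n monitoring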
Viewing Metrics on Prometheus
- Create an example monitoring task. For details, see Delivering a Monitoring Task.
apiVersion: crd.dolphin.io/v1
kind: MonitorPolicy
metadata:
  name: example-task        # Monitoring task name.
  namespace: kube-system    # (Mandatory) The value must be kube-system.
spec:
  selector:                 # (Optional) Label selector specifying which backends the CCE Network Metrics Exporter add-on monitors. By default, all containers on the node are monitored.
    matchLabels:
      app: nginx
    matchExpressions:
      - key: app
        operator: In
        values:
          - nginx
  podLabel: []              # (Optional) Pod labels.
  ip4Tx:                    # (Optional) Whether to collect the numbers of sent IPv4 packets and sent IPv4 bytes. This option is disabled by default.
    enable: true
  ip4Rx:                    # (Optional) Whether to collect the numbers of received IPv4 packets and received IPv4 bytes. This option is disabled by default.
    enable: true
  ip4TxInternet:            # (Optional) Whether to collect the numbers of IPv4 packets and IPv4 bytes sent to the Internet. This option is disabled by default.
    enable: true
  healthCheck:              # (Optional) Whether to collect the latest health check result and the total numbers of health checks in which pods are considered healthy or unhealthy. This option is disabled by default.
    enable: true            # true or false
    failureThreshold: 3     # (Optional) Number of consecutive health check failures after which a pod is considered unhealthy. The default value is 1, meaning a single failed check marks the pod as unhealthy.
    periodSeconds: 5        # (Optional) Interval between health checks, in seconds. The default value is 60.
    command: ""             # (Optional) Health check command. The value can be ping (default), arping, or curl.
    ipFamilies: [""]        # (Optional) Health check IP address family. The default value is ipv4.
    port: 80                # (Optional) Port number, which is mandatory when curl is used.
    path: ""                # (Optional) HTTP API path, which is mandatory when curl is used.
  monitor:
    ip:
      ipReceive:
        aggregateType: flow   # (Optional) The value can be pod (monitored by pod) or flow (monitored by flow).
      ipSend:
        aggregateType: flow   # (Optional) The value can be pod (monitored by pod) or flow (monitored by flow).
    tcp:
      tcpReceive:
        aggregateType: flow   # (Optional) The value can be pod (monitored by pod) or flow (monitored by flow).
      tcpSend:
        aggregateType: flow   # (Optional) The value can be pod (monitored by pod) or flow (monitored by flow).
      tcpRetrans:
        aggregateType: flow   # (Optional) The value can be pod (monitored by pod) or flow (monitored by flow).
      tcpRtt:
        aggregateType: flow   # (Optional) The value can only be flow (monitored by flow). The unit is μs.
      tcpNewConnection:
        aggregateType: pod    # (Optional) The value can only be pod (monitored by pod).
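Save the monitoring task to a file and apply it. The file name monitor-policy.yaml is an arbitrary choice for illustration, and the monitorpolicy resource name is assumed from the kind defined above:
kubectl apply -f monitor-policy.yaml
kubectl get monitorpolicy example-task -n kube-system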
- Create a public network LoadBalancer Service so that Prometheus can be accessed from external networks.
apiVersion: v1
kind: Service
metadata:
  name: prom-lb             # Service name, which is customizable.
  namespace: monitoring
  labels:
    app: prometheus
    component: server
  annotations:
    kubernetes.io/elb.id: 038ff***   # Replace it with the ID of the public network load balancer in the VPC that the cluster belongs to.
spec:
  ports:
    - name: cce-service-0
      protocol: TCP
      port: 88              # Service port, which is customizable.
      targetPort: 9090      # Default Prometheus port. Retain the default value.
  selector:                 # Adjust the label selector based on the labels of the Prometheus server instance.
    app.kubernetes.io/name: prometheus
    prometheus: server
  type: LoadBalancer
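Save the manifest to a file (for example, prom-lb.yaml, a name chosen here for illustration), create the Service, and confirm that an external IP address has been assigned:
kubectl apply -f prom-lb.yaml
kubectl get svc prom-lb -n monitoring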
- After the Service is created, enter {Public IP address of the load balancer}:{Service port} in the address box of a browser to access Prometheus. You can search for the supported monitoring metrics on Prometheus to check whether they are being collected, or query them through the Prometheus HTTP API, as shown in the example after the figure.
Figure 1 Accessing Prometheus
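Because the data is exposed through the standard Prometheus HTTP API, you can also query a metric directly instead of using the web UI. The following example assumes the LoadBalancer Service created above (Service port 88) and uses the dolphin_ip4_send_pkt_internet metric referenced later in this guide; replace the address, port, and metric name with your own values:
curl "http://{Public IP address of the load balancer}:88/api/v1/query?query=dolphin_ip4_send_pkt_internet"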
(Optional) Viewing Graphs on Grafana
- After installing Grafana in the cluster, locate the Grafana add-on on the Add-ons page and click Access.
- Enter your Grafana login account and password.
- In the navigation pane, click Explore. Select prometheus as the data source and enter a PromQL query, for example, rate(dolphin_ip4_send_pkt_internet[5m]). Click Run query in the upper right corner to view the metric graph.
Figure 2 Grafana graph
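To break a metric down by pod in Grafana, you can aggregate by a label. The following query is only a sketch: the pod label name is an assumption and may differ depending on how the exporter labels its series.
sum by (pod) (rate(dolphin_ip4_send_pkt_internet[5m]))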
- (Optional) Use common graphs as Grafana dashboards. For details, see Create a dashboard.