Monitoring Metrics of Master Node Components Using Prometheus
This section describes how to use Prometheus to monitor the kube-apiserver, kube-controller, kube-scheduler, and etcd-server components on the master node.
Viewing the Metrics of Master Node Components in Monitoring Center
Monitoring Center can monitor the kube-apiserver component on the master node. After you enable Monitoring Center in the cluster (with Cloud Native Cluster Monitoring 3.5.0 or later installed), you can view kube-apiserver metrics in the API Server view on the dashboard.
To monitor the kube-controller, kube-scheduler, and etcd-server components, perform the following steps.
The basic container metrics do not include those of the kube-controller, kube-scheduler, and etcd-server components. Because metrics that Monitoring Center reports to AOM for these three components are billed, Monitoring Center does not collect them by default.
- Log in to the CCE console and click the cluster name to access the cluster console.
- In the navigation pane, choose ConfigMaps and Secrets, switch to the monitoring namespace, and locate the persistent-user-config configuration item.
- Click Update, edit the configuration data, and delete the following entries from the serviceMonitorDisable field.
serviceMonitorDisable:
- monitoring/kube-controller
- monitoring/kube-scheduler
- monitoring/etcd-server
- monitoring/log-operator
Figure 1 Deleting the configuration
- Click OK.
- Wait for five minutes. Then, go to the AOM console, locate the AOM instance reported by the cluster on the Monitoring > Metric Monitoring page, and view the metrics of the preceding components.
Figure 2 Viewing metrics
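If you prefer to verify the change with kubectl instead of the console, a quick check such as the following can confirm that the entries were removed (this assumes the monitoring namespace and the persistent-user-config name used above):

# The serviceMonitorDisable field should no longer list the three components.
kubectl get configmap persistent-user-config -n monitoring -o yaml | grep -A5 serviceMonitorDisable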
Collecting Metrics of Master Node Components Using Self-Built Prometheus
This section describes how to collect metrics of master node components using self-built Prometheus.
- The cluster version must be 1.19 or later.
- Install self-built Prometheus using Helm by referring to Prometheus, and use prometheus-operator to manage the installed Prometheus by referring to Prometheus Operator.
The Prometheus (EOM) add-on has reached end of maintenance and does not support this function. Do not use it.
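If you have not yet set up such a Prometheus, one common approach (a sketch, not a requirement; the release name and namespace below are examples) is to install the community kube-prometheus-stack chart, which bundles Prometheus with prometheus-operator:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Installs Prometheus together with prometheus-operator into the monitoring namespace.
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace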
- Use kubectl to connect to the cluster.
- Modify the ClusterRole of Prometheus. (A ClusterRole is cluster-scoped, so no namespace needs to be specified; the role name depends on how Prometheus was installed.)
kubectl edit ClusterRole prometheus
Add the following content under the rules field:
rules:
...
- apiGroups:
  - proxy.exporter.k8s.io
  resources:
  - "*"
  verbs: ["get", "list", "watch"]
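To confirm that the rule took effect, you can inspect the rendered ClusterRole (the role name prometheus is an assumption and depends on your installation):

# The output should include the proxy.exporter.k8s.io rule added above.
kubectl get clusterrole prometheus -o yaml | grep -B1 -A5 proxy.exporter.k8s.io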
- Create a file named kube-apiserver.yaml and edit it.
vi kube-apiserver.yaml
Example file content:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: apiserver
  name: kube-apiserver
  namespace: monitoring    # Change it to the namespace where Prometheus will be installed.
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    metricRelabelings:
    - action: keep
      regex: (aggregator_unavailable_apiservice|apiserver_admission_controller_admission_duration_seconds_bucket|apiserver_admission_webhook_admission_duration_seconds_bucket|apiserver_admission_webhook_admission_duration_seconds_count|apiserver_client_certificate_expiration_seconds_bucket|apiserver_client_certificate_expiration_seconds_count|apiserver_current_inflight_requests|apiserver_request_duration_seconds_bucket|apiserver_request_total|go_goroutines|kubernetes_build_info|process_cpu_seconds_total|process_resident_memory_bytes|rest_client_requests_total|workqueue_adds_total|workqueue_depth|workqueue_queue_duration_seconds_bucket|aggregator_unavailable_apiservice_total|rest_client_request_duration_seconds_bucket)
      sourceLabels:
      - __name__
    - action: drop
      regex: apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)
      sourceLabels:
      - __name__
      - le
    port: https
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      serverName: kubernetes
  jobLabel: component
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      component: apiserver
      provider: kubernetes
Create a ServiceMonitor:
kubectl apply -f kube-apiserver.yaml
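To check that the ServiceMonitor was created (assuming the monitoring namespace used above), you can run, for example:

kubectl get servicemonitor kube-apiserver -n monitoring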
- Create a file named kube-controller.yaml and edit it.
vi kube-controller.yaml
Example file content:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-controller
  name: kube-controller-manager
  namespace: monitoring    # Change it to the namespace where Prometheus will be installed.
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    honorLabels: true
    port: https
    relabelings:
    - regex: (.+)
      replacement: /apis/proxy.exporter.k8s.io/v1beta1/kube-controller-proxy/${1}/metrics
      sourceLabels:
      - __address__
      targetLabel: __metrics_path__
    - regex: (.+)
      replacement: ${1}
      sourceLabels:
      - __address__
      targetLabel: instance
    - replacement: kubernetes.default.svc.cluster.local:443
      targetLabel: __address__
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  jobLabel: app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: kube-controller-proxy
      version: v1
Create a ServiceMonitor:
kubectl apply -f kube-controller.yaml
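Once Prometheus picks up this target, a simple sanity check is to query one of the kube-controller-manager metrics in the Prometheus UI, for example (metric availability can vary with the Kubernetes version):

# PromQL: current depth of the controller work queues, by queue name.
sum by (name) (workqueue_depth)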
- Create a file named kube-scheduler.yaml and edit it.
vi kube-scheduler.yaml
Example file content:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-scheduler
  name: kube-scheduler
  namespace: monitoring    # Change it to the namespace where Prometheus will be installed.
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    honorLabels: true
    port: https
    relabelings:
    - regex: (.+)
      replacement: /apis/proxy.exporter.k8s.io/v1beta1/kube-scheduler-proxy/${1}/metrics
      sourceLabels:
      - __address__
      targetLabel: __metrics_path__
    - regex: (.+)
      replacement: ${1}
      sourceLabels:
      - __address__
      targetLabel: instance
    - replacement: kubernetes.default.svc.cluster.local:443
      targetLabel: __address__
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  jobLabel: app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: kube-scheduler-proxy
      version: v1
Create a ServiceMonitor:
kubectl apply -f kube-scheduler.yaml
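Similarly, for the scheduler you might query scheduling attempt results once the target is up (metric names can vary across Kubernetes versions):

# PromQL: rate of scheduling attempts by result (scheduled, unschedulable, error).
sum by (result) (rate(scheduler_schedule_attempts_total[5m]))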
- Create a file named etcd-server.yaml and edit it.
vi etcd-server.yaml
Example file content:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: etcd-server
  name: etcd-server
  namespace: monitoring    # Change it to the namespace where Prometheus will be installed.
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    honorLabels: true
    port: https
    relabelings:
    - regex: (.+)
      replacement: /apis/proxy.exporter.k8s.io/v1beta1/etcd-server-proxy/${1}/metrics
      sourceLabels:
      - __address__
      targetLabel: __metrics_path__
    - regex: (.+)
      replacement: ${1}
      sourceLabels:
      - __address__
      targetLabel: instance
    - replacement: kubernetes.default.svc.cluster.local:443
      targetLabel: __address__
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  jobLabel: app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: etcd-server-proxy
      version: v1
Create a ServiceMonitor:
kubectl apply -f etcd-server.yaml
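For etcd, a common health check query is (assuming the default etcd metric names):

# PromQL: 1 per etcd member that currently has a leader; 0 indicates a problem.
etcd_server_has_leader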
- Access Prometheus and choose Status > Targets.
The preceding master node components are displayed.
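You can also confirm scraping with an up query in the Prometheus UI. The job values come from the jobLabel settings above (for example, app labels such as kube-controller-proxy), so adjust the matcher to your labels:

# PromQL: up is 1 when the last scrape of a target succeeded.
up{job=~".*apiserver.*|.*controller.*|.*scheduler.*|.*etcd.*"}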