Monitoring by Using the prometheus Add-on
You can use AOM ICAgent to obtain custom metric data of workloads as described in Custom Monitoring. You can also install the prometheus add-on in a cluster and use Prometheus as the monitoring platform.
Installing the Add-on
For details about how to install the prometheus add-on, see prometheus.
Accessing Prometheus
After the prometheus add-on is installed, a series of workloads and Services is deployed in the cluster. The Prometheus StatefulSet is the Prometheus server.
You can create a public network LoadBalancer Service so that Prometheus can be accessed from an external network.
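If you prefer to create the Service with a YAML manifest instead of the console, the following is a minimal sketch. The Service name, the monitoring namespace, the app: prometheus selector, and the ELB ID placeholder are assumptions; check the labels of the Prometheus server pods in your cluster and replace <your-elb-id> with the ID of an existing public network load balancer.

apiVersion: v1
kind: Service
metadata:
  name: prometheus-lb                     # hypothetical Service name
  namespace: monitoring                   # assumption: namespace used by the add-on
  annotations:
    kubernetes.io/elb.id: <your-elb-id>   # ID of an existing public network load balancer
spec:
  type: LoadBalancer
  selector:
    app: prometheus                       # assumption: verify the actual Prometheus server pod labels
  ports:
    - name: web
      protocol: TCP
      port: 9090
      targetPort: 9090                    # default Prometheus server port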
After the creation is complete, click the access address to access Prometheus.
Choose Status > Targets to view the targets monitored by Prometheus.
Monitoring Custom Metrics
Custom metrics can also be monitored in Prometheus, and the configuration is simple. For example, for the nginx:exporter application described in Custom Monitoring, you only need to add the following annotations to the workload during deployment. Prometheus then collects the exposed metrics automatically.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-exporter
  template:
    metadata:
      labels:
        app: nginx-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/path: "/metrics"
        prometheus.io/scheme: "http"
    spec:
      containers:
        - name: container-0
          image: '{swr-address}/{group}/nginx:exporter'
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
        - name: container-1
          image: 'nginx/nginx-prometheus-exporter:0.9.0'
          command:
            - nginx-prometheus-exporter
          args:
            - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status'
      imagePullSecrets:
        - name: default-secret
In the preceding example:
- prometheus.io/scrape: whether Prometheus collects monitoring data from the pod. Set this annotation to true to enable collection.
- prometheus.io/port: port from which monitoring data is collected.
- prometheus.io/path: URL path of the API from which monitoring data is collected. If this annotation is not set, the default value /metrics is used.
- prometheus.io/scheme: protocol used for data collection. The value can be http or https.
After the application is deployed, a collection target on port 9113 can be found under Status > Targets.
On the Graph tab page, enter nginx. Nginx-related metrics (for example, nginx_connections_active) are displayed.
Accessing Grafana
The prometheus add-on also installs Grafana (an open-source visualization tool) and interconnects it with Prometheus. You can create a public network LoadBalancer Service so that Grafana can be accessed from the public network and Prometheus monitoring data can be viewed on Grafana dashboards.
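As with Prometheus, you can create this Service with a YAML manifest. The following is a minimal sketch under the same assumptions (monitoring namespace, app: grafana selector, placeholder ELB ID); verify the actual Grafana pod labels in your cluster before applying it.

apiVersion: v1
kind: Service
metadata:
  name: grafana-lb                        # hypothetical Service name
  namespace: monitoring                   # assumption: namespace used by the add-on
  annotations:
    kubernetes.io/elb.id: <your-elb-id>   # ID of an existing public network load balancer
spec:
  type: LoadBalancer
  selector:
    app: grafana                          # assumption: verify the actual Grafana pod labels
  ports:
    - name: web
      protocol: TCP
      port: 3000
      targetPort: 3000                    # default Grafana port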
Click the access address to access Grafana and select a proper dashboard to view the aggregated content.
Grafana Data Persistence
Currently, Grafana data in the prometheus add-on is not persistent. If the Grafana container is restarted, the data will be lost. You can mount cloud storage to the Grafana container to achieve Grafana data persistence.
1. Use kubectl to connect to the cluster where Grafana resides. For details, see Connecting to a Cluster Using kubectl.
2. Create a PVC for an EVS disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
  namespace: monitoring
  annotations:
    everest.io/disk-volume-type: SSD
  labels:
    failure-domain.beta.kubernetes.io/region: eu-west-101
    failure-domain.beta.kubernetes.io/zone: eu-west-101a
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
The EVS disk and the node where Grafana resides must be in the same AZ. Otherwise, the EVS disk cannot be attached.
- failure-domain.beta.kubernetes.io/region: region where the EVS disk resides.
- failure-domain.beta.kubernetes.io/zone: AZ where the EVS disk resides.
- storage: EVS disk size. Set this parameter as required.
You can also create EVS disks on the CCE console. For details, see Using a Storage Class to Create a PVC.
3. Modify the Grafana workload configuration and mount the EVS disk.
kubectl edit deploy grafana -n monitoring
Add the EVS disk to the container in the YAML file, as shown in the following example. The PVC name must be the same as the one created in 2, and the mount path must be /var/lib/grafana.
In addition, modify the upgrade policy of the Grafana workload so that the maximum number of pods is 1.
...
  template:
    spec:
      volumes:
        - name: cce-pvc-grafana
          persistentVolumeClaim:
            claimName: grafana-pvc
      ...
      containers:
        - volumeMounts:
            - name: cce-pvc-grafana
              mountPath: /var/lib/grafana
          ...
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
Save the configuration. The Grafana workload will be upgraded and the EVS disk will be mounted.