
Monitoring Custom Metrics Using Prometheus

You can use AOM ICAgent to obtain custom metric data of workloads as described in Monitoring Custom Metrics on AOM. You can also install the prometheus add-on in a cluster and use Prometheus as the monitoring platform.

The following procedure uses an Nginx application as an example to describe how to use Prometheus to monitor custom metrics:

  1. Installing the Add-on

    CCE provides an add-on that integrates Prometheus functions. You can install it in a few clicks.

  2. Accessing Prometheus

    (Optional) Bind a LoadBalancer Service to Prometheus so that it can be accessed from external networks.

  3. Preparing an Application

    Prepare an application image. The application must provide a metric monitoring API for ICAgent to collect data, and the monitoring data must comply with the Prometheus specifications.

  4. Monitoring Custom Metrics

    Use the application image to deploy a workload in a cluster. Custom monitoring metrics are automatically reported to Prometheus.

  5. Configuring Collection Rules for Custom Metrics

    After collection rules are configured, custom metrics are registered with the Kubernetes custom metrics API through custom-metrics-apiserver and can be used in scenarios such as workload auto scaling.

  6. Accessing Grafana

    View Prometheus monitoring data on Grafana, an open-source visualization tool.

Constraints

To use Prometheus to monitor custom metrics, the application must provide a metric monitoring API. For details, see Prometheus Monitoring Data Collection.

Prometheus Monitoring Data Collection

Prometheus periodically calls the metric monitoring API (/metrics by default) of an application to obtain monitoring data. The application needs to provide the metric monitoring API for Prometheus to call, and the monitoring data must meet the following specifications of Prometheus:

# TYPE nginx_connections_active gauge
nginx_connections_active 2
# TYPE nginx_connections_reading gauge
nginx_connections_reading 0

Prometheus provides clients in various languages. For details about the clients, see Prometheus CLIENT LIBRARIES. For details about how to develop an exporter, see WRITING EXPORTERS. The Prometheus community provides various third-party exporters that can be directly used. For details, see EXPORTERS AND INTEGRATIONS.

Installing the Add-on

Install the add-on based on the cluster version and actual requirements.

Accessing Prometheus

After the add-on is installed, its workloads and Services run in the monitoring namespace. The StatefulSet named prometheus is the Prometheus server.

You can create a public network LoadBalancer Service so that Prometheus can be accessed from an external network.

  1. Log in to the CCE console, and click the name of the cluster with the prometheus add-on installed to access the cluster console. On the displayed page, choose Networking from the navigation pane.
  2. Click Create from YAML in the upper right corner to create a public network LoadBalancer Service.

    apiVersion: v1
    kind: Service
    metadata:
      name: prom-lb     #Service name, which can be customized.
      namespace: monitoring
      labels:
        app: prometheus
        component: server
      annotations:
        kubernetes.io/elb.id: 038ff***     #Replace it with the ID of the public network load balancer in the VPC that the cluster belongs to.
    spec:
      ports:
        - name: cce-service-0
          protocol: TCP
          port: 88     #Service port, which can be customized.
          targetPort: 9090     #Default port of Prometheus. Retain the default value.
      selector:
        app: prometheus
        component: server
        release: cceaddon-prometheus
      type: LoadBalancer

  3. After the Service is created, visit <load balancer public IP>:<Service port> to access Prometheus. You can also verify access over the Prometheus HTTP API, as sketched after this list.
  4. Choose Status > Targets to view the targets monitored by Prometheus.
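
You can also verify access over the Prometheus HTTP API. The following is a minimal sketch; replace the address with your load balancer public IP and the Service port configured above (88 in this example):

# Query the scrape status ("up") of every target.
curl "http://<load_balancer_public_IP>:88/api/v1/query?query=up"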

Preparing an Application

User-developed applications must provide a metric monitoring API for ICAgent to collect data, and the monitoring data must comply with the Prometheus specifications. For details, see Prometheus Monitoring Data Collection.

This document uses Nginx as an example to describe how to collect monitoring data. There is a module named ngx_http_stub_status_module in Nginx, which provides basic monitoring functions. You can configure the nginx.conf file to provide an interface for external systems to access Nginx monitoring data.

  1. Log in to a Linux VM that can access the Internet and run Docker commands.
  2. Create an nginx.conf file. Add the server configuration under http so that Nginx provides an interface for external systems to access the monitoring data.

    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        include /etc/nginx/conf.d/*.conf;
    
        server {
          listen 8080;
          server_name  localhost;
          location /stub_status {
             stub_status on;
             access_log off;
          }
        }
    }

  3. Create a Dockerfile that uses this configuration to build an image.

    vi Dockerfile
    The content of Dockerfile is as follows:
    FROM nginx:1.21.5-alpine
    ADD nginx.conf /etc/nginx/nginx.conf
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

  4. Use this Dockerfile to build an image and upload it to SWR. The image name is nginx:exporter.

    1. Log in to the SWR console. In the navigation pane, choose My Images and click Upload Through Client in the upper right corner. On the displayed page, click Generate a temporary login command and copy the command.
    2. Run the login command copied in the previous step on the node. If the login is successful, the message "Login Succeeded" is displayed.
    3. Run the following command to build an image named nginx. The image version is exporter.
      docker build -t nginx:exporter .
    4. Tag the image and upload it to the image repository. Change the image repository address and organization name based on your requirements.
      docker tag nginx:exporter {swr-address}/{group}/nginx:exporter
      docker push {swr-address}/{group}/nginx:exporter

  5. View application metrics.

    1. Use nginx:exporter to create a workload.
    2. Access the container and use http://<ip_address>:8080/stub_status to obtain Nginx monitoring data. <ip_address> indicates the IP address of the container. Information similar to the following is displayed.
      # curl http://127.0.0.1:8080/stub_status
      Active connections: 3 
      server accepts handled requests
       146269 146269 212 
      Reading: 0 Writing: 1 Waiting: 2
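
For reference, the nginx-prometheus-exporter used in the next section converts this stub_status output into Prometheus metrics. The following is a sketch of typical converted output for the values above; exact values depend on traffic:

# TYPE nginx_connections_active gauge
nginx_connections_active 3
# TYPE nginx_connections_waiting gauge
nginx_connections_waiting 2
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 212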

Monitoring Custom Metrics

The monitoring data provided by nginx:exporter is not in the format required by Prometheus and must be converted. To convert the format of Nginx metrics, use nginx-prometheus-exporter. Deploy nginx:exporter and nginx-prometheus-exporter in the same pod and add the following annotations during the deployment. Prometheus can then automatically collect the metrics.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-exporter
  template:
    metadata:
      labels:
        app: nginx-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/path: "/metrics"
        prometheus.io/scheme: "http"
    spec:
      containers:
        - name: container-0
          image: 'nginx:exporter'      # Replace it with the address of the image you uploaded to SWR.
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
        - name: container-1
          image: 'nginx/nginx-prometheus-exporter:0.9.0'
          command:
            - nginx-prometheus-exporter
          args:
            - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status'
      imagePullSecrets:
        - name: default-secret

In the preceding description:

  • prometheus.io/scrape: whether Prometheus collects the pod's monitoring data. Set it to true to enable collection.
  • prometheus.io/port: port from which monitoring data is collected.
  • prometheus.io/path: URL path of the API for collecting monitoring data. If this parameter is not set, the default value /metrics is used.
  • prometheus.io/scheme: protocol used for data collection. The value can be http or https.
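
For example, if the preceding manifest is saved as a file, you can deploy it and check the converted metrics with kubectl. The following is a minimal sketch; the file name nginx-exporter.yaml is an assumption:

# Deploy the workload defined above.
kubectl apply -f nginx-exporter.yaml

# Forward the exporter port locally and fetch the converted metrics.
kubectl port-forward deploy/nginx-exporter 9113:9113 &
curl http://127.0.0.1:9113/metrics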

After the application is deployed, choose Status > Targets on the Prometheus web UI. A target whose collection path uses port 9113 of the pod is displayed.

On the Graph tab, enter nginx. The related metrics are displayed.
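
For example, the following PromQL queries can be entered on the Graph tab. This is a sketch; the metric names come from nginx-prometheus-exporter:

# Current number of active client connections.
nginx_connections_active

# Per-second request rate over the last 5 minutes.
rate(nginx_http_requests_total[5m])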

Configuring Collection Rules for Custom Metrics

For details about how to configure collection rules, see Metrics Discovery and Presentation Configuration. If you have upgraded the add-on, original configurations are inherited and used.

  1. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose ConfigMaps and Secrets.
  2. Switch to the monitoring namespace, find the user-adapter-config ConfigMap (adapter-config in earlier versions) on the ConfigMaps tab, and click Update.

    Figure 1 Updating a ConfigMap

  3. In the window that slides out from the right, click Edit in the operation column of Data for the config.yaml file. Then add a custom metric collection rule under the rules field. Click OK.

    You can add multiple collection rules by adding multiple configurations under the rules field. For details, see Metrics Discovery and Presentation Configuration.

    The following is an example of a custom collection rule for the nginx:exporter application:
    rules:
    - seriesQuery: '{__name__=~"^nginx_.*",container!="POD",namespace!="",pod!=""}'
      resources:
        overrides:
          namespace:
            resource: namespace
          pod:
            resource: pod
      name:
        matches: (.*)
      metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>,container!="POD"}) by (<<.GroupBy>>)'

    The preceding example applies only to the nginx:exporter application in this example. To collect other custom metrics, add or modify rules according to the official guide.

    Figure 2 Editing ConfigMap data

  4. Redeploy the custom-metrics-apiserver workload in the monitoring namespace so that it reloads the new rules. You can also do this with kubectl, as sketched after this list.

    Figure 3 Redeploying custom-metrics-apiserver

  5. After custom-metrics-apiserver runs successfully, you can select the custom metrics reported by the nginx:exporter application when creating an HPA policy (see the sketch after this list). For details, see HPA Policy.

    Figure 4 Creating an HPA policy using custom metrics
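
The following kubectl commands sketch steps 4 and 5. They assume that the adapter runs as a Deployment named custom-metrics-apiserver and that jq is installed; adjust the names to your cluster:

# Restart the adapter so that it reloads the updated ConfigMap.
kubectl rollout restart deployment custom-metrics-apiserver -n monitoring
kubectl rollout status deployment custom-metrics-apiserver -n monitoring

# Check that the custom metric is served by the custom metrics API.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_connections_active" | jq .

An HPA policy can then reference the metric. The following is a sketch using the autoscaling/v2 API; the threshold is an arbitrary example:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-exporter-hpa     #Example name, which can be customized.
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-exporter
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: nginx_connections_active
        target:
          type: AverageValue
          averageValue: "100"     #Example threshold: average active connections per pod.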

Accessing Grafana

The prometheus add-on installs Grafana (an open-source visualization tool) and interconnects it with Prometheus. You can create a public network LoadBalancer Service so that Grafana can be accessed from the public network, and then select a proper dashboard to view the aggregated monitoring data.

  1. Log in to the CCE console, and click the name of the cluster with the prometheus add-on installed to access the cluster console. On the displayed page, choose Networking from the navigation pane.
  2. Click Create from YAML in the upper right corner to create a public network LoadBalancer Service for Grafana.

    apiVersion: v1
    kind: Service
    metadata:
      name: grafana-lb     #Service name, which can be customized.
      namespace: monitoring
      labels:
        app: grafana
      annotations:
        kubernetes.io/elb.id: 038ff***     #Replace it with the ID of the public network load balancer in the VPC that the cluster belongs to.
    spec:
      ports:
        - name: cce-service-0
          protocol: TCP
          port: 80     #Service port, which can be customized.
          targetPort: 3000     #Default port of Grafana. Retain the default value.
      selector:
        app: grafana
      type: LoadBalancer

  3. After the Service is created, visit <load balancer public IP>:<Service port> to access Grafana and select a proper dashboard to view the aggregated data.
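
To confirm that Grafana is reachable, you can query its health API. This is a sketch; replace the address with your load balancer public IP and the Service port configured above (80 in this example):

# Grafana reports its database status and version when healthy.
curl http://<load_balancer_public_IP>:80/api/health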

Appendix: Grafana Data Persistence

If Grafana data is not persisted, it may be lost when the Grafana container restarts. You can mount cloud storage to the Grafana container to persist the data.

  1. Use kubectl to connect to the cluster where Grafana is located.
  2. Create a PVC for an EVS disk.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: grafana-pvc
      namespace: monitoring
      annotations:
        everest.io/disk-volume-type: SSD
      labels:
        failure-domain.beta.kubernetes.io/region: ae-ad-1
        failure-domain.beta.kubernetes.io/zone: 
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-disk

    The EVS disk and the node where Grafana resides must be in the same AZ. Otherwise, the EVS disk cannot be attached.

    • failure-domain.beta.kubernetes.io/region: region where the EVS disk resides.
    • failure-domain.beta.kubernetes.io/zone: AZ where the EVS disk resides.
    • storage: EVS disk size. Set this parameter as required.

    You can also create EVS disks on the CCE console. For details, see Automatically Creating an EVS Disk.

  3. Modify the Grafana workload configuration and mount the EVS disk.

    kubectl edit deploy grafana -n monitoring

    Add the EVS disk to the container in the YAML file, as shown in the following example. The PVC name must be the same as that in step 2, and the mount path must be /var/lib/grafana.

    In addition, modify the upgrade policy of the Grafana workload so that at most one pod exists at a time. Otherwise, the new pod cannot attach the EVS disk (which supports ReadWriteOnce) while the old pod still holds it.

    ...
      template:
        spec:
          volumes:
            - name: cce-pvc-grafana
              persistentVolumeClaim:
                claimName: grafana-pvc
    ...
          containers:
            - volumeMounts:
                - name: cce-pvc-grafana
                  mountPath: /var/lib/grafana
    ...
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 0     #Ensure that at most one pod exists so that the EVS disk can be re-attached.

    Save the configuration. The Grafana workload will be upgraded and the EVS disk will be mounted.
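
    After the change is saved, you can verify the result with kubectl. The following is a sketch; it assumes the PVC from step 2 was created as grafana-pvc:

    # Check that the PVC is created and bound.
    kubectl get pvc grafana-pvc -n monitoring

    # Confirm that the new Grafana pod starts and mounts the disk.
    kubectl rollout status deploy/grafana -n monitoring
    kubectl exec -n monitoring deploy/grafana -- df -h /var/lib/grafana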