Monitoring Custom Metrics Using Cloud Native Cluster Monitoring

Updated on 2025-02-18 GMT+08:00

CCE provides the Cloud Native Cluster Monitoring add-on to monitor custom metrics using Prometheus.

The following procedure uses an Nginx application as an example to describe how to use Prometheus to monitor custom metrics:

  1. Installing and Accessing Cloud Native Cluster Monitoring

    CCE provides an add-on that integrates Prometheus functions. You can install it in a few clicks.

  2. Preparing an Application

    Prepare an application image. The application must provide a metric monitoring API for Prometheus to collect data, and the monitoring data must comply with the Prometheus specifications.

  3. Monitoring Custom Metrics

    Use the application image to deploy a workload in a cluster. Custom metrics will be automatically reported to Prometheus.

    Use one of the following methods to monitor custom metrics: pod annotations (Method 1), Service annotations (Method 2), PodMonitor (Method 3), ServiceMonitor (Method 4), or AdditionalScrapeConfigs (Method 5). Each method is described below.

Custom Metric Billing

After Cloud Native Cluster Monitoring is interconnected with AOM, metrics will be reported to the AOM instance you select. Basic metrics can be monitored for free, but custom metrics are billed based on the standard pricing of AOM. For details, see Pricing Details.

Prometheus Monitoring Data Collection

Prometheus periodically calls the metric monitoring API (/metrics by default) of an application to obtain monitoring data. The application must expose this API for Prometheus to call, and the returned data must comply with the Prometheus exposition format, for example:

# TYPE nginx_connections_active gauge
nginx_connections_active 2
# TYPE nginx_connections_reading gauge
nginx_connections_reading 0

Prometheus provides client libraries in various languages. For details, see the Prometheus Client Libraries documentation. For details about how to develop an exporter, see Writing Exporters. The Prometheus community also provides various third-party exporters that can be used directly. For details, see Exporters and Integrations.
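You can quickly check whether an application's endpoint returns data in this format with curl (a sketch; replace the address and port with those of your application):

# Query the metrics endpoint and filter for the Nginx connection metrics shown above.
curl -s http://<application_ip>:<metrics_port>/metrics | grep nginx_connections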

Constraints

  • To use Prometheus to monitor custom metrics, the application needs to provide a metric monitoring API. For details, see Prometheus Monitoring Data Collection.
  • Currently, metrics in the kube-system and monitoring namespaces cannot be collected when pod and service annotations are used. To collect metrics in the two namespaces, use PodMonitor and ServiceMonitor.
  • The nginx/nginx-prometheus-exporter:0.9.0 image is pulled from an external registry for the Nginx application. Bind an EIP to the node where the application is deployed, or upload the image to SWR, to prevent application deployment failures.

Installing and Accessing Cloud Native Cluster Monitoring

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Add-ons. On the displayed page, locate Cloud Native Cluster Monitoring and click Install. In addition to the monitoring capabilities, this add-on interconnects monitoring data with Monitoring Center.

    When installing this add-on, pay attention to the following configurations. Configure other parameters as required. For details, see Cloud Native Cluster Monitoring.
    • For versions 3.8.0 and later, enable custom metric collection.

    • For versions earlier than 3.8.0, do not enable custom metric collection.

  3. After this add-on is installed, the Prometheus server is deployed as a StatefulSet in the monitoring namespace. You can then deploy workloads and Services.

    You can create a public network LoadBalancer Service so that Prometheus can be accessed from external networks.

    1. Log in to the CCE console and click the name of the cluster with Prometheus installed to access the cluster console. In the navigation pane, choose Services & Ingresses.
    2. Click Create from YAML in the upper right corner to create a public network LoadBalancer Service.
      apiVersion: v1
      kind: Service
      metadata:
        name: prom-lb     # Service name, which is customizable.
        namespace: monitoring
        labels:
          app: prometheus
          component: server
        annotations:
          kubernetes.io/elb.id: 038ff***     # Replace 038ff*** with the ID of the public network load balancer in the VPC that the cluster belongs to.
      spec:
        ports:
          - name: cce-service-0
            protocol: TCP
            port: 88             # Service port, which is customizable.
            targetPort: 9090     # Default Prometheus port. Retain the default value.
        selector:                # The label selector can be adjusted based on the label of a Prometheus server instance.
          app.kubernetes.io/name: prometheus
          prometheus: server
        type: LoadBalancer
    3. After the Service is created, enter <Public IP address of the load balancer>:<Service port> in the address box of a browser to access Prometheus. You can also check access using the Prometheus HTTP API, as shown after this procedure.
      Figure 1 Accessing Prometheus
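As a quick check (a sketch, assuming the example Service port 88 above), query the Prometheus HTTP API to confirm that scrape targets are registered:

# List the active scrape targets through the load balancer address.
curl http://<load_balancer_public_ip>:88/api/v1/targets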

Preparing an Application

User-developed applications must provide a metric monitoring API, and the monitoring data must comply with the Prometheus specifications. For details, see Prometheus Monitoring Data Collection.

This section uses Nginx as an example to describe how to collect monitoring data. Nginx provides a built-in module named ngx_http_stub_status_module, which offers basic status monitoring. By configuring the nginx.conf file, you can expose an interface for external systems to access the Nginx monitoring data.

  1. Log in to a Linux VM that can access the Internet and run Docker commands.
  2. Create an nginx.conf file. Add the server configuration under http so that Nginx provides an interface for external systems to access the monitoring data.

    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        include /etc/nginx/conf.d/*.conf;
    
        server {
          listen 8080;
          server_name  localhost;
          location /stub_status {
             stub_status on;
             access_log off;
          }
        }
    }

  3. Create a Dockerfile for building an image with this configuration.

    vi Dockerfile
    The content of the Dockerfile is as follows:
    FROM nginx:1.21.5-alpine
    ADD nginx.conf /etc/nginx/nginx.conf
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

  4. Use this Dockerfile to create an image and upload it to SWR. The image name is nginx:exporter. For details about how to upload an image, see Uploading an Image Through a Container Engine Client.

    1. In the navigation pane, choose My Images. In the upper right corner, click Upload Through Client. In the displayed dialog box, click Generate a temporary login command and copy the generated command.
    2. Run the login command copied in the previous step on the node. If the login is successful, the message "Login Succeeded" is displayed.
    3. Run the following command to build an image named nginx. The image version is exporter.
      docker build -t nginx:exporter .
    4. Tag the image and upload it to the image repository. Change the image repository address and organization name based on your requirements.
      docker tag nginx:exporter swr.ap-southeast-1.myhuaweicloud.com/dev-container/nginx:exporter
      docker push swr.ap-southeast-1.myhuaweicloud.com/dev-container/nginx:exporter

  5. View application metrics.

    1. Use nginx:exporter to create a workload.
    2. Access the container and use http://<ip_address>:8080/stub_status to obtain the Nginx monitoring data. <ip_address> indicates the IP address of the container. Information similar to the following is displayed. (You can also preview the converted Prometheus-format metrics locally, as shown in the sketch after this procedure.)
      # curl http://127.0.0.1:8080/stub_status
      Active connections: 3 
      server accepts handled requests
       146269 146269 212 
      Reading: 0 Writing: 1 Waiting: 2
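Before deploying to the cluster, you can optionally preview the Prometheus-format metrics that the exporter sidecar will produce. The following is a local sketch using Docker (it assumes the nginx:exporter image built above; host networking lets the exporter reach the published Nginx port):

# Start Nginx with the stub_status endpoint published on port 8080.
docker run -d --name nginx-metrics -p 8080:8080 nginx:exporter
# Start the exporter and point it at the stub_status endpoint.
docker run -d --network host nginx/nginx-prometheus-exporter:0.9.0 -nginx.scrape-uri=http://127.0.0.1:8080/stub_status
# The exporter listens on port 9113 by default.
curl http://127.0.0.1:9113/metrics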

Method 1: Configuring Pod Annotations

When the annotation settings of pods comply with the Prometheus data collection rules, Prometheus automatically collects the metrics exposed by the pods.

The monitoring data provided by nginx:exporter (the stub_status output) is not in the format required by Prometheus and must be converted. To convert the Nginx metrics, use nginx-prometheus-exporter: deploy nginx:exporter and nginx-prometheus-exporter in the same pod and add the following annotations during deployment. Prometheus can then automatically collect the metrics.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-exporter
  template:
    metadata:
      labels:
        app: nginx-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/path: "/metrics"
        prometheus.io/scheme: "http"
    spec:
      containers:
        - name: container-0
          image: 'nginx:exporter'      # Replace it with the address of the image you uploaded to SWR.
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
        - name: container-1
          image: 'nginx/nginx-prometheus-exporter:0.9.0'
          command:
            - nginx-prometheus-exporter
          args:
            - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status'
      imagePullSecrets:
        - name: default-secret

In the preceding annotations:

  • prometheus.io/scrape: whether Prometheus collects monitoring data from the pod. Set it to true.
  • prometheus.io/port: port for collecting monitoring data, which varies depending on the application. In this example, the port is 9113.
  • prometheus.io/path: URL path of the API for collecting monitoring data. If this parameter is not set, the default value /metrics is used.
  • prometheus.io/scheme: protocol used for data collection. The value can be http or https.

After the application is successfully deployed, access Prometheus to query custom metrics by job name.

The custom metrics related to Nginx can be queried. In the following example, the job label (monitoring/kubernetes-pods) indicates that the metrics are reported based on the pod annotations.

nginx_connections_accepted{cluster="2048c170-8359-11ee-9527-0255ac1000cf", cluster_category="CCE", cluster_name="cce-test", container="container-0", instance="10.0.0.46:9113", job="monitoring/kubernetes-pods", kubernetes_namespace="default", kubernetes_pod="nginx-exporter-77bf4d4948-zsb59", namespace="default", pod="nginx-exporter-77bf4d4948-zsb59", prometheus="monitoring/server"}
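For example, you can run a PromQL query in the Prometheus UI to track how quickly connections are being accepted (a sketch; the job label matches the example output above):

# Per-second rate of accepted Nginx connections over the last 5 minutes.
rate(nginx_connections_accepted{job="monitoring/kubernetes-pods"}[5m])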

Method 2: Configuring Service Annotations

When the annotation settings of Services comply with the Prometheus data collection rules, Prometheus automatically collects the metrics exposed by the Services.

Service annotations are used in the same way as pod annotations, but their application scenarios differ: pod annotations suit metrics about pod resource usage, while Service annotations suit Service-level metrics such as the number of requests for a Service.

The following is an example configuration:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: container-0
          image: 'nginx:exporter'      # Replace it with the address of the image you uploaded to SWR.
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
        - name: container-1
          image: 'nginx/nginx-prometheus-exporter:0.9.0'
          command:
            - nginx-prometheus-exporter
          args:
            - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status'
      imagePullSecrets:
        - name: default-secret

The following is an example Service configuration:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  labels:
    app: nginx-test
  namespace: default
  annotations: 
    prometheus.io/scrape: "true"  # Value true indicates that service discovery is enabled.
    prometheus.io/port: "9113"  # Set it to the port on which metrics are exposed.
    prometheus.io/path: "/metrics" # Enter the URI path under which metrics are exposed. Generally, the value is /metrics.
spec:
  selector:
    app: nginx-test
  externalTrafficPolicy: Cluster
  ports:
    - name: cce-service-0
      targetPort: 80
      nodePort: 0
      port: 8080
      protocol: TCP
    - name: cce-service-1
      protocol: TCP
      port: 9113
      targetPort: 9113
  type: NodePort

After the application is successfully deployed, access Prometheus to query custom metrics. In the following example, the job label carries the Service name (nginx-test), indicating that the metrics are reported based on the Service annotations.

nginx_connections_accepted{app="nginx-test", cluster="2048c170-8359-11ee-9527-0255ac1000cf", cluster_category="CCE", cluster_name="cce-test", instance="10.0.0.38:9113", job="nginx-test", kubernetes_namespace="default", kubernetes_service="nginx-test", namespace="default", pod="nginx-test-78cfb65889-gtv7z", prometheus="monitoring/server", service="nginx-test"}
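As with pod annotations, the metrics can be queried in the Prometheus UI, now selected by the Service-based job label (a sketch matching the example above):

# Current number of active Nginx connections, selected by the Service-based job label.
nginx_connections_active{job="nginx-test"}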

Method 3: Configuring PodMonitor

Cloud Native Cluster Monitoring allows you to configure metric collection tasks using PodMonitor and ServiceMonitor. Prometheus Operator watches PodMonitor resources and uses the Prometheus reload mechanism to hot-update the collection tasks on the Prometheus instance.

For the CRDs defined by Prometheus Operator, see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/charts/crds/crds.

The following is an example configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test2
  template:
    metadata:
      labels:
        app: nginx-test2
    spec:
      containers:
      - image: nginx:exporter     # Replace it with the address of the image you uploaded to SWR.
        name: container-0
        ports:
        - containerPort: 9113      # Port on which metrics are exposed (served by the exporter container in this pod).
          name: nginx-test2        # Port name referenced by the PodMonitor configuration.
          protocol: TCP
        resources:
          limits:
            cpu: 250m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
      - name: container-1
        image: 'nginx/nginx-prometheus-exporter:0.9.0'
        command:
          - nginx-prometheus-exporter
        args:
          - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status'
      imagePullSecrets:
        - name: default-secret

The following is an example PodMonitor configuration:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: podmonitor-nginx   # PodMonitor name
  namespace: monitoring    # Namespace that PodMonitor belongs to. monitoring is recommended.
spec:
  namespaceSelector:       # A selector matching the namespace where the workload is located
    matchNames:
    - default              # Namespace that the workload belongs to
  jobLabel: podmonitor-nginx
  podMetricsEndpoints:
  - interval: 15s 
    path: /metrics            # Path under which metrics are exposed by the workload
    port: nginx-test2         # Port on which metrics are exposed by the workload
    tlsConfig:
      insecureSkipVerify: true
  selector:  
    matchLabels:
      app: nginx-test2   # Label carried by the pod, which can be selected by the selector

After the application is successfully deployed, access Prometheus to query custom metrics. In the following example, the job label (monitoring/podmonitor-nginx) indicates that the metrics are reported based on the PodMonitor configuration.

nginx_connections_accepted{cluster="2048c170-8359-11ee-9527-0255ac1000cf", cluster_category="CCE", cluster_name="cce-test", container="container-0", endpoint="nginx-test2", instance="10.0.0.44:9113", job="monitoring/podmonitor-nginx", namespace="default", pod="nginx-test2-746b7f8fdd-krzfp", prometheus="monitoring/server"}
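If no data appears, you can confirm that the PodMonitor exists and inspect its configuration (a sketch; resource names match the example above):

# Verify the PodMonitor resource in the monitoring namespace.
kubectl get podmonitor podmonitor-nginx -n monitoring
# Inspect its selectors and endpoints.
kubectl describe podmonitor podmonitor-nginx -n monitoring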

Method 4: Configuring ServiceMonitor

Cloud Native Cluster Monitoring allows you to configure metric collection tasks using PodMonitor and ServiceMonitor. Prometheus Operator watches ServiceMonitor resources and uses the Prometheus reload mechanism to hot-update the collection tasks on the Prometheus instance.

For the CRDs defined by Prometheus Operator, see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/charts/crds/crds.

The following is an example configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test3
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test3
  template:
    metadata:
      labels:
        app: nginx-test3
    spec:
      containers:
      - image: nginx:exporter        # Replace it with the address of the image you uploaded to SWR.
        name: container-0
        resources:
          limits:
            cpu: 250m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
      - name: container-1
        image: 'nginx/nginx-prometheus-exporter:0.9.0'
        command:
          - nginx-prometheus-exporter
        args:
          - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status'
      imagePullSecrets:
        - name: default-secret

The following is an example Service configuration:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test3
  labels:
    app: nginx-test3
  namespace: default
spec:
  selector:
    app: nginx-test3
  externalTrafficPolicy: Cluster
  ports:
    - name: cce-service-0
      targetPort: 80
      nodePort: 0
      port: 8080
      protocol: TCP
    - name: servicemonitor-ports
      protocol: TCP
      port: 9113
      targetPort: 9113
  type: NodePort

The following is an example ServiceMonitor configuration:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: servicemonitor-nginx
  namespace: monitoring
spec:
  # Configure the name of the port on which metrics are exposed.
  endpoints:
  - path: /metrics
    port: servicemonitor-ports
  jobLabel: servicemonitor-nginx
  # Namespaces in which the collection task takes effect. If this parameter is not set, the default namespace is used.
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app: nginx-test3

After the application is successfully deployed, access Prometheus to query custom metrics. In the following example, the endpoint label (servicemonitor-ports) indicates that the metrics are reported based on the ServiceMonitor configuration.

nginx_connections_accepted{cluster="2048c170-8359-11ee-9527-0255ac1000cf", cluster_category="CCE", cluster_name="cce-test", endpoint="servicemonitor-ports", instance="10.0.0.47:9113", job="nginx-test3", namespace="default", pod="nginx-test3-6f8bccd9-f27hv", prometheus="monitoring/server", service="nginx-test3"}
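Similarly, you can verify that the ServiceMonitor exists and that its endpoint port name matches the Service (a sketch; names match the example above):

# Verify the ServiceMonitor resource.
kubectl get servicemonitor servicemonitor-nginx -n monitoring
# Confirm that the Service exposes a port named servicemonitor-ports.
kubectl get svc nginx-test3 -n default -o jsonpath='{.spec.ports[*].name}'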

Method 5: Configuring AdditionalScrapeConfigs

NOTICE:

This method requires Cloud Native Cluster Monitoring 3.10.1 or later.

AdditionalScrapeConfigs allows you to store additional Prometheus scrape configurations under a key in a secret and attach them to Cloud Native Cluster Monitoring.

This mechanism bypasses the common scrape configuration generation logic and passes the configuration directly to Prometheus, so you must ensure that the configuration is correct. You are advised to refer to the official scrape_config documentation.

  1. Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Use YAML to create the following secret:

    kind: Secret
    apiVersion: v1
    type: Opaque
    metadata:
      name: additional-scrape-configs
      namespace: monitoring  # monitoring is only an example. The namespace must be the same as that of Cloud Native Cluster Monitoring.
    stringData:
      # The following is a metric collection example for Cloud Native Cluster Monitoring without local data storage. Replace the settings as needed.
      prometheus-additional.yaml: |-
        - job_name: custom-job-test
          metrics_path: /metrics
          relabel_configs:
          - action: keep
            source_labels:
            - __meta_kubernetes_pod_label_app
            - __meta_kubernetes_pod_labelpresent_app
            regex: (prometheus-lightweight);true
          - action: keep
            source_labels:
            - __meta_kubernetes_pod_container_port_name
            regex: web
          kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
              - monitoring

  3. Edit the persistent-user-config configuration item to enable AdditionalScrapeConfigs.

    kubectl edit configmap persistent-user-config -n monitoring

    Add --common.prom.default-additional-scrape-configs-key=prometheus-additional.yaml under operatorConfigOverride to enable AdditionalScrapeConfigs as follows:

    ...
    data:
      lightweight-user-config.yaml: |
        customSettings:
          additionalScrapeConfigs: []
          agentExtraArgs: []
          metricsDeprecated:
            globalDeprecateMetrics: []
          nodeExporterConfigOverride: []
          operatorConfigOverride: 
          - --common.prom.default-additional-scrape-configs-key=prometheus-additional.yaml
    ...

  4. Go to the Grafana or AOM page to check whether your custom metrics are collected.
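You can also verify the configuration from the CLI (a sketch; names match the examples above):

# Confirm the secret holding the additional scrape configuration exists.
kubectl get secret additional-scrape-configs -n monitoring
# Confirm the operator override was applied.
kubectl get configmap persistent-user-config -n monitoring -o yaml | grep additional-scrape-configs-key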
