Updated on 2024-09-06 GMT+08:00

Connecting MySQL Exporter

Application Scenario

MySQL Exporter collects MySQL database metrics. The core database metrics collected by the Exporter are used for alarm reporting and dashboard display. Currently, the Exporter supports MySQL 5.6 or later. If the MySQL version is earlier than 5.6, some metrics may fail to be collected.
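If you are unsure of the database version, you can check it on the MySQL server before connecting the Exporter, for example:

SELECT VERSION();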

You are advised to use CCE for unified Exporter management.

Prerequisites

Database Authorization

  1. Log in to the cluster and run the following command:

    kubectl exec -it ${mysql_podname} -- bash
    mysql -u root -p

  2. Log in to the database and run the following command:

    CREATE USER 'exporter'@'x.x.x.x(hostip)' IDENTIFIED BY 'xxxx(password)' WITH MAX_USER_CONNECTIONS 3;
    GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'x.x.x.x(hostip)';

  3. Check whether the authorization is successful.

    Run the following SQL statement to check whether the exporter user has been created. The host column indicates the IP address of the node where the MySQL database is located.

    select user,host from mysql.user;
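    If the authorization is successful, the query result contains a row for the exporter user, similar to the following (illustrative output only; the host value is the IP address you granted):

    +----------+-----------+
    | user     | host      |
    +----------+-----------+
    | exporter | x.x.x.x   |
    +----------+-----------+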

Deploying MySQL Exporter

  1. Log in to the CCE console.
  2. Click the connected cluster. The cluster management page is displayed.
  3. Perform the following operations to deploy Exporter:

    1. Use Secret to manage MySQL connection strings.

      In the navigation pane, choose ConfigMaps and Secrets. In the upper right corner, click Create from YAML and enter the following YAML file. The connection string is stored as an Opaque secret; values provided in stringData are Base64-encoded by Kubernetes when the secret is saved.

      apiVersion: v1
      kind: Secret
      metadata:
        name: mysql-secret
        namespace: default
      type: Opaque
      stringData:
        datasource: "user:password@tcp(ip:port)/" # MySQL connection string. Replace user, password, ip, and port with the actual values.

      For details about how to configure a secret, see Creating a Secret.
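      Alternatively, you can create an equivalent secret from the command line instead of the console. This is only a sketch; replace the placeholder connection string with your actual values:

      kubectl create secret generic mysql-secret -n default \
        --from-literal=datasource='user:password@tcp(ip:port)/'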

    2. Deploy MySQL Exporter.

      In the navigation pane, choose Workloads. In the upper right corner, click Create Workload. Set the workload type to Deployment and select the desired namespace to deploy MySQL Exporter. The following is a YAML configuration example for deploying the Exporter:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          k8s-app: mysql-exporter # Change the name based on service requirements. You are advised to add the MySQL instance information, for example, ckafka-2vrgx9fd-mysql-exporter.
        name: mysql-exporter # Change the name based on service requirements. You are advised to add the MySQL instance information, for example, ckafka-2vrgx9fd-mysql-exporter.
        namespace: default # Must be the same as the namespace of MySQL.
      spec:
        replicas: 1
        selector:
          matchLabels:
            k8s-app: mysql-exporter # Change the name based on service requirements. You are advised to add the MySQL instance information, for example, ckafka-2vrgx9fd-mysql-exporter.
        template:
          metadata:
            labels:
              k8s-app: mysql-exporter # Change the name based on service requirements. You are advised to add the MySQL instance information, for example, ckafka-2vrgx9fd-mysql-exporter.
          spec:
            containers:
            - env:
              - name: DATA_SOURCE_NAME
                valueFrom:
                  secretKeyRef:
                    name: mysql-secret
                    key: datasource	
              image: swr.cn-north-4.myhuaweicloud.com/aom-exporter/mysqld-exporter:v0.12.1
              imagePullPolicy: IfNotPresent
              name: mysql-exporter
              ports:
              - containerPort: 9104
                name: metric-port 
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
            dnsPolicy: ClusterFirst
            imagePullSecrets:
            - name: default-secret
            restartPolicy: Always
            schedulerName: default-scheduler
            securityContext: {}
            terminationGracePeriodSeconds: 30
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: mysql-exporter
      spec:
        type: NodePort
        selector:
          k8s-app: mysql-exporter
        ports:
          - protocol: TCP
            nodePort: 30337
            port: 9104
            targetPort: 9104

      For details about Exporter parameters, see mysql-exporter.
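      You can also confirm from the command line that the Deployment, pod, and Service were created. The following is a minimal check, assuming the default namespace and labels used in the example above:

      kubectl get deploy,pod -l k8s-app=mysql-exporter -n default
      kubectl get svc mysql-exporter -n default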

    3. Check whether MySQL Exporter is successfully deployed.
      1. On the Deployments tab page, click the Deployment created in the previous step. In the pod list, choose More > View Logs in the Operation column to confirm that the Exporter has started successfully and that its access address is exposed.
      2. Perform verification using one of the following methods (sample output is shown after this list):
        • Log in to a cluster node and run either of the following commands:
          curl http://{Cluster IP address}:9104/metrics
          curl http://{Private IP address of any node in the cluster}:30337/metrics
        • In the instance list, choose More > Remote Login in the Operation column and run the following command:
          curl http://localhost:9104/metrics
        • Access http://{Public IP address of any node in the cluster}:30337/metrics.
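        If the Exporter is running properly, the curl commands return metrics in Prometheus text format. The exact metrics depend on the MySQL version, but output similar to the following excerpt indicates success (illustrative only):

          # HELP mysql_up Whether the MySQL server is up.
          # TYPE mysql_up gauge
          mysql_up 1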

Collecting Service Data of the CCE Cluster

Add a PodMonitor to configure a collection rule for monitoring the service data of applications deployed in the CCE cluster.

Configuration information:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mysql-exporter
  namespace: default
spec:
  namespaceSelector:
    matchNames:
      - default # Namespace where Exporter is located.
  podMetricsEndpoints:
  - interval: 30s
    path: /metrics
    port: metric-port
  selector:
    matchLabels:
      k8s-app: mysql-exporter
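You can save the PodMonitor configuration to a file and apply it with kubectl. The following is a minimal sketch, assuming the Prometheus add-on in the cluster has installed the PodMonitor CRD and the file is named mysql-podmonitor.yaml:

kubectl apply -f mysql-podmonitor.yaml
kubectl get podmonitor mysql-exporter -n default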

In this example, metrics are collected every 30s. Therefore, you can check the reported metrics on the AOM page about 30s later.

Verifying that Metrics Can Be Reported to AOM

  1. Log in to the AOM 2.0 console.
  2. In the navigation pane on the left, choose Prometheus Monitoring > Instances.
  3. Click the Prometheus instance connected to the CCE cluster. The instance details page is displayed.
  4. On the Metrics tab page of the Metric Management page, select your target cluster.
  5. Select job {namespace}/mysql-exporter to query custom metrics starting with mysql_.
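    For example, you can query the Exporter's own status metric on the metric query page. The metric name below is a standard mysqld_exporter metric; the job label value follows the {namespace}/{PodMonitor name} format used in this example:

    mysql_up{job="default/mysql-exporter"}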

Setting a Dashboard and Alarm Rule on AOM

By setting a dashboard, you can monitor CCE cluster data on the same screen. By setting an alarm rule, you can detect cluster faults and receive warnings in a timely manner. Sample PromQL expressions for a graph and an alarm rule are provided after the following list.

  • Setting a dashboard
    1. Log in to the AOM 2.0 console.
    2. In the navigation pane, choose Dashboard. On the displayed page, click Add Dashboard to add a dashboard. For details, see Creating a Dashboard.
    3. On the Dashboard page, select a Prometheus instance for CCE and click Add Graph. For details, see Adding a Graph to a Dashboard.
  • Setting an alarm rule
    1. Log in to the AOM 2.0 console.
    2. In the navigation pane, choose Alarm Management > Alarm Rules.
    3. On the Metric/Event Alarm Rules tab page, click Create to create an alarm rule. For details, see Creating a Metric Alarm Rule.
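The following PromQL expressions are examples only, based on standard mysqld_exporter metric names; adjust the metrics and thresholds to your own service requirements:

# Dashboard graph: MySQL query rate (QPS) over the last 5 minutes
rate(mysql_global_status_queries[5m])

# Alarm rule: trigger when the MySQL instance cannot be reached by the Exporter
mysql_up == 0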