Connecting MongoDB Exporter to AOM for Monitoring Metrics
Scenario
When using MongoDB, you need to monitor its running status and locate faults in a timely manner. In the CCE container scenario, the Prometheus monitoring function uses an Exporter to monitor MongoDB. This section describes how to deploy MongoDB Exporter and interconnect it with alarm monitoring.
Constraints
You are advised to use CCE for unified Exporter management.
Prerequisites
- A CCE cluster has been created and MongoDB has been installed.
- Your service has been connected for Prometheus monitoring and a CCE cluster has also been connected. For details, see Prometheus Instance for CCE.
- You have uploaded the mongodb_exporter image to SoftWare Repository for Container (SWR). For details, see Uploading an Image Through a Container Engine Client.
Deploying MongoDB Exporter in a CCE Cluster
- Log in to the CCE console.
- Click the connected cluster. The cluster management page is displayed.
- Perform the following operations to deploy Exporter:
- Configure a secret.
In the navigation pane, choose ConfigMaps and Secrets. Then click Create from YAML in the upper right corner of the page. For details about how to configure a secret, see Creating a Secret.
YAML configuration example (the connection URI, which contains the password, is stored in a secret of the Opaque type):
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret-test
  namespace: default
type: Opaque
stringData:
  datasource: "mongodb://{user}:{passwd}@{host1}:{port1},{host2}:{port2},{host3}:{port3}/admin"  # Corresponding URI.
- Deploy MongoDB Exporter.
In the navigation pane, choose Workloads. In the upper right corner, click Create Workload. Then select the Deployment workload and select a desired namespace to deploy MongoDB Exporter.
The following shows the YAML used to deploy Exporter. For parameters, see mongodb_exporter.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: mongodb-exporter   # Change the value based on service requirements. You are advised to add the MongoDB instance information.
  name: mongodb-exporter        # Change the name based on service requirements. You are advised to add the MongoDB instance information.
  namespace: default            # Must be the same as the namespace of MongoDB.
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mongodb-exporter # Change the value based on service requirements. You are advised to add the MongoDB instance information.
  template:
    metadata:
      labels:
        k8s-app: mongodb-exporter # Change the value based on service requirements. You are advised to add the MongoDB instance information.
    spec:
      containers:
      - args:
        - --collect.database      # Enable collection of database metrics.
        - --collect.collection    # Enable collection of collection metrics.
        - --collect.topmetrics    # Enable collection of top metrics.
        - --collect.indexusage    # Enable collection of index usage statistics.
        - --collect.connpoolstats # Enable collection of MongoDB connection pool statistics.
        env:
        - name: MONGODB_URI
          valueFrom:
            secretKeyRef:
              name: mongodb-secret-test
              key: datasource
        image: swr.cn-north-4.myhuaweicloud.com/mall-swarm-demo/mongodb-exporter:0.10.0
        imagePullPolicy: IfNotPresent
        name: mongodb-exporter
        ports:
        - containerPort: 9216
          name: metric-port       # Required when you configure a collection task.
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: default-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-exporter
spec:
  type: NodePort
  selector:
    k8s-app: mongodb-exporter
  ports:
  - protocol: TCP
    nodePort: 30003
    port: 9216
    targetPort: 9216
- Check whether MongoDB Exporter is successfully deployed.
- On the Deployments tab page, click the Deployment created in 3.b. In the pod list, choose More > View Logs in the Operation column. If the logs show that the Exporter has started and its access address is exposed, the deployment is successful.
- Check whether MongoDB Exporter is successfully deployed using one of the following methods. If metric data is returned, the deployment is successful:
- Log in to a cluster node and run either of the following commands:
curl http://{Cluster IP address}:9216/metrics
curl http://{Private IP address of any node in the cluster}:30003/metrics
- Access http://{Public IP address of any node in the cluster}:30003/metrics.
Figure 1 Access address
- In the instance list, choose More > Remote Login in the Operation column and run the following command:
curl http://localhost:9216/metrics
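Optionally, if you manage resources with kubectl instead of the console, the following sketch applies the preceding secret and workload configurations and checks the result. The file names are examples only, and the commands assume kubectl is already configured for the target cluster.
# Save the secret and workload YAML shown above to local files (example names), then apply them.
kubectl apply -f mongodb-secret-test.yaml
kubectl apply -f mongodb-exporter.yaml
# Check that the Exporter pod is running and that the Service exposes port 9216 (NodePort 30003).
kubectl get pods -n default -l k8s-app=mongodb-exporter
kubectl get svc mongodb-exporter -n default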
Configuring a CCE Cluster Metric Collection Rule
Add a PodMonitor to configure the Prometheus collection rule that monitors the service data of applications deployed in the CCE cluster.
- Log in to the AOM 2.0 console.
- In the navigation pane on the left, choose Prometheus Monitoring > Instances.
- In the instance list, click a Prometheus instance for CCE.
- In the navigation pane on the left, choose Metric Management. On the Settings tab page, click PodMonitor.
- Click Add PodMonitor. In the displayed dialog box, set parameters and click OK.
In the following example, metrics are collected every 30s. Therefore, you can check the reported metrics on the AOM page about 30s later.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mongodb-exporter
  namespace: default
spec:
  namespaceSelector:    # Select the namespace where the target Exporter pod is located.
    matchNames:
    - default           # Namespace where Exporter is located.
  podMetricsEndpoints:
  - interval: 30s       # Set the metric collection period.
    path: /metrics      # Enter the path corresponding to Prometheus Exporter. Default: /metrics.
    port: metric-port   # Enter the name of ports in the YAML file corresponding to Prometheus Exporter.
  selector:             # Enter the label of the target Exporter pod.
    matchLabels:
      k8s-app: mongodb-exporter
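If you want to confirm from the command line that the PodMonitor was created and matches the Exporter labels, a minimal check (assuming kubectl access to the cluster and that the PodMonitor CRD is installed by the Prometheus add-on) is as follows:
# List PodMonitors in the namespace where the Exporter runs.
kubectl get podmonitor -n default
# Inspect the selector and endpoint settings of the rule created above.
kubectl describe podmonitor mongodb-exporter -n default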
Verifying that CCE Cluster Metrics Can Be Reported to AOM
- Log in to the AOM 2.0 console.
- In the navigation pane on the left, choose Prometheus Monitoring > Instances.
- Click the Prometheus instance connected to the CCE cluster. The instance details page is displayed.
- On the Metrics tab page of the Metric Management page, select your target cluster.
- Select the job {namespace}/mongodb-exporter to query custom metrics starting with mongodb_.
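To see which metric names are actually exposed before querying them in AOM, you can list them directly from the Exporter endpoint. The sketch below reuses the NodePort from the Service above; the exact metric names depend on the mongodb_exporter version.
# List the mongodb_ metric names exposed by the Exporter (NodePort 30003 from the Service above).
curl -s http://{Private IP address of any node in the cluster}:30003/metrics | grep -o '^mongodb_[a-zA-Z0-9_]*' | sort -u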
Setting a Dashboard and Alarm Rule on AOM
By setting a dashboard, you can monitor CCE cluster data on the same screen. By setting an alarm rule, you can detect cluster faults and receive warnings in a timely manner.
- Setting a dashboard
- Log in to the AOM 2.0 console.
- In the navigation pane, choose Dashboard > Dashboard. On the displayed page, click Add Dashboard to add a dashboard. For details, see Creating a Dashboard.
- On the Dashboard page, select a Prometheus instance for CCE and click Add Graph. For details, see Adding a Graph to a Dashboard.
- Setting an alarm rule
- Log in to the AOM 2.0 console.
- In the navigation pane, choose Alarm Center > Alarm Rules.
- On the Prometheus Monitoring tab page, click Create Alarm Rule to create an alarm rule. For details, see Creating a Metric Alarm Rule.
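When creating the alarm rule, you enter a PromQL expression. The following expressions are sketches only: mongodb_up and mongodb_connections are typical names exposed by mongodb_exporter 0.10.0, and the threshold is an example value. Confirm the exact metric names against the list reported for your instance before using them.
# Alarm when the Exporter cannot reach the MongoDB instance.
mongodb_up == 0
# Alarm when the number of current connections exceeds a threshold (example value).
mongodb_connections{state="current"} > 1000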