Connecting Kafka Exporter
Application Scenario
When using Kafka, you need to monitor its status, for example, checking the cluster health and whether messages are accumulating. The Prometheus monitoring function monitors Kafka in the CCE container scenario using Exporter. This section describes how to deploy Kafka Exporter and implement alarm access.
You are advised to use CCE for unified Exporter management.
Prerequisites
- A CCE cluster has been created and Kafka has been installed.
- Your service has been connected for Prometheus monitoring and a CCE cluster has also been connected. For details, see Prometheus Instance for CCE.
- You have uploaded the kafka_exporter image to SoftWare Repository for Container (SWR). For details, see Uploading an Image Through a Container Engine Client.
Deploying Kafka Exporter
- Log in to the CCE console.
- Click the connected cluster. The cluster management page is displayed.
- Perform the following operations to deploy Exporter:
- Deploy Kafka Exporter.
In the navigation pane, choose Workloads. In the upper right corner, click Create Workload. Then select the Deployment workload and select a desired namespace to deploy Kafka Exporter. YAML configuration example for deploying Exporter:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kafka-exporter # Change the name based on service requirements. You are advised to add the Kafka instance information, for example, ckafka-2vrgx9fd-kafka-exporter.
  name: kafka-exporter # Change the name based on service requirements. You are advised to add the Kafka instance information, for example, ckafka-2vrgx9fd-kafka-exporter.
  namespace: default # Namespace of an existing cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kafka-exporter # Change the name based on service requirements. You are advised to add the Kafka instance information, for example, ckafka-2vrgx9fd-kafka-exporter.
  template:
    metadata:
      labels:
        k8s-app: kafka-exporter # Change the name based on service requirements. You are advised to add the Kafka instance information, for example, ckafka-2vrgx9fd-kafka-exporter.
    spec:
      containers:
      - args:
        - --kafka.server=120.46.215.4:30092 # Address of the Kafka instance
        image: swr.cn-north-4.myhuaweicloud.com/mall-swarm-demo/kafka-exporter:latest
        imagePullPolicy: IfNotPresent
        name: kafka-exporter
        ports:
        - containerPort: 9308
          name: metric-port # Required when you configure a collection task
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: default-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-exporter
spec:
  type: NodePort
  selector:
    k8s-app: kafka-exporter
  ports:
  - protocol: TCP
    nodePort: 30091
    port: 9308
    targetPort: 9308
For more details about Exporter parameters, see kafka-exporter.
- Check whether Kafka Exporter is successfully deployed.
- On the Deployments tab page, click the Deployment created in 3.a. In the pod list, choose More > View Logs in the Operation column. Check the logs to confirm that Exporter has started successfully and is exposing its access address.
- Perform verification using one of the following methods:
- Log in to a cluster node and run either of the following commands:
curl http://{Cluster IP address}:9308/metrics
curl http://{Private IP address of any node in the cluster}:30091/metrics
- In the instance list, choose More > Remote Login in the Operation column and run the following command:
curl http://localhost:9308/metrics
- Access http://{Public IP address of any node in the cluster}:30091/metrics.
Figure 1 Accessing a cluster node
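As an optional sanity check (not part of the official procedure), you can verify that the text returned by the curl commands above is valid Prometheus exposition output containing kafka_-prefixed metric families. The sample text and metric names in the following sketch are illustrative assumptions based on the open-source kafka_exporter:

```python
# Minimal sketch: extract metric family names from Prometheus exposition text.
# SAMPLE is an illustrative stand-in for the body returned by /metrics.
SAMPLE = """\
# HELP kafka_brokers Number of Brokers in the Kafka Cluster.
# TYPE kafka_brokers gauge
kafka_brokers 3
# HELP kafka_topic_partitions Number of partitions for this Topic
# TYPE kafka_topic_partitions gauge
kafka_topic_partitions{topic="orders"} 6
"""

def metric_families(text):
    """Return the set of metric family names found in exposition text."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # The metric name ends at the first '{' (labels) or whitespace (value).
        names.add(line.split("{")[0].split()[0])
    return names

print(sorted(metric_families(SAMPLE)))
# A healthy kafka_exporter response should contain kafka_-prefixed families.
```

If the set is empty or contains no kafka_ names, Exporter is reachable but not scraping the Kafka instance correctly (for example, a wrong --kafka.server address).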
Collecting Service Data of the CCE Cluster
Add PodMonitor to configure a collection rule for monitoring the service data of applications deployed in the CCE cluster.
In the following example, metrics are collected every 30s. Therefore, you can check the reported metrics on the AOM page about 30s later.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kafka-exporter
  namespace: default
spec:
  namespaceSelector:
    matchNames:
    - default # Namespace where Exporter is located
  podMetricsEndpoints:
  - interval: 30s
    path: /metrics
    port: metric-port
  selector:
    matchLabels:
      k8s-app: kafka-exporter
Verifying that Metrics Can Be Reported to AOM
- Log in to the AOM 2.0 console.
- In the navigation pane on the left, choose Prometheus Monitoring > Instances.
- Click the Prometheus instance connected to the CCE cluster. The instance details page is displayed.
- On the Metrics tab page of the Metric Management page, select your target cluster.
- Select job {namespace}/kafka-exporter to query custom metrics whose names start with kafka_.
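For example, assuming the standard metric set exposed by the open-source kafka_exporter (metric names may vary by version), you can run PromQL queries such as the following to confirm that data is being reported. The consumer group name my-group is a placeholder:

```promql
# Number of brokers reported by the exporter
kafka_brokers

# Per-topic message backlog for a given consumer group
sum by (topic) (kafka_consumergroup_lag{consumergroup="my-group"})
```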
Setting a Dashboard and Alarm Rule on AOM
By setting a dashboard, you can monitor CCE cluster data on the same screen. By setting an alarm rule, you can detect cluster faults and receive warnings in a timely manner.
- Setting a dashboard
- Log in to the AOM 2.0 console.
- In the navigation pane, choose Dashboard. On the displayed page, click Add Dashboard to add a dashboard. For details, see Creating a Dashboard.
- On the Dashboard page, select a Prometheus instance for CCE and click Add Graph. For details, see Adding a Graph to a Dashboard.
- Setting an alarm rule
- Log in to the AOM 2.0 console.
- In the navigation pane, choose Alarm Management > Alarm Rules.
- On the Metric/Event Alarm Rules tab page, click Create to create an alarm rule. For details, see Creating a Metric Alarm Rule.
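As an illustrative example of an alarm condition for message accumulation, a PromQL expression like the following could back the rule. The metric name assumes the open-source kafka_exporter, and the group name and threshold are placeholders to adjust to your service:

```promql
# Fire when the total lag of a consumer group exceeds 1000 messages
sum(kafka_consumergroup_lag{consumergroup="my-group"}) > 1000
```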