ALM-38008 Abnormal Kafka Data Directory Status
Description
The system checks the Kafka data directory status every 60 seconds. This alarm is generated when the system detects that the status of a data directory is abnormal.
Trigger Count is set to 1. This alarm is cleared when the data directory status becomes normal.
Attribute
Alarm ID | Alarm Severity | Automatically Cleared
---|---|---
38008 | Major | Yes
Parameters
Name | Meaning
---|---
Source | Specifies the cluster for which the alarm is generated.
ServiceName | Specifies the service for which the alarm is generated.
RoleName | Specifies the role for which the alarm is generated.
HostName | Specifies the host for which the alarm is generated.
DirName | Specifies the data directory for which the alarm is generated.
Trigger Condition | The status of the Kafka data directory is abnormal.
Impact on the System
If the status of a Kafka data directory is abnormal, the replicas of all partitions stored in that directory are taken offline. If the data directories of multiple nodes are abnormal at the same time, some partitions may become unavailable.
Possible Causes
- The data directory permission is tampered with.
- The disk where the data directory is located is faulty.
Procedure
Check the permission on the faulty data directory.
- Locate the host name in the alarm details and log in to that host.
- Check whether the data directory reported in the alarm and all of its subdirectories are owned by user omm and group wheel (omm:wheel).
- If the ownership is incorrect, restore the owner and group of the data directory and its subdirectories to omm:wheel.
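The ownership check and repair described above can be sketched as a small shell snippet. The data directory path in the usage comment is a placeholder; substitute the DirName value reported in the alarm.

```shell
# fix_ownership DIR USER GROUP:
# report entries under DIR whose owner or group differs from USER:GROUP,
# then restore the expected ownership recursively.
fix_ownership() {
  dir=$1 user=$2 group=$3
  # List offending entries first so the operator can review them.
  find "$dir" \( ! -user "$user" -o ! -group "$group" \) -exec ls -ld {} \;
  # Restore the expected ownership recursively (requires root).
  chown -R "$user:$group" "$dir"
}

# For this alarm the expected ownership is omm:wheel; the path below is a
# hypothetical example of the alarm's DirName parameter:
# fix_ownership /srv/BigData/kafka/data1 omm wheel
```

Running the `find` step alone first is a safe way to confirm what was tampered with before changing anything.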
Check whether the disk where the data directory is located is faulty.
- In the upper-level directory of the data directory, create and delete files as user omm. Check whether data read/write on the disk is normal.
- If data read/write is abnormal, replace or repair the disk where the data directory is located, and then confirm that data read/write on the disk is normal.
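The read/write check above can be scripted as a minimal probe. Run it as user omm; the path in the usage comment is a hypothetical example of the data directory's parent, not a fixed installation path.

```shell
# disk_probe DIR: create a small file in DIR, read it back, and delete it.
# Prints OK when the disk accepts both reads and writes, FAILED otherwise.
disk_probe() {
  dir=$1
  probe="$dir/.disk_probe_$$"
  if echo disk-check 2>/dev/null > "$probe" \
     && [ "$(cat "$probe" 2>/dev/null)" = "disk-check" ] \
     && rm -f "$probe"; then
    echo OK
  else
    echo FAILED
  fi
}

# Hypothetical usage, as user omm, against the parent of the faulty directory:
# disk_probe /srv/BigData/kafka
```

A FAILED result here (while ownership is correct) points at the disk itself rather than at tampered permissions.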
- On the FusionInsight Manager home page, choose Cluster > Services > Kafka > Instance. On the Kafka instance page that is displayed, restart the Broker instance on the faulty host.
If a topic has only a single replica and that replica resides on the Broker node being restarted, the Kafka service for that topic is interrupted during the restart. Otherwise, the Kafka service is not affected.
- After the Broker instance is started, check whether the alarm is cleared.
  - If yes, no further action is required.
  - If no, collect fault information.
Collect fault information.
- On FusionInsight Manager, choose O&M > Log > Download.
- In the Service area, select Kafka in the required cluster.
- In the upper right corner, set Start Date and End Date for log collection to 10 minutes before and 10 minutes after the alarm generation time, respectively. Then, click Download.
- Contact the O&M personnel and send the collected logs.
Alarm Clearing
After the fault is rectified, the system automatically clears this alarm.
Related Information
None