ALM-24005 Exception Occurs When Flume Transmits Data
Alarm Description
The alarm module monitors the capacity status of the Flume channel. This alarm is generated when the duration for which the channel remains fully occupied, or the number of times the source fails to send data to the channel, exceeds the threshold.
The default threshold is 10. You can change it by modifying the channelfullcount parameter of the related channel in the properties.properties configuration file in the conf directory (see the configuration sketch below).
The alarm is cleared when space in the Flume channel is released and the fault handling is complete.
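For reference, a channel definition in properties.properties might look like the following. This is a minimal sketch: the agent name (client) and channel name (ch1) are illustrative, and only channelfullcount is the parameter described above.

```
# Hypothetical agent "client" with a memory channel "ch1"; adjust names to your deployment.
client.channels = ch1
client.channels.ch1.type = memory
client.channels.ch1.capacity = 10000
# Number of channel-full or send-failure occurrences before ALM-24005 is generated (default 10).
client.channels.ch1.channelfullcount = 10
```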
Alarm Attributes
| Alarm ID | Alarm Severity | Auto Cleared |
| --- | --- | --- |
| 24005 | Major | Yes |
Alarm Parameters
| Parameter | Description |
| --- | --- |
| Source | Specifies the cluster for which the alarm was generated. |
| ServiceName | Specifies the service for which the alarm was generated. |
| HostName | Specifies the host for which the alarm was generated. |
| AgentId | Specifies the ID of the agent for which the alarm was generated. |
| ComponentType | Specifies the type of the component for which the alarm was generated. |
| ComponentName | Specifies the name of the component for which the alarm was generated. |
Impact on the System
If the disk usage of the Flume channel keeps increasing, the time required to import data to the specified destination grows longer. When the disk usage of the Flume channel reaches 100%, the Flume agent process pauses.
Possible Causes
- Flume Sink is faulty, so the data cannot be sent.
- The network is faulty, so the data cannot be sent.
Handling Procedure
Check whether Flume Sink is faulty.

1. Open the properties.properties configuration file on the local PC, search for type = hdfs, and check whether the Flume sink type is HDFS (the sink definitions these searches match are sketched after step 8).
2. On FusionInsight Manager, check whether the HDFS Service Unavailable alarm is present in the alarm list and whether the HDFS service is stopped in the service list.
3. Open the properties.properties configuration file on the local PC, search for type = hbase, and check whether the Flume sink type is HBase.
4. On FusionInsight Manager, check whether the HBase Service Unavailable alarm is present in the alarm list and whether the HBase service is stopped in the service list.
5. Open the properties.properties configuration file on the local PC, search for org.apache.flume.sink.kafka.KafkaSink, and check whether the Flume sink type is Kafka.
6. On FusionInsight Manager, check whether the Kafka Service Unavailable alarm is present in the alarm list and whether the Kafka service is stopped in the service list.
7. On FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Flume > Instance.
8. Go to the Flume instance page of the faulty node and check whether the Sink Speed Metrics indicator is 0.
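For reference, the strings that steps 1, 3, and 5 search for come from sink definitions such as the following. This is a minimal sketch: the agent and sink names are illustrative, and the properties other than type are typical Flume sink settings, not values prescribed by this alarm.

```
# Hypothetical sink definitions; the "type" values are what steps 1, 3, and 5 search for.
client.sinks.sink1.type = hdfs
client.sinks.sink1.hdfs.path = hdfs://hacluster/flume/events

client.sinks.sink2.type = hbase
client.sinks.sink2.table = flume_events

client.sinks.sink3.type = org.apache.flume.sink.kafka.KafkaSink
client.sinks.sink3.kafka.topic = flume_events
```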
Check the network connection between the faulty node and the node corresponding to the Flume sink IP address.

9. Open the properties.properties configuration file on the local PC, search for type = avro, and check whether the Flume sink type is Avro (see the sketch after step 12).
10. Log in to the faulty node as user root and run the ping IP address of the Flume sink command to check whether the peer host can be pinged successfully.
11. Contact the network administrator to restore the network.
12. In the alarm list, check whether the alarm is cleared after a period of time.
    - If yes, no further action is required.
    - If no, go to 13.
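For reference, an Avro sink definition (step 9) names the peer host whose connectivity step 10 verifies. This is a minimal sketch; the sink name, IP address, and port are illustrative.

```
# Hypothetical Avro sink; "hostname" is the peer host to ping in step 10.
client.sinks.sink1.type = avro
client.sinks.sink1.hostname = 192.168.0.100
client.sinks.sink1.port = 21154
```

With this configuration, step 10 would be, for example, running ping 192.168.0.100 as user root on the faulty node.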
Collect the fault information.

13. On FusionInsight Manager, choose O&M. In the navigation pane on the left, choose Log > Download.
14. Expand the Service drop-down list and select Flume for the target cluster.
15. Click the edit icon in the upper right corner, set Start Date and End Date for log collection to 1 hour before and 1 hour after the alarm generation time, respectively, and click Download.
16. Contact O&M personnel and provide the collected logs.
Alarm Clearance
This alarm is automatically cleared after the fault is rectified.
Related Information
None