Logs Cannot Be Written In Due to High CPU Usage of Elasticsearch
Symptom
The CPU usage of the Elasticsearch cluster is high, Logstash reports the error message Elasticsearch Unreachable, and logs cannot be written to Elasticsearch.
Possible Causes
The indexes have only one shard each, so all writes to an index are handled by a single node, which can easily become overloaded. When that node's write queue is full, subsequent write requests are rejected.
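One way to confirm this cause is to query the Elasticsearch REST API directly. The following is a sketch using the standard _cat endpoints; the index name my-index is a placeholder:

```
# Primary and replica shard counts per index
# (a single primary shard means one node handles all writes to the index)
GET _cat/indices/my-index?v&h=index,pri,rep

# Write thread pool: queued and rejected requests on each node
GET _cat/thread_pool/write?v&h=node_name,active,queue,rejected
```

A steadily growing rejected count in the write thread pool indicates that write requests are being dropped because the queue is full.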
Procedure
- Log in to the CSS management console.
- In the navigation pane on the left, choose Clusters > Elasticsearch.
- In the cluster list, find the target cluster, and choose More > Cerebro in the Operation column. Log in to Cerebro.
- In Cerebro, check the number of shards in the cluster and metrics such as the CPU usage, load, heap usage, and disk usage of each node.
- Analyze the possible causes based on metrics and tune your system accordingly.
- Increase the write queue size to reduce rejected write requests by changing the value of write.queue_size.
- Click the name of the target cluster. The cluster information page is displayed.
- Choose Cluster Settings > Parameter Settings.
- Expand Custom, find the write.queue_size parameter, and increase its value.
If this parameter does not exist, add it under Custom.
For more information, see Parameter Settings.
- Rebuild the indexes so that the number of shards is greater than the number of nodes in the cluster, distributing the write load across nodes.
- If the number of shards and the queue size are appropriate but the CPU usage and load are still high, you are advised to scale out the cluster.
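Because the number of primary shards cannot be changed on an existing index, rebuilding means creating a new index and copying the data into it with the _reindex API. The following is a sketch; the index names and the shard count of 6 are placeholders to be adapted to your cluster:

```
# Create a new index with more primary shards than there are nodes
PUT my-index-v2
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}

# Copy documents from the old index into the new one
POST _reindex
{
  "source": { "index": "my-index" },
  "dest": { "index": "my-index-v2" }
}
```

After reindexing completes, point writers (for example, the Logstash output) at the new index, or use an alias so the switch is transparent to clients.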