What Can I Do If the Heap Memory of an EsNode Instance Overflows During the Running of Elasticsearch?
Symptom
The EsNode1 instance on a node cannot be accessed, and data write and query operations on the instance fail.
Cause Analysis
- An "OutOfMemoryError" is reported in the error log. Analysis of the heap dump shows that more than 70 threads each hold a string array longer than 99,840 elements, and each thread occupies more than 285 MB of memory. Further analysis shows that all the data was imported in a single batch, occupying more than 19 GB of memory. Cached data accounted for 79% of memory usage, and a large number of tasks were submitted concurrently. As a result, the instance's heap memory overflowed.
- Confirmation with the ISV shows that the best import performance is achieved when the data volume (about 500 thousand data records) of each bulk request ranges from 5 MB to 15 MB. According to the ISV, each data record is about 2 KB, so the recommended number of records per bulk request is 2,000.
Solution
Restart the EsNode1 instance, and modify the import logic to set the number of data records in each bulk request to 2,000, as recommended by the ISV, to reduce memory consumption. This resolves the OOM problem.
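The batching described above can be sketched as follows. This is a minimal, hedged illustration using only the Python standard library: it splits a record stream into batches of 2,000 and formats each batch as an NDJSON payload for the Elasticsearch `_bulk` API. The index name `demo_index`, the sample records, and the helper names are hypothetical, not from the original article; the actual HTTP call to the cluster is noted in a comment rather than implemented.

```python
import json

BULK_SIZE = 2000  # recommended records per bulk request (per the ISV)

def to_bulk_batches(records, index_name, batch_size=BULK_SIZE):
    """Split records into bulk-sized NDJSON payloads for the _bulk API."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield _ndjson(batch, index_name)
            batch = []
    if batch:  # flush the final, possibly smaller, batch
        yield _ndjson(batch, index_name)

def _ndjson(batch, index_name):
    """Build the action/document line pairs the _bulk endpoint expects."""
    lines = []
    for rec in batch:
        lines.append(json.dumps({"index": {"_index": index_name}}))
        lines.append(json.dumps(rec))
    return "\n".join(lines) + "\n"  # _bulk payloads must end with a newline

# Hypothetical sample data: 5,000 records of roughly uniform size.
# Each payload would then be POSTed to the cluster's /_bulk endpoint
# (e.g. with an HTTP client), instead of sending all records at once.
records = [{"id": i, "value": "x" * 100} for i in range(5000)]
payloads = list(to_bulk_batches(records, "demo_index"))
print(len(payloads))  # → 3 (batches of 2000, 2000, and 1000 records)
```

Capping each request at 2,000 records keeps every in-flight payload in the 5 MB to 15 MB range the ISV recommends, so no single import holds gigabytes of data in the EsNode heap at once.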