"Could not write all entries" Is Reported When I Use ES-Hadoop to Import Data
Updated on 2024-08-27 GMT+08:00
Issue Analysis
The bulk thread pool queue on the Elasticsearch server holds a maximum of 200 requests by default. Requests that arrive when the queue is full are rejected, and ES-Hadoop reports the rejected entries with the "Could not write all entries" error.
Solution
- Set an appropriate number of concurrent write requests on the client. ES-Hadoop retries HTTP requests that Elasticsearch rejects. You can tune the retry behavior with the following parameters:
- es.batch.write.retry.count: maximum number of retries for a rejected bulk request. The default value is 3.
- es.batch.write.retry.wait: time to wait before each retry. The default value is 10s.
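The retry mechanism described above can be sketched as follows. This is a hypothetical Python simulation for illustration only, not ES-Hadoop's actual implementation; the parameter names mirror es.batch.write.retry.count and es.batch.write.retry.wait, and send_bulk is an assumed stand-in for the HTTP bulk call.

```python
import time

def write_with_retry(send_bulk, docs, retry_count=3, retry_wait=10.0):
    """Resend rejected bulk entries up to retry_count times.

    send_bulk(docs) is assumed to return the list of entries the
    server rejected (empty list means all entries were written).
    """
    attempts = 0
    while True:
        rejected = send_bulk(docs)
        if not rejected:
            return True            # all entries written
        attempts += 1
        if attempts > retry_count:
            # Retries exhausted: this is the point at which ES-Hadoop
            # raises "Could not write all entries".
            return False
        time.sleep(retry_wait)     # back off before resending
        docs = rejected            # only resend the rejected entries
```

Reducing client-side concurrency lowers the chance that the bulk queue fills up, so fewer requests reach this retry path in the first place.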
- If you do not require real-time query, you can increase the shard refresh interval (1s by default) to improve write performance.
PUT /my_logs
{
  "settings": {
    "refresh_interval": "30s"
  }
}
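After the bulk import completes, you can restore the refresh interval so that newly written documents become searchable again promptly. The request below assumes the Elasticsearch default of 1s; adjust the value to match your cluster's normal setting.

PUT /my_logs/_settings
{
  "refresh_interval": "1s"
}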
Parent topic: Data Import and Export