What Should I Do If an Alarm Is Reported Indicating that the Memory Is Insufficient When I Execute a SQL Statement on the ClickHouse Client?
Updated on 2024-08-16 GMT+08:00
Symptom
ClickHouse restricts the memory that a GROUP BY statement can use. When an SQL statement is executed on the ClickHouse client, the following error information is displayed:
Progress: 1.83 billion rows, 85.31 GB (68.80 million rows/s., 3.21 GB/s.) 6%
Received exception from server:
Code: 241. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 1048576 bytes), maximum: 9.31 GiB: (while reading column hits):
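For reference, a wide aggregation such as the following hypothetical query can trigger this error when the GROUP BY state no longer fits within the per-query limit. The database, table, and column names here are illustrative only.
-- Hypothetical aggregation over a large table that can exceed the per-query memory limit.
SELECT URL, count() AS hits_cnt
FROM test.hits
GROUP BY URL
ORDER BY hits_cnt DESC
LIMIT 10;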
Solution
- If the cluster has sufficient memory, run the following command before executing the SQL statement:
SET max_memory_usage = 128000000000; -- about 128 GB
- If sufficient memory is not available, ClickHouse can spill GROUP BY data to disk to free up memory. You are advised to set max_memory_usage to twice the value of max_bytes_before_external_group_by; a session-level sketch with verification follows this list.
SET max_bytes_before_external_group_by = 20000000000; -- about 20 GB
SET max_memory_usage = 40000000000; -- about 40 GB
- If the data volume is large and the entire table needs to be queried, you are advised to query the data by partition (see the partition-level sketch after this list) or to upgrade the specifications of the core nodes in the cluster.
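The following sketch, intended as an example only, applies the spill-to-disk settings in a single client session and then confirms the values in effect by querying the system.settings table. Size the thresholds to the memory actually available on your nodes.
-- Example session: allow GROUP BY to spill to disk at about 20 GB and cap the query at about 40 GB.
SET max_bytes_before_external_group_by = 20000000000;
SET max_memory_usage = 40000000000;
-- Confirm the values in effect for the current session.
SELECT name, value
FROM system.settings
WHERE name IN ('max_memory_usage', 'max_bytes_before_external_group_by');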
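As an illustration of querying by partition, the following sketch assumes a hypothetical table test.hits that is partitioned by an EventDate column; aggregating one partition (or date range) at a time keeps each individual query well below the memory limit.
-- Hypothetical example: aggregate a single partition instead of scanning the whole table.
SELECT UserID, count() AS cnt
FROM test.hits
WHERE EventDate = '2024-08-01'
GROUP BY UserID;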
Parent topic: Client Usage