Updated on 2024-10-09 GMT+08:00
CarbonData Troubleshooting
- Filter Result Is Not Consistent with Hive When a Big Double Type Value Is Used in the Filter
- Query Performance Deteriorated Due to Insufficient Executor Memory
- Data Query or Loading Failed, and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Was Reported
- Why INSERT INTO/LOAD DATA Task Distribution Is Incorrect and the Opened Tasks Are Fewer Than the Available Executors When the Number of Initial Executors Is Zero?
- Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?
Parent topic: Using CarbonData