An Error Occurs When the Split Size Is Changed in a Spark Application
Issue
An error occurs when the split size is changed in a Spark application.
Symptom
To make a Spark application run faster with more mappers, a user attempts to change the maximum split size. However, an error occurs when the user runs the set command to modify this Hive configuration item.
0: jdbc:hive2://192.168.1.18:21066/> set mapred.max.split.size=1000000; Error: Error while processing statement: Cannot modify mapred.max.split.size at runtime. It is not in the list of params that are allowed to be modified at runtime (state=42000,code=1)
Cause Analysis
- In security mode, when the whitelist is enabled using the hive.security.whitelist.switch parameter, only parameters listed in hive.security.authorization.sqlstd.confwhitelist can be modified at runtime.
- The default whitelist does not contain the mapred.max.split.size parameter. Therefore, the system displays a message indicating that the maximum split size cannot be changed.
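Conceptually, the whitelist is a set of parameter-name patterns, and a SET command is rejected when the parameter does not match any of them. The following is a minimal illustrative sketch of this kind of check (the pattern lists and function names here are assumptions for illustration, not Hive's actual implementation):

```python
import re

def build_whitelist(base_patterns, append_patterns):
    # Combine the base whitelist patterns with the appended ones
    # into a single regex alternation.
    return re.compile("|".join(base_patterns + append_patterns))

def can_modify_at_runtime(whitelist, key):
    # The whole parameter name must match one of the patterns.
    return whitelist.fullmatch(key) is not None

# Illustrative default patterns; the real default whitelist is much longer.
default_patterns = [r"hive\.exec\..*", r"mapreduce\.job\..*"]
# The parameter appended by the procedure below.
appended_patterns = [r"mapred\.max\.split\.size"]

wl = build_whitelist(default_patterns, appended_patterns)
print(can_modify_at_runtime(wl, "mapred.max.split.size"))  # True: appended
print(can_modify_at_runtime(wl, "mapred.min.split.size"))  # False: not whitelisted
```

With only the default patterns, mapred.max.split.size would not match, which is why the SET command fails until the parameter is appended to the whitelist.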
Procedure
- Log in to FusionInsight Manager and choose Services > Hive > Configurations > All Configurations.
- Search for hive.security.authorization.sqlstd.confwhitelist.append and add mapred.max.split.size to its value.
- Save the configuration and restart the Hive component.
- Run the set mapred.max.split.size=1000000; command again. If no error is reported, the modification has taken effect.
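After the restart, the verification step looks like the following session in the Hive Beeline client (the JDBC address is taken from the symptom above; the exact prompt may differ in your environment):

```
0: jdbc:hive2://192.168.1.18:21066/> set mapred.max.split.size=1000000;
No rows affected
0: jdbc:hive2://192.168.1.18:21066/> set mapred.max.split.size;
+----------------------------------+
| mapred.max.split.size=1000000    |
+----------------------------------+
```

Querying the parameter after setting it confirms that the new value is in effect for the session.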