Updated on 2022-12-08 GMT+08:00

An Error Occurs When the Split Size Is Changed in a Spark Application

Issue

An error occurs when the split size is changed in a Spark application.

Symptom

A user wants to increase the number of mappers by reducing the maximum split size so that the Spark application runs faster. However, an error occurs when the user runs a set command (for example, set mapred.max.split.size=1000000) to modify the Hive configuration at runtime.

0: jdbc:hive2://192.168.1.18:21066/> set mapred.max.split.size=1000000;
Error: Error while processing statement: Cannot modify mapred.max.split.size at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)

Cause Analysis

  • In security mode, the hive.security.whitelist.switch parameter enables or disables the configuration whitelist. When the whitelist is enabled, only parameters listed in hive.security.authorization.sqlstd.confwhitelist can be modified at runtime.

  • The default whitelist does not contain the mapred.max.split.size parameter. Therefore, the system displays a message indicating that the maximum split size cannot be changed.
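As an illustration, the whitelist can be extended through the append property referenced in the procedure below. The following hive-site.xml fragment is a hedged sketch: the property name comes from this document, while the value format (a pipe-separated list of parameter-name regular expressions, with dots escaped) is an assumption based on standard Hive behavior and may differ in your MRS version.

```xml
<!-- Sketch: append mapred.max.split.size to the runtime-modifiable whitelist.
     Value format (regex list separated by "|") is assumed, not confirmed by this guide. -->
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>mapred\.max\.split\.size</value>
</property>
```

In MRS, this change is normally made on the Hive service configuration page rather than by editing hive-site.xml directly, as described in the procedure below.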

Procedure

  1. Search for hive.security.authorization.sqlstd.confwhitelist.append on the Hive service configuration page, and add mapred.max.split.size to its value. For details, see Component Operation Guide > Using Hive > Using Hive from Scratch.
  2. Save the configuration and restart the Hive component.
  3. Run the set mapred.max.split.size=1000000; command. If no error occurs, the modification is successful.
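The verification step above can be sketched as the following Beeline session. The connection string is taken from the symptom output; reading a parameter back with a bare set statement is standard Hive behavior, shown here as a minimal check rather than a required step.

```sql
-- Reconnect via Beeline after the Hive restart, then:
set mapred.max.split.size=1000000;   -- should now succeed without the whitelist error
set mapred.max.split.size;           -- echoes the parameter so you can confirm the new value
```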