Optimizing Small Files
Scenario
A Spark SQL table may contain many small files (far smaller than an HDFS block), each of which maps to one Spark partition by default, and therefore to one task. Spark then has to launch a large number of tasks, and if the SQL logic involves a shuffle operation, the number of hash buckets soars, severely degrading system performance.
When a table contains a massive number of small files, DataSource splits the small files into PartitionedFiles while creating the RDD and then merges multiple PartitionedFiles into a single partition, so that the shuffle operation does not generate too many hash buckets (see Figure 1).
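Conceptually, the merge charges each file its size plus a configured open cost and packs files into partitions up to a target split size derived from spark.sql.files.maxPartitionBytes and the available parallelism. The standalone Scala sketch below illustrates this heuristic; the object and method names, the greedy packing loop, and the sample numbers are illustrative rather than Spark's actual internal code.

```scala
// A minimal sketch of the size heuristic behind the small file merge, written
// against the two parameters described in this section. Object and method
// names are illustrative; this is not Spark's internal API.
object SmallFilePacking {

  // Target size of one read partition: each file is charged its length plus
  // the configured open cost, and the result is capped by maxPartitionBytes.
  def maxSplitBytes(fileSizes: Seq[Long],
                    maxPartitionBytes: Long,
                    openCostInBytes: Long,
                    defaultParallelism: Int): Long = {
    val totalBytes = fileSizes.map(_ + openCostInBytes).sum
    val bytesPerCore = totalBytes / defaultParallelism
    math.min(maxPartitionBytes, math.max(openCostInBytes, bytesPerCore))
  }

  // Greedily pack files into partitions of roughly targetSize bytes each.
  def packFiles(fileSizes: Seq[Long],
                targetSize: Long,
                openCostInBytes: Long): Seq[Seq[Long]] = {
    val partitions = scala.collection.mutable.ArrayBuffer(
      scala.collection.mutable.ArrayBuffer.empty[Long])
    var currentSize = 0L
    fileSizes.sorted(Ordering[Long].reverse).foreach { size =>
      if (partitions.last.nonEmpty && currentSize + size + openCostInBytes > targetSize) {
        partitions += scala.collection.mutable.ArrayBuffer.empty[Long]
        currentSize = 0L
      }
      partitions.last += size
      currentSize += size + openCostInBytes
    }
    partitions.map(_.toSeq).toSeq
  }

  def main(args: Array[String]): Unit = {
    // 1,000 files of 1 MB each: one partition per file would mean 1,000 tasks.
    val files = Seq.fill(1000)(1L << 20)
    val target = maxSplitBytes(files,
      maxPartitionBytes = 128L << 20,   // spark.sql.files.maxPartitionBytes
      openCostInBytes = 4L << 20,       // spark.files.openCostInBytes
      defaultParallelism = 8)
    val packed = packFiles(files, target, openCostInBytes = 4L << 20)
    println(s"target split size: $target bytes, merged partitions: ${packed.size}")
  }
}
```

With the default values listed under Procedure (128 MB target, 4 MB open cost), this sketch merges the 1,000 one-megabyte files into roughly 40 read partitions instead of 1,000.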
Procedure
To enable small file optimization, configure the following parameters in the spark-defaults.conf file on the Spark client.
| Parameter | Description | Default Value |
|---|---|---|
| spark.sql.files.maxPartitionBytes | The maximum number of bytes that can be packed into a single partition when reading files. Unit: byte | 134217728 (128 MB) |
| spark.files.openCostInBytes | The estimated cost of opening a file, measured by the number of bytes that could be scanned in the same time. Used when packing multiple files into a partition. It is better to overestimate; partitions with small files will then be faster than partitions with larger files. Unit: byte | 4194304 (4 MB) |
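Before changing the cluster-wide defaults, you can experiment at the session level. The sketch below is illustrative: the application name and input path are placeholders, and it assumes a directory of many small Parquet files. spark.sql.files.maxPartitionBytes can be changed at runtime, while spark.files.openCostInBytes is normally set at launch in spark-defaults.conf as described above.

```scala
import org.apache.spark.sql.SparkSession

object SmallFileReadDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("small-file-packing-demo")
      .getOrCreate()

    // Lower the target partition size to 32 MB for this session only.
    spark.conf.set("spark.sql.files.maxPartitionBytes", (32L << 20).toString)

    // Placeholder path: a directory containing many small Parquet files.
    val df = spark.read.parquet("/user/hive/warehouse/many_small_files")

    // Fewer partitions than input files indicates that small files were merged.
    println(s"Read partitions: ${df.rdd.getNumPartitions}")

    spark.stop()
  }
}
```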