Configurations for Performance Tuning
Scenario
This section describes the configurations that can improve CarbonData performance.
Procedure
Table 1 and Table 2 describe the configurations for CarbonData queries.
| Parameter | spark.sql.shuffle.partitions |
| --- | --- |
| Configuration File | spark-defaults.conf |
| Function | Data query |
| Scenario Description | Number of tasks started for the shuffle process in Spark |
| Tuning | You are advised to set this parameter to one to two times the number of executor cores. In an aggregation scenario, reducing the value from 200 to 32 cut the query time roughly in half. |
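As a sketch, for a cluster with 16 to 32 total executor cores the recommendation above could be applied in spark-defaults.conf as follows (the value 32 is illustrative, not a universal recommendation):

```
# spark-defaults.conf
# Shuffle parallelism set to roughly 1-2x the total executor cores.
spark.sql.shuffle.partitions=32
```

The same parameter can also be changed for a single session at runtime, for example with `SET spark.sql.shuffle.partitions=32;` in spark-sql or Beeline.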
| Parameter | spark.executor.cores, spark.executor.instances, spark.executor.memory |
| --- | --- |
| Configuration File | spark-defaults.conf |
| Function | Data query |
| Scenario Description | Number of executors and vCPUs, and memory size used for CarbonData data query |
| Tuning | In the bank scenario, configuring 4 vCPUs and 15 GB of memory for each executor achieves good performance. Larger values are not always better; configure the two values properly when resources are limited. For example, if each node in the bank scenario has 32 vCPUs and 64 GB of memory, the memory is insufficient: with 4 vCPUs and 12 GB of memory per executor, garbage collection may occur during queries, increasing query time from 3s to more than 15s. In this case, increase the memory or reduce the number of vCPUs. |
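A spark-defaults.conf sketch using the bank-scenario sizing described above (4 vCPUs and 15 GB per executor; the instance count is an assumption and must be sized to your cluster):

```
# spark-defaults.conf
spark.executor.cores=4
spark.executor.memory=15g
# Illustrative value; choose based on available nodes and memory.
spark.executor.instances=10
```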
Table 3, Table 4, and Table 5 describe the configurations for CarbonData data loading.
| Parameter | carbon.number.of.cores.while.loading |
| --- | --- |
| Configuration File | carbon.properties |
| Function | Data loading |
| Scenario Description | Number of vCPUs used for data processing during data loading in CarbonData |
| Tuning | If there are sufficient CPUs, you can increase the number of vCPUs to improve performance. For example, if the value of this parameter is changed from 2 to 4, the CSV reading performance can be doubled. |
| Parameter | carbon.use.local.dir |
| --- | --- |
| Configuration File | carbon.properties |
| Function | Data loading |
| Scenario Description | Whether to use Yarn local directories for multi-disk data loading |
| Tuning | If this parameter is set to true, CarbonData uses Yarn local directories to balance load across disks during multi-table loading, improving data loading performance. |
| Parameter | carbon.use.multiple.temp.dir |
| --- | --- |
| Configuration File | carbon.properties |
| Function | Data loading |
| Scenario Description | Whether to use multiple temporary directories to store temporary sort files |
| Tuning | If this parameter is set to true, multiple temporary directories are used to store temporary sort files during data loading. This configuration improves data loading performance and prevents single points of failure (SPOFs) on disks. |
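The three loading-related settings from Table 3, Table 4, and Table 5 can be combined in carbon.properties, for example:

```
# carbon.properties
# vCPUs used per load; raising this from 2 to 4 can double CSV reading speed.
carbon.number.of.cores.while.loading=4
# Use Yarn local directories to balance load across disks.
carbon.use.local.dir=true
# Spread temporary sort files across multiple directories.
carbon.use.multiple.temp.dir=true
```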
Table 6 and Table 7 describe the configurations for CarbonData data loading and query.
| Parameter | carbon.compaction.level.threshold |
| --- | --- |
| Configuration File | carbon.properties |
| Function | Data loading and query |
| Scenario Description | For minor compaction, specifies the number of segments to be merged in stage 1 and the number of compacted segments to be merged in stage 2. |
| Tuning | Each CarbonData data load creates one segment. If every load is small, many small files accumulate over time, degrading query performance. Configuring this parameter merges the small segments into one large segment, which sorts the data and improves query performance. The compaction policy depends on the actual data size and available resources. For example, a bank loads data once a day, at night, when no queries are performed; if resources are sufficient, the compaction threshold can be set to 6 and 5 (stage 1 and stage 2, respectively). |
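As a sketch, the bank example above (merge 6 segments in stage 1 and 5 compacted segments in stage 2) would be written in carbon.properties as a comma-separated pair, assuming the usual "stage1,stage2" value format for this parameter:

```
# carbon.properties
# Minor compaction: 6 segments merged in stage 1,
# 5 compacted segments merged in stage 2.
carbon.compaction.level.threshold=6,5
```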
| Parameter | carbon.indexserver.enable.prepriming |
| --- | --- |
| Configuration File | carbon.properties |
| Function | Data loading |
| Scenario Description | Whether to enable data pre-loading (pre-priming) when the index cache server is used. Pre-loading can improve the performance of the first query. |
| Tuning | You can set this parameter to true to enable pre-loading. The default value is false. |
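A minimal carbon.properties sketch enabling pre-priming when the index cache server is in use:

```
# carbon.properties
# Default is false; set to true to pre-load index data after loading,
# improving the performance of the first query.
carbon.indexserver.enable.prepriming=true
```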