Configuring Query Fault Tolerance Execution
Scenario
When a node in the cluster becomes faulty due to network, hardware, or software problems, all query tasks running on that node are lost. This seriously reduces cluster productivity and wastes resources, especially for long-running queries. HetuEngine provides a fault recovery mechanism called fault-tolerant execution. With it, the cluster reduces the probability of query failure by automatically re-running affected queries or their component tasks. This reduces manual intervention and improves fault tolerance, but prolongs the total execution time.
Currently, the following fault tolerance execution mechanisms are supported:
- Query-level retry policy: If query-level fault tolerance is enabled, intermediate data is not flushed to disk. If a query job fails, all tasks of the query job are automatically retried. This policy is recommended when most of the cluster's workloads are small queries.
- Task-level retry policy: If task-level fault tolerance is enabled, HDFS is used by default as the buffer area, and exchange intermediate data is flushed to disk. If a query job fails, only the failed tasks are retried. This policy is recommended when you run a large number of queries, because the cluster can efficiently retry fine-grained tasks within a query instead of retrying the entire query.
This example describes how to configure fault-tolerant execution with the task-level retry policy.
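Both policies are selected through the retry-policy parameter listed in Table 1 below. The following sketch shows the two alternatives in property form for orientation only; in this procedure the value is added through the HSConsole custom configuration rather than by editing files directly:

```properties
# Choose exactly one of the following.

# Query-level retry: the whole query is re-run on failure; intermediate data is not flushed to disk.
retry-policy=QUERY

# Task-level retry: only failed tasks are re-run; exchange intermediate data is flushed to disk (HDFS by default).
retry-policy=TASK
```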
Notes
- Fault tolerance does not apply to broken queries or other user errors. For example, no resources are spent retrying query tasks that fail because the SQL statement cannot be parsed.
- Different data sources have different fault tolerance capabilities for SQL statements.
- All data sources support fault-tolerant execution of read operations.
- Hive data sources support fault-tolerant execution of write operations.
- Fault-tolerant execution is best suited to large-scale queries. Running a large number of short, small queries concurrently on a fault-tolerant cluster may introduce latency. Therefore, it is recommended that you process batch workloads on dedicated fault-tolerant compute instances, isolated from the higher-traffic compute instances used for interactive queries.
Procedure
- Log in to FusionInsight Manager as a user who can access the HetuEngine web UI and choose Cluster > Services > HetuEngine. The HetuEngine service page is displayed.
- In the Basic Information area on the Dashboard tab page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
- On the Compute Instance tab, locate the row containing the tenant to which the desired instance belongs and click Configure in the Operation column.
- In the Custom Configuration area, click Add to add the following parameters:
Table 1 Fault tolerance execution parameters

| Parameter | Example Value | Configuration File | Description |
|---|---|---|---|
| retry-policy | TASK | coordinator.config.properties, worker.config.properties | Retry policy for fault-tolerant execution. Value range: QUERY, TASK. |
| task-retry-attempts-per-task | 4 | coordinator.config.properties, worker.config.properties | Maximum number of attempts to retry a single task before a query failure is declared when task-level fault tolerance is enabled. Default value: 4. |
| query-retry-attempts | 4 | coordinator.config.properties, worker.config.properties | Maximum number of attempts to retry a query before a query failure is declared when query-level fault tolerance is enabled. Default value: 4. |
| fault-tolerant-execution-task-memory | 5GB | coordinator.config.properties, worker.config.properties | Takes effect only when retry-policy is set to TASK. If not set, the default value 5GB is used. This value is the estimated memory required for initial task allocation; each node assigns tasks based on its available memory and the estimated memory usage. A larger value means each task can use more memory but reduces cluster concurrency. You can adjust the value dynamically based on service requirements. |
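For reference, the custom configuration added in this example (task-level retry) corresponds to the following entries in coordinator.config.properties and worker.config.properties. The values are taken from Table 1 and are applied through the HSConsole custom configuration rather than by editing the files manually:

```properties
# Retry failed tasks instead of re-running the entire query.
retry-policy=TASK
# Declare the query failed after 4 retry attempts for any single task.
task-retry-attempts-per-task=4
# Estimated memory for initial task allocation (defaults to 5GB when unset).
fault-tolerant-execution-task-memory=5GB
```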
- Set Start Now to Yes and click OK.
- After task-level fault tolerance is enabled, intermediate exchange data is generated and cached in the file system, and a large number of concurrent queries puts heavy disk pressure on it. By default, HetuEngine buffers this intermediate data in the temporary directory in HDFS. In decoupled storage and compute scenarios where OBS is connected, task-level fault tolerance is supported, but the intermediate data is still flushed to the disk of the HDFS temporary directory.
- By default, the cluster clears buffer files when a query is complete and, every hour, checks for and clears residual buffer files that are older than one day. You can perform the following operations to disable this periodic clearing function:
Log in to FusionInsight Manager, choose Cluster > Services > HetuEngine, click Configurations and then All Configurations, click HSBroker(Role), select Fault-tolerance execution, set fte.exchange.clean.task.enabled to false, and save the configuration. Then click Instance, select all HSBroker instances, click More, select Restart Instance, and restart the instances as prompted for the configuration to take effect.
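Expressed as a property, the setting is as follows (shown for reference only; it is configured through the HSBroker role configuration in FusionInsight Manager as described above):

```properties
# Disable the hourly cleanup of residual exchange buffer files (cleanup is enabled by default).
fte.exchange.clean.task.enabled=false
```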