Configuring the Number of HetuEngine Worker Nodes

Updated on 2024-10-25 GMT+08:00

Scenario for Configuring the Number of HetuEngine Worker Nodes

On the HetuEngine web UI, you can adjust the number of worker nodes of a compute instance: add worker nodes when resources are insufficient and remove them when they are idle. The number of worker nodes can be adjusted manually or automatically.

  • When an instance is being scaled in or out, the original services are not affected and the instance can still be used.
  • Dynamic instance scale-in/out is delayed so that resource consumption is adjusted smoothly over a long period of time. It cannot respond in real time to the resource requirements of running SQL tasks.
  • After instances are dynamically scaled in or out, the number of worker nodes displayed in the instance configuration area on the HSConsole page remains the initial value and does not change with dynamic scaling.
  • The dynamic instance scale-in/out function will be affected if the HSBroker and YARN services are restarted after the function is enabled. Disable the function before you restart these services.
  • Before scaling out a compute instance, ensure that the current queue has sufficient resources. Otherwise, the scale-out cannot reach the expected result and subsequent scale-in operations will be affected.
  • You can set the timeout period for manual instance scale-in/out. To do so, log in to FusionInsight Manager, choose HetuEngine > Configurations > All Configurations, search for application.customized.properties, and add the yarn.hetuserver.engine.flex.timeout.sec parameter. The default value is 300 (in seconds).
  • You can specify whether scale-out tasks wait in a queue when YARN resources are insufficient during manual scale-out.

    On FusionInsight Manager, choose HetuEngine > Configurations > All Configurations, search for application.customized.properties, and add the yarn.hetuserver.engine.worker.scale.out.resource.limit parameter. The options are as follows:

    • true (default value): The system checks YARN resource usage before delivering a scale-out task. If resources are sufficient, the scale-out is performed; if they are insufficient, the scale-out is not performed.
    • false: Scale-out tasks are delivered to YARN without checking whether YARN resources are sufficient. If resources are sufficient, the scale-out is performed; if they are insufficient, the scale-out task waits in a queue. An example showing both custom parameters in application.customized.properties follows this list.
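
The two custom parameters above are added as plain key-value entries under application.customized.properties. The following snippet is only an illustrative sketch of what those entries could look like; the values shown are the documented defaults, so adjust them for your cluster:

```properties
# Custom parameters added under application.customized.properties on FusionInsight Manager
# (example values only; adjust to your cluster)

# Timeout, in seconds, for manual instance scale-in/out (default: 300)
yarn.hetuserver.engine.flex.timeout.sec=300

# Whether to check YARN resource usage before delivering a manual scale-out task
# true (default): scale out only when YARN resources are sufficient
# false: deliver the task regardless; it waits in a queue if resources are insufficient
yarn.hetuserver.engine.worker.scale.out.resource.limit=true
```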

Procedure for Configuring the Number of HetuEngine Worker Nodes

  1. Log in to FusionInsight Manager as a user who can access the HetuEngine web UI and choose Cluster > Services > HetuEngine. The HetuEngine service page is displayed.
  2. In the Basic Information area on the Dashboard tab page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
  3. On the Compute Instance page, locate the row that contains the tenant to which the target instance belongs and click Configure in the Operation column.

    • If manual scaling is required, change the value of Quantity in the Worker Container Resource Configuration area on the configuration page and click OK. The compute instance status changes to SCALING OUT or SCALING IN. After the scaling is complete, the compute instance status changes to RUNNING.
    • If automatic scaling is required, set Scaling in Advanced Configuration to Yes and configure the parameters described in Table 1 to enable dynamic scaling. (A sketch showing how these parameters interact is provided after this procedure.)
      NOTE:

      Compute instances in the Running state are scaled in or out based on the configured auto scaling parameters. For compute instances in other states, only the configuration is saved, and the saved configuration takes effect when the compute instances are restarted.

      Table 1 Dynamic scaling parameters

      • Load Collection Period: Interval, in seconds, at which instance load information is collected. Example value: 10
      • Scale-out Threshold: When the average resource usage of the instance during the decision period exceeds this threshold, the instance starts to scale out. Example value: 0.9
      • Scale-out Size: Number of worker nodes added each time the instance scales out. Example value: 1
      • Scale-out Decision Period: Interval, in seconds, at which the system determines whether to scale out the instance. Example value: 200
      • Scale-out Timeout Period: Timeout period of a scale-out operation, in seconds. Example value: 400
      • Scale-in Threshold: When the average resource usage of the instance during the decision period falls below this threshold, the instance starts to scale in. Example value: 0.1
      • Scale-in Size: Number of worker nodes removed each time the instance scales in. Example value: 1
      • Scale-in Decision Period: Interval, in seconds, at which the system determines whether to scale in the instance. Example value: 300
      • Scale-in Timeout Period: Timeout period of a scale-in operation, in seconds. Example value: 600

  4. After the configuration, click OK.
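
To make the relationship between the Table 1 parameters concrete, the following short Python sketch shows how a scaling decision could be derived from them. It is not HetuEngine code: the class, function, and sample values are hypothetical, and the real service additionally applies the decision periods, timeout periods, and the YARN resource checks described earlier.

```python
# Illustrative sketch only (not HetuEngine source code). It mimics how the
# Table 1 thresholds and sizes are typically interpreted; names are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ScalingConfig:
    scale_out_threshold: float = 0.9   # average usage above this triggers scale-out
    scale_out_size: int = 1            # worker nodes added per scale-out
    scale_in_threshold: float = 0.1    # average usage below this triggers scale-in
    scale_in_size: int = 1             # worker nodes removed per scale-in

def decide_worker_count(usage_samples, current_workers, cfg):
    """Return the target worker count from the usage samples gathered during
    one decision period (one sample per Load Collection Period)."""
    avg_usage = mean(usage_samples)
    if avg_usage > cfg.scale_out_threshold:
        return current_workers + cfg.scale_out_size
    if avg_usage < cfg.scale_in_threshold:
        return max(1, current_workers - cfg.scale_in_size)
    return current_workers

# Usage samples gathered during one scale-out decision period
samples = [0.95, 0.92, 0.97, 0.91]
print(decide_worker_count(samples, current_workers=3, cfg=ScalingConfig()))  # prints 4
```

With the example values in Table 1, usage samples that average above 0.9 add one worker node, and samples that average below 0.1 remove one.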
