Configuring an Auto Scaling Rule
In big data application scenarios, especially real-time data analysis and processing, the number of cluster nodes needs to be dynamically increased or decreased as the data volume changes. The auto scaling function of MRS scales a cluster out or in automatically based on cluster load. In addition, if the data volume changes in a daily cycle and you want to scale the cluster out or in before the change occurs, you can use the MRS resource plan feature (setting the Task node quantity based on a time range).
- Auto scaling rules: Task nodes are increased or decreased based on real-time cluster load. Auto scaling is triggered when the data volume changes, though there may be some delay.
- Resource plan (setting the Task node quantity based on the time range): If the data volume changes periodically, you can create resource plans to resize the cluster before the data volume changes, thereby avoiding delays in increasing or decreasing resources.
You can configure auto scaling rules, resource plans, or both to trigger auto scaling. Configuring both improves cluster scalability to cope with occasional, unexpected data volume peaks.
In some service scenarios, resources need to be reallocated or service logic needs to be modified after cluster scale-out or scale-in. If you manually scale out or scale in a cluster, you can log in to cluster nodes to reallocate resources or modify service logic. If you use auto scaling, MRS enables you to customize automation scripts for resource reallocation and service logic modification. Automation scripts can be executed before and after auto scaling and automatically adapt to service load changes, all of which eliminates manual operations. In addition, automation scripts can be fully customized and executed at various moments, which can meet your personalized requirements and improve auto scaling flexibility.
You can configure auto scaling rules when creating a cluster or after a cluster has been created. This section describes how to configure auto scaling rules after cluster creation. For details about how to configure auto scaling rules during cluster creation, see Configuring Auto Scaling Rules When Creating a Cluster.
Background
You can configure auto scaling rules, resource plans, or both to trigger auto scaling.
- Auto scaling rules:
- You can set a maximum of five rules for scaling out or in a cluster, respectively.
- The system evaluates scale-out rules and then scale-in rules in the order you configured them. Important policies take precedence over others to prevent repeated triggering when a scale-out or scale-in does not achieve the expected effect.
- Comparison operators include greater than, greater than or equal to, less than, and less than or equal to.
- Cluster scale-out or scale-in is triggered only after the configured metric threshold has been breached for 5n consecutive minutes (n defaults to 1); see the sketch after this list.
- After each scale-out or scale-in, there is a cooldown period during which auto scaling is not triggered again. The cooldown period must be greater than 0 minutes and is 20 minutes by default.
- Each cluster scale-out or scale-in adds or removes at least 1 and at most 100 nodes.
- Resource plans (setting the number of Task nodes by time range):
- You can specify a Task node range (minimum number to maximum number) in a time range. If the number of Task nodes is beyond the Task node range in a resource plan, the system triggers cluster scale-out or scale-in.
- You can set a maximum of five resource plans for a cluster.
- A resource plan cycle is by day. The start time and end time can be set to any time point between 00:00 and 23:59. The start time must be at least 30 minutes earlier than the end time. Time ranges configured for different resource plans cannot overlap.
- After a resource plan triggers a cluster scale-out or scale-in, there is a 10-minute cooldown period during which auto scaling is not triggered again.
- When a resource plan is enabled, the number of Task nodes outside the time ranges covered by the resource plan is limited to the default node range you configured.
- If the resource plan is not enabled, the number of Task nodes is not limited by the resource plan; the default node range applies in all time ranges.
- Automation scripts:
- You can set an automation script so that it can automatically run on cluster nodes when auto scaling is triggered.
- You can set a maximum of 10 automation scripts for a cluster.
- You can specify an automation script to be executed on one or more types of nodes.
- Automation scripts can be executed before or after scale-out or scale-in.
- Before using automation scripts, upload them to a cluster VM or to an OBS file system in the same region as the cluster. Automation scripts uploaded to a cluster VM can be executed only on existing nodes. To run automation scripts on newly added nodes, upload them to the OBS file system.
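To make the rule semantics above concrete (the 5n-minute trigger window and the cooldown period), here is a minimal Python sketch. It is illustrative only: the function and variable names are ours, not an MRS API.

```python
import operator
import time

# Illustrative sketch of the rule semantics above; these names are
# ours, not an MRS API.
COMPARATORS = {
    ">": operator.gt, ">=": operator.ge,
    "<": operator.lt, "<=": operator.le,
}

def should_trigger(samples, comparator, threshold,
                   n=1, cooldown_minutes=20, last_trigger=None, now=None):
    """Trigger only if the last 5n one-minute samples all breach the
    threshold and the cooldown since the previous trigger has elapsed."""
    if now is None:
        now = time.time()
    # Cooldown: no new scale-out/in within the cooldown window.
    if last_trigger is not None and now - last_trigger < cooldown_minutes * 60:
        return False
    window = samples[-5 * n:]            # the most recent 5n samples
    if len(window) < 5 * n:              # not enough history yet
        return False
    breach = COMPARATORS[comparator]
    return all(breach(v, threshold) for v in window)

# Example: scale out when YARNMemoryAvailablePercentage < 20 has held
# for 5 consecutive minutes (n = 1) and no trigger fired recently.
history = [25, 19, 18, 17, 16, 15]       # one sample per minute
print(should_trigger(history, "<", 20))  # True
```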
Using Auto Scaling Rules Alone
- Log in to the MRS management console.
- Choose Clusters > Active Clusters, select a running cluster, and click its name. The cluster details page is displayed.
- On the Nodes tab page, click Auto Scaling in the Operation column of the Task node group. The Auto Scaling page is displayed.
If no Task node exists in the cluster, click Configure Task Node to add a Task node and then perform this step.
- Configure an auto scaling rule.
You can configure the auto scaling rule to adjust the number of nodes, which affects the actual price. Therefore, exercise caution when performing this operation.
- Auto Scaling: indicates whether to enable auto scaling. Auto scaling is disabled by default. After you enable it, you can configure the following parameters.
- Node Range
- Default Range: Enter a Task node range, in which auto scaling is performed. This constraint applies to all scale-in and scale-out rules. The value ranges from 0 to 500.
- Configure Node Range for Specific Time Range: This parameter is used to configure an auto scaling resource plan.
- Click Configure Node Range for Specific Time Range under Default Range.
- Configure the Time Range and Node Range parameters. Node Range specifies the number of Task nodes allowed within the configured Time Range; its value ranges from 0 to 500.
You can click Configure Node Range for Specific Time Range to configure multiple resource plans.
- If no node range is configured for a time range, the default node range applies.
- If a node range is configured for a time range, that range applies within the time range; outside it, the default range applies.
- Auto Scaling Rule: To enable auto scaling, you need to configure both scale-out and scale-in rules.
Configuration procedure:
- Select Scale-out or Scale-in.
- Click Add Rule. The Add Rule page is displayed.
- Configure the Rule Name, If, Last for, Add, and Cooldown Period parameters.
- Click OK.
You can view, edit, or delete the rules you configured in the Scale-out or Scale-in area on the Auto Scaling page. You can click Add Rule to configure multiple rules.
- Select I agree to authorize MRS to scale out or scale in nodes based on the above rule.
- Click OK.
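As a worked example of the rule-configuration step above, the records below capture one scale-out rule and one scale-in rule using the console field names (Rule Name, If, Last for, Add, Cooldown Period). This is a minimal sketch: the dictionary layout and threshold values are our own illustration, not the MRS API payload.

```python
# Minimal sketch of the rule parameters; the keys mirror the console
# labels, not the MRS API.
scale_out_rule = {
    "rule_name": "default-expand-1",
    "if": {                        # the "If" condition
        "metric": "YARNAppPending",
        "comparison": "greater than",
        "threshold": 75,
    },
    "last_for": 5,                 # minutes the condition must hold
    "add": 1,                      # Task nodes added per trigger (1-100)
    "cooldown_period": 20,         # minutes before the rule can fire again
}

scale_in_rule = {
    "rule_name": "default-shrink-1",
    "if": {
        "metric": "YARNAppPending",
        "comparison": "less than or equal to",
        "threshold": 10,
    },
    "last_for": 5,
    "remove": 1,                   # scale-in counterpart of Add (1-100)
    "cooldown_period": 20,
}
```

The thresholds here are arbitrary; choose metrics from Table 1 that match your cluster type.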
Using Resource Plans Alone
If the data volume changes regularly every day and you want to scale out or in a cluster before the data volume changes, you can create resource plans to adjust the number of Task nodes as planned in the specified time range.
For example, the service data volume for real-time processing peaks between 7:00 and 13:00 every day and remains stable and low at other times. Assume that an MRS streaming cluster is used to process the service data. Between 7:00 and 13:00, five Task nodes are required to process the peak data volume, and only two Task nodes are required for other time periods. You can perform the following steps to configure a resource plan.
- Log in to the MRS management console.
- Choose Clusters > Active Clusters, select a running cluster, and click its name. The cluster details page is displayed.
- On the Nodes tab page, click Auto Scaling in the Operation column of the Task node group. The Auto Scaling page is displayed.
If no Task node exists in the cluster, click Configure Task Node to add a Task node and then perform this step.
- Configure a resource plan.
You can configure the resource plan to adjust the number of nodes, which affects the actual price. Therefore, exercise caution when performing this operation.
Configuration procedure:
- On the Auto Scaling page, enable Auto Scaling.
- Set Default Range to 2-2, indicating that the number of Task nodes is fixed at 2 outside the time range specified in the resource plan.
- Click Configure Node Range for Specific Time Range under Default Range.
- Configure the Time Range and Node Range parameters. For example, set Time Range to 07:00-13:00 and Node Range to 5-5, indicating that the number of Task nodes is fixed at 5 within the time range specified in the resource plan. For details about the parameters, see Table 2.
You can click Configure Node Range for Specific Time Range to configure multiple resource plans.
- (Optional) Configure automation scripts. MRS 3.x does not support this operation.
- Set Advanced Settings to Configure.
- Click Create. The Automation Script page is displayed.
- Set the following parameters: Name, Script Path, Execution Node, Parameter, Executed, and Action upon Failure. For details about the parameters, see Table 3.
- Click OK to save the automation script configurations.
- Select I agree to authorize MRS to scale out or scale in nodes based on the above rule.
- Click OK.
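The scenario above (a fixed default of 2 nodes and a 07:00-13:00 peak of 5 nodes) can be written down and checked against the documented constraints. The sketch below is illustrative: the data layout is ours, and the validator is simplified (it does not handle plans ending exactly at midnight).

```python
from datetime import datetime

# Illustrative record of the example above; keys mirror console labels.
auto_scaling = {
    "enabled": True,
    "default_range": (2, 2),            # fixed at 2 Task nodes off-peak
    "resource_plans": [
        {"time_range": ("07:00", "13:00"), "node_range": (5, 5)},
    ],
}

def validate_plans(plans):
    """Check documented constraints: at most five plans, each at least
    30 minutes long within one day, and non-overlapping time ranges."""
    assert len(plans) <= 5, "a cluster allows at most five resource plans"
    parsed = []
    for p in plans:
        start = datetime.strptime(p["time_range"][0], "%H:%M")
        end = datetime.strptime(p["time_range"][1], "%H:%M")
        assert (end - start).total_seconds() >= 30 * 60, \
            "start time must be at least 30 minutes earlier than end time"
        lo, hi = p["node_range"]
        assert 0 <= lo <= hi <= 500, "node range must be within 0-500"
        parsed.append((start, end))
    parsed.sort()
    for (_, e1), (s2, _) in zip(parsed, parsed[1:]):
        assert e1 <= s2, "time ranges of resource plans cannot overlap"

validate_plans(auto_scaling["resource_plans"])  # passes silently
```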
Using Auto Scaling Rules and Resource Plans Together
If the data volume is unstable and fluctuates beyond expectations, a fixed Task node range cannot meet the requirements of some service scenarios. In this case, the number of Task nodes needs to be adjusted based on both real-time loads and resource plans.
For example, even though the service data volume for real-time processing changes regularly between 7:00 and 13:00 every day, it is still unstable. Assume that between 7:00 and 13:00 the number of required Task nodes ranges from 5 to 8, and at other times it ranges from 2 to 4. You can therefore set auto scaling rules on top of a resource plan: when the data volume exceeds the expected value, the number of Task nodes is adjusted as resource loads change, without exceeding the node range specified in the resource plan. When a resource plan is triggered, the number of nodes is adjusted into the specified range with minimum impact; that is, a count below the range is increased to the lower limit, and a count above the range is decreased to the upper limit. Perform the following steps to configure both the auto scaling rules and the resource plan:
- Log in to the MRS management console.
- Choose Clusters > Active Clusters, select a running cluster, and click its name. The cluster details page is displayed.
- On the Nodes tab page, click Auto Scaling in the Operation column of the Task node group. The Auto Scaling page is displayed.
- Configure an auto scaling rule.
You can configure the auto scaling rule to adjust the number of nodes, which affects the actual price. Therefore, exercise caution when performing this operation.
- Auto Scaling: indicates whether to enable auto scaling. Auto scaling is disabled by default. After you enable it, you can configure the following parameters.
- Default Range: Enter a Task node range, in which auto scaling is performed. This constraint applies to all scale-in and scale-out rules. Set this parameter to 2 to 4.
- Auto Scaling Rule: To enable auto scaling, you need to configure both scale-out and scale-in rules.
Configuration procedure:
- Select Scale-out or Scale-in.
- Click Add Rule. The Add Rule page is displayed.
- Configure the Rule Name, If, Last for, Add, and Cooldown Period parameters.
- Click OK.
You can view, edit, or delete the rules you configured in the Scale-out or Scale-in area on the Auto Scaling page.
- Configure a resource plan.
You can configure the resource plan to adjust the number of nodes, which affects the actual price. Therefore, exercise caution when performing this operation.
Configuration procedure:
- Click Configure Node Range for Specific Time Range under Default Range.
- Configure the Time Range and Node Range parameters. Set Time Range to 07:00-13:00 and Node Range to 5-8. For details about the parameters, see Table 2.
- You can click Configure Node Range for Specific Time Range to configure multiple resource plans.
- (Optional) Configure automation scripts. MRS 3.x does not support this operation.
- In Automation Script in the Advanced Settings, click Create. The Automation Script page is displayed.
- Set the following parameters: Name, Script Path, Execution Node, Parameter, Executed, and Action upon Failure. For details about the parameters, see Table 3.
- Click OK to save the automation script configurations.
- Select I agree to authorize MRS to scale out or scale in nodes based on the above rule.
- Click OK.
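Putting the pieces together, the combined configuration from this scenario could be summarized as below. Again, this is only a sketch using console field names; the metric and thresholds are assumptions for illustration, not recommendations.

```python
# Illustrative summary of the combined scenario: rules adjust the node
# count within whichever range is in force, and the resource plan
# replaces the default range during peak hours.
combined_policy = {
    "enabled": True,
    "default_range": (2, 4),            # off-peak Task node range
    "resource_plans": [
        {"time_range": ("07:00", "13:00"), "node_range": (5, 8)},
    ],
    "scale_out_rules": [{
        "rule_name": "peak-expand",
        "if": {"metric": "YARNMemoryAvailablePercentage",
               "comparison": "less than", "threshold": 20},
        "last_for": 5, "add": 1, "cooldown_period": 20,
    }],
    "scale_in_rules": [{
        "rule_name": "idle-shrink",
        "if": {"metric": "YARNMemoryAvailablePercentage",
               "comparison": "greater than", "threshold": 70},
        "last_for": 5, "remove": 1, "cooldown_period": 20,
    }],
}
```

Between 07:00 and 13:00 the rules can move the Task node count only within 5-8; at other times, only within 2-4.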
Related Information
Table 1 Auto scaling metrics

| Cluster Type | Metric | Value Type | Description |
|---|---|---|---|
| Streaming cluster | StormSlotAvailable | Integer | Number of available Storm slots. Value range: 0 to 2147483646 |
| Streaming cluster | StormSlotAvailablePercentage | Percentage | Percentage of available Storm slots, that is, the proportion of available slots to total slots. Value range: 0 to 100 |
| Streaming cluster | StormSlotUsed | Integer | Number of used Storm slots. Value range: 0 to 2147483646 |
| Streaming cluster | StormSlotUsedPercentage | Percentage | Percentage of used Storm slots, that is, the proportion of used slots to total slots. Value range: 0 to 100 |
| Streaming cluster | StormSupervisorMemAverageUsage | Integer | Average memory usage of the Supervisor process of Storm. Value range: 0 to 2147483646 |
| Streaming cluster | StormSupervisorMemAverageUsagePercentage | Percentage | Average percentage of memory used by the Supervisor process of Storm to the total system memory. Value range: 0 to 100 |
| Streaming cluster | StormSupervisorCPUAverageUsagePercentage | Percentage | Average percentage of CPUs used by the Supervisor process of Storm to the total CPUs. Value range: 0 to 6000 |
| Analysis cluster | YARNAppPending | Integer | Number of pending tasks on YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNAppPendingRatio | Ratio | Ratio of pending tasks on YARN, that is, the ratio of pending tasks to running tasks. Value range: 0 to 2147483646 |
| Analysis cluster | YARNAppRunning | Integer | Number of running tasks on YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNContainerAllocated | Integer | Number of containers allocated to YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNContainerPending | Integer | Number of pending containers on YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNContainerPendingRatio | Ratio | Ratio of pending containers on YARN, that is, the ratio of pending containers to running containers. Value range: 0 to 2147483646 |
| Analysis cluster | YARNCPUAllocated | Integer | Number of virtual CPUs (vCPUs) allocated to YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNCPUAvailable | Integer | Number of available vCPUs on YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNCPUAvailablePercentage | Percentage | Percentage of available vCPUs on YARN, that is, the proportion of available vCPUs to total vCPUs. Value range: 0 to 100 |
| Analysis cluster | YARNCPUPending | Integer | Number of pending vCPUs on YARN. Value range: 0 to 2147483646 |
| Analysis cluster | YARNMemoryAllocated | Integer | Memory allocated to YARN, in MB. Value range: 0 to 2147483646 |
| Analysis cluster | YARNMemoryAvailable | Integer | Available memory on YARN, in MB. Value range: 0 to 2147483646 |
| Analysis cluster | YARNMemoryAvailablePercentage | Percentage | Percentage of available memory on YARN, that is, the proportion of available memory to total memory. Value range: 0 to 100 |
| Analysis cluster | YARNMemoryPending | Integer | Pending memory on YARN. Value range: 0 to 2147483646 |
- When the value type in Table 1 is percentage or ratio, the value can be accurate to two decimal places. A percentage metric value is a decimal with the percent sign (%) removed; for example, 16.80 represents 16.80%.
- Hybrid clusters support all metrics of analysis and streaming clusters.
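As a small worked example of these value-type conventions (all figures below are invented for illustration):

```python
# Percentage metrics drop the "%" sign: 16.80 stands for 16.80%.
yarn_memory_available = 4300     # MB (illustrative figures)
yarn_memory_total = 25600        # MB
pct = round(yarn_memory_available / yarn_memory_total * 100, 2)
print(pct)                       # 16.8, i.e. 16.80%

# Ratio metrics are plain quotients, e.g. YARNAppPendingRatio is
# pending tasks divided by running tasks.
pending_tasks, running_tasks = 12, 48
print(round(pending_tasks / running_tasks, 2))   # 0.25
```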
Table 2 Resource plan configuration items

| Configuration Item | Description |
|---|---|
| Time Range | Start time and end time of a resource plan, accurate to minutes, with values ranging from 00:00 to 23:59. For example, if a resource plan starts at 8:00 and ends at 10:00, set this parameter to 8:00-10:00. The end time must be at least 30 minutes later than the start time. |
| Node Range | Number of nodes in a resource plan, ranging from 0 to 500. Within the time range specified in the resource plan, if the number of Task nodes is less than the minimum of the node range, it is increased to that minimum in one operation; if it is greater than the maximum, it is reduced to that maximum in one operation. The minimum number of nodes must be less than or equal to the maximum. |
- When a resource plan is enabled, the Default Range value on the auto scaling page takes effect outside the time ranges specified in resource plans. For example, if Default Range is set to 1-2 and a resource plan sets Node Range to 4-5 for Time Range 08:00-10:00, the number of Task nodes in the other periods of the day (00:00-08:00 and 10:00-23:59) is limited to the default range (1 to 2): if the number of nodes is greater than 2, auto scale-in is triggered; if it is less than 1, auto scale-out is triggered.
- When no resource plan is enabled, the Default Range takes effect in all time ranges. If the number of nodes falls outside the default range, the number of Task nodes is automatically increased or decreased into the default range.
- Time ranges of resource plans cannot overlap; overlapping ranges would mean two resource plans are in effect at the same time. For example, if resource plan 1 covers 08:00 to 10:00 and resource plan 2 covers 09:00 to 11:00, the period from 09:00 to 10:00 overlaps.
- The time range of a resource plan must be on the same day. For example, if you want to configure a resource plan from 23:00 to 01:00 in the next day, configure two resource plans whose time ranges are 23:00-00:00 and 00:00-01:00, respectively.
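To make the precedence between the default range and resource plans concrete, here is a minimal Python sketch; the helper names and data layout are ours, purely for illustration:

```python
from datetime import time as dtime

# Illustrative: Default Range 1-2, one enabled plan 08:00-10:00 at 4-5.
DEFAULT_RANGE = (1, 2)
PLANS = [(dtime(8, 0), dtime(10, 0), 4, 5)]

def effective_range(now, plans=PLANS, default=DEFAULT_RANGE):
    """Plan range applies inside its time range; the default range
    applies everywhere else (or everywhere if no plan is enabled)."""
    for start, end, lo, hi in plans:
        if start <= now < end:
            return (lo, hi)
    return default

def adjust(current, node_range):
    """Clamp with minimum impact: raise to the lower limit if below
    the range, cut to the upper limit if above it (see Table 2)."""
    lo, hi = node_range
    return max(lo, min(current, hi))

print(effective_range(dtime(9, 0)))    # (4, 5) -> plan in force
print(adjust(2, (4, 5)))               # 4 -> scaled out to the minimum
print(adjust(3, (1, 2)))               # 2 -> scaled in to the maximum
```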
Table 3 Automation script configuration items

| Configuration Item | Description |
|---|---|
| Name | Automation script name. The value can contain 1 to 64 characters, including only digits, letters, spaces, hyphens (-), and underscores (_), and must not start with a space. NOTE: A name must be unique within a cluster. The same name can be used in different clusters. |
| Script Path | Script path. The value can be an OBS file system path or a local VM path. |
| Execution Node | Type of node on which the automation script is executed. |
| Parameter | Automation script parameters. Predefined variables can be imported to obtain auto scaling information. |
| Executed | Time at which the automation script is executed. Four options are supported: Before scale-out, After scale-out, Before scale-in, and After scale-in. NOTE: Assume that the execution nodes include Task nodes. |
| Action upon Failure | Whether to continue executing subsequent scripts and the scale-out/in operation after the script fails to be executed. |
The automation script is triggered only during auto scaling. It is not triggered when the cluster node is manually scaled out or in.
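For reference, an automation script is just an ordinary script that you upload and that MRS runs at the configured moment. The following Python sketch of a hypothetical post-scale-out script is entirely illustrative: MRS does not prescribe this structure, the log path is an assumption, and you would replace the body with your own resource reallocation or service logic.

```python
#!/usr/bin/env python3
"""Hypothetical post-scale-out automation script (illustration only)."""
import subprocess
import sys
from datetime import datetime

LOG_FILE = "/tmp/scale_hook.log"   # assumed path; choose your own

def main():
    with open(LOG_FILE, "a") as log:
        log.write(f"{datetime.now().isoformat()} scale hook started\n")
        # Placeholder service logic: refresh the YARN node list so newly
        # added Task nodes are picked up immediately.
        result = subprocess.run(
            ["yarn", "rmadmin", "-refreshNodes"],
            capture_output=True, text=True,
        )
        log.write(result.stdout + result.stderr)
        # A non-zero exit code lets "Action upon Failure" decide whether
        # subsequent scripts and the scaling operation continue.
        sys.exit(result.returncode)

if __name__ == "__main__":
    main()
```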