Updated on 2024-11-12 GMT+08:00

Configuring a Workload Upgrade Policy

In real-world applications, upgrades are a routine operation. A Deployment, StatefulSet, or DaemonSet makes application upgrades straightforward.

Configuring the Workload Upgrade Policy on the Console

  1. When creating a workload, click Expand.
  2. Configure the workload upgrade policy based on Table 1.

    Table 1 Parameters for configuring the workload upgrade policy

    Upgrade Mode

    You can set different upgrade policies:

    • Rolling upgrade: New pods are created gradually and then old pods are deleted. This is the default policy.
    • Replace upgrade: The current pods are deleted and then new pods are created.

    Max. Unavailable Pods (maxUnavailable)

    Specifies the maximum number of pods that can be unavailable during the upgrade. The default value is 25%. For example, if spec.replicas is set to 4, at least three pods remain available throughout the upgrade, that is, the deletion step is 1. The value can also be set to an absolute number. When a percentage is used, the absolute number is calculated by rounding down.

    This parameter is available only for Deployments.

    Max. Surge (maxSurge)

    Specifies the maximum number of pods that can exist over spec.replicas. The default value is 25%. For example, if spec.replicas is set to 4, no more than five pods can exist during the upgrade, that is, the upgrade step is 1. When a percentage is used, the absolute number is calculated by rounding up. The value can also be set to an absolute number.

    This parameter is available only for Deployments.

    Min. Ready Seconds (minReadySeconds)

    Specifies the minimum time a newly created pod must be ready, without any of its containers crashing, before it is considered available. The default value is 0 (the pod is considered available as soon as it is ready).

    Revision History Limit (revisionHistoryLimit)

    Specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSet, so once an old ReplicaSet is deleted, the ability to roll back to that revision of the Deployment is lost. By default, 10 old ReplicaSets are kept, but the ideal value depends on the frequency and stability of new Deployments.

    Max. Upgrade Duration (progressDeadlineSeconds)

    Specifies the number of seconds the system waits for a Deployment to make progress before reporting a progress failure. The failure is surfaced as a condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded in the status of the resource. The Deployment controller keeps retrying the Deployment. If automatic rollback is implemented in the future, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

    If this parameter is specified, its value must be greater than that of .spec.minReadySeconds.

    Scale-In Time Window (terminationGracePeriodSeconds)

    Specifies the graceful deletion time. The default value is 30 seconds. When a pod is deleted, a SIGTERM signal is sent first and the system waits for the application in the container to exit. If the application has not exited within the time specified by terminationGracePeriodSeconds, a SIGKILL signal is sent to forcibly terminate the pod.

    Figure 1 Upgrade policies
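
The console parameters above map directly onto fields in the Deployment manifest. The following is a minimal sketch, with placeholder image and label values, showing where each field lives:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  revisionHistoryLimit: 10        # Revision History Limit
  progressDeadlineSeconds: 600    # Max. Upgrade Duration
  minReadySeconds: 0              # Min. Ready Seconds
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate           # Upgrade Mode (RollingUpdate or Recreate)
    rollingUpdate:
      maxUnavailable: 25%         # Max. Unavailable Pods
      maxSurge: 25%               # Max. Surge
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 30   # Scale-In Time Window
      containers:
      - name: nginx
        image: nginx:alpine
```

Note that maxUnavailable and maxSurge live under strategy.rollingUpdate, while terminationGracePeriodSeconds belongs to the pod template spec rather than the Deployment spec.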

Rolling Back the Workload Version on the Console

Rollback means restoring an application to its previous version when a fault occurs during an upgrade. A Deployment can be easily rolled back to a previous version.

  1. On the cluster details page, choose Workloads and click the name of the workload to be rolled back.
  2. Click the Change History tab, locate the target version, click Roll Back to This Version, and click OK. Wait until the workload version is rolled back.

Configuring the Workload Upgrade Policy Using the CLI

A Deployment is upgraded declaratively. That is, you only need to modify the YAML definition of the Deployment. For example, you can run the kubectl edit command to change the Deployment image to nginx:alpine. After the modification, query the ReplicaSets and pods. The output shows that a new ReplicaSet has been created and the pods have been re-created.

$ kubectl edit deploy nginx

$ kubectl get rs
NAME               DESIRED   CURRENT   READY     AGE
nginx-6f9f58dffd   2         2         2         1m
nginx-7f98958cdf   0         0         0         48m

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-6f9f58dffd-tdmqk   1/1       Running   0          1m
nginx-6f9f58dffd-tesqr   1/1       Running   0          1m
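
As an alternative to interactive editing, the same image change can be made in one step with kubectl set image (this assumes the container in the pod template is named nginx):

```
$ kubectl set image deployment/nginx nginx=nginx:alpine
```

This is convenient in scripts and CI pipelines, where an interactive editor is not available.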

The Deployment can use the maxSurge and maxUnavailable parameters to control the proportion of pods to be re-created during the upgrade, which is useful in many scenarios. The configuration is as follows:

spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate

In the preceding example, the value of spec.replicas is 2. If both maxSurge and maxUnavailable were the default value 25%, maxSurge would allow a maximum of three pods to exist (2 × 1.25 = 2.5, rounded up to 3), and maxUnavailable would not allow any pod to be unavailable (2 × 0.25 = 0.5, rounded down to 0). With the explicit configuration above (maxSurge: 1, maxUnavailable: 0), two pods are always running during the upgrade. Each time a new pod becomes ready, an old pod is deleted, until all pods are new.
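
The rounding rules can be sketched in a few lines of Python. This is only an illustration of the arithmetic the Deployment controller applies to percentage values (maxSurge rounds up, maxUnavailable rounds down), not a Kubernetes API:

```python
import math

def resolve_percentages(replicas, max_surge_pct, max_unavailable_pct):
    """Resolve percentage-based maxSurge/maxUnavailable to absolute pod counts.

    maxSurge is rounded up; maxUnavailable is rounded down.
    """
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# With spec.replicas = 2 and the default 25%/25%:
surge, unavailable = resolve_percentages(2, 25, 25)
# surge = 1 (up to 3 pods may exist), unavailable = 0 (2 pods always running)
```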

Rolling Back the Workload Version Using the CLI

For example, if the upgraded image is faulty, you can run the kubectl rollout undo command to roll back the Deployment.

$ kubectl rollout undo deployment nginx
deployment.apps/nginx rolled back

A Deployment can be easily rolled back because it uses ReplicaSets to control its pods. After an upgrade, the previous ReplicaSet still exists, and the Deployment is rolled back by using that ReplicaSet to re-create the pods. The number of old ReplicaSets retained by a Deployment is restricted by the revisionHistoryLimit parameter, whose default value is 10.
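
By default, kubectl rollout undo returns to the immediately previous revision. To roll back to an earlier one, you can list the revision history first and then target a specific revision (the revision number below is illustrative):

```
$ kubectl rollout history deployment nginx
$ kubectl rollout undo deployment nginx --to-revision=1
```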