
Implementing Sticky Session Through Load Balancing

Concepts

Session persistence is one of the most common yet most complex problems in load balancing.

Session persistence is also called sticky sessions. After sticky sessions are enabled, the load balancer distributes requests from the same client to the same backend ECS, providing better continuity.

In load balancing and sticky sessions, connections and sessions are two key concepts. From a pure load balancing perspective, a session and a connection refer to the same thing.

Simply put, interactions that require a user to log in can be regarded as a session; otherwise, they are just a connection.

The sticky session mechanism fundamentally conflicts with the basic function of load balancing: a load balancer distributes client requests across multiple backend servers to prevent any single server from being overloaded, whereas sticky sessions require certain requests to be forwarded to the same server for processing. Therefore, select a sticky session mechanism appropriate to your application environment.

Prerequisites

Layer-4 Load Balancing (Service)

In layer-4 load balancing, source IP address-based sticky sessions (hash routing based on the client IP address) can be enabled. To enable source IP address-based sticky sessions for a Service, the following conditions must be met:

  1. Service Affinity of the Service is set to Node level (that is, the externalTrafficPolicy field of the Service is set to Local).
  2. Source IP address-based sticky session is enabled in the load balancing configuration of the Service. For details about console operations and YAML fields, see LoadBalancer.

  3. Anti-affinity is enabled for the backend applications of the Service. For details about how to enable anti-affinity, see Pod Anti-Affinity.
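For reference, the first two conditions can be sketched in a Service manifest. The annotation key and value below follow the kubernetes.io/elb.* convention used elsewhere in this document; treat the exact key and value as assumptions to verify against the LoadBalancer documentation for your CCE version:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Assumed CCE ELB annotation: hash requests to backends by client source IP.
    kubernetes.io/elb.session-affinity-mode: SOURCE_IP
spec:
  type: LoadBalancer
  # Node-level service affinity (condition 1): traffic stays on the receiving
  # node, preserving the client source IP so IP-based stickiness can work.
  externalTrafficPolicy: Local
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```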

Layer-7 Load Balancing (Ingress)

In layer-7 load balancing, sticky sessions based on HTTP cookies or application cookies can be enabled. To enable such sticky sessions, the following conditions must be met:

  1. Anti-affinity is enabled for the applications (workloads) corresponding to the ingress. For details about how to enable anti-affinity, see Pod Anti-Affinity.
  2. Node affinity is enabled for the Service corresponding to the ingress. For details about how to enable node affinity, see Node Affinity.
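A sketch of an ingress with HTTP cookie-based sticky sessions enabled through annotations. The kubernetes.io/elb.session-affinity-option annotation appears later in this document; the HTTP_COOKIE mode value and the host are illustrative assumptions to verify for your CCE version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Assumed mode value; use APP_COOKIE (with a cookie name) for app cookies.
    kubernetes.io/elb.session-affinity-mode: HTTP_COOKIE
    # Sticky session timeout in minutes (annotation shown in this document).
    kubernetes.io/elb.session-affinity-option: '{"persistence_timeout":"10"}'
spec:
  rules:
    - host: www.zq.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx
                port:
                  number: 80
```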

Procedure

  1. Create an Nginx workload.

    1. Log in to the CCE console. In the navigation pane, choose Workloads > Deployments. Click Create Deployment.
    2. Enter a workload name and set Instances to 3.

    3. Click Next: Add Container. In the dialog box displayed, click Add Container. On the Open Source Images tab page, select the nginx image and then click OK.

    4. Retain the default values of image parameters and click Next: Set Application Access to set the workload access type.

      Click Add Service. In this example, set Access Type to NodePort and Container Port to 80.

    5. Click Next: Configure Advanced Settings, choose Inter-Pod Affinity and Anti-affinity > Anti-affinity with Pods, click Add, and select the current workload. Click OK.

  2. In the navigation pane on the left, choose Resource Management > Network. On the Ingresses tab page, click Create Ingress.
  3. Set the ingress parameters.

    Figure 1 Creating an ingress

    Configure the forwarding policy. In ELB Settings, enable the sticky session function, and click Create.

    Figure 2 ELB Settings

    If the application cookie is selected, the cookie name must be specified.

    After the ingress is created, it is displayed in the ingress list.

    Figure 3 Viewing the created ingress

  4. Choose Service List > Network > Elastic Load Balance. Click the name of the load balancer to go to its details page. On the Backend Server Groups tab page, verify that Sticky Session is set to Enabled.

    Figure 4 Enabling the sticky session feature

  5. Log in to a node bound with an EIP and run the following commands. In this example, the cookie returned by the load balancer is saved to a local file named test.

    1. Save the cookie.

      curl -H "Host:www.zq.com" http://EIP:80 -c test

    2. Access the ingress.

      curl -H "Host:www.zq.com" http://EIP:80 -b test

    3. View logs.

      kubectl logs podname

      The command output shows access logs from only one pod, that is, all requests from the same client were distributed to the same pod.

    In this example, the kubernetes.io/elb.session-affinity-option: '{"persistence_timeout":"10"}' key-value pair in the annotations of the ingress sets the sticky session timeout to 10 minutes. If you run curl -H "Host:www.zq.com" http://EIP:80 -b test again more than 10 minutes later and then run kubectl logs podname, the output will contain access logs from at least one other pod, because the sticky session has timed out and requests are once again distributed across the backend pods.
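The anti-affinity setting configured in step 1.5 corresponds to a pod anti-affinity rule in the Deployment manifest, which spreads the three replicas across different nodes. A minimal sketch, assuming the workload is labeled app: nginx (the label name is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          # Required anti-affinity with the workload's own pods: no two
          # replicas may be scheduled onto the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```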