
Creating a LoadBalancer Service

Scenario

LoadBalancer Services allow workloads to be accessed from the public network through ELB, which is more reliable than EIP-based access. The LoadBalancer access address is in the format of <IP address of the public network load balancer>:<access port>, for example, 10.117.117.117:80.

If dedicated load balancers are deployed for CCE Autopilot clusters, passthrough networking is supported to reduce the network latency and ensure zero performance loss.

External access requests are directly forwarded from a load balancer to pods. Internal access requests can be forwarded to a pod through a Service.

Figure 1 Passthrough networking

Constraints

  • Automatically created load balancers should not be used by other resources. Otherwise, they cannot be completely deleted.
  • Select dedicated load balancers that have a private IP address bound and support network load balancing (load balancing over TCP or UDP). If a Service needs to support HTTP, the dedicated load balancers must also support application load balancing (load balancing over HTTP or HTTPS).

Creating a LoadBalancer Service

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane on the left, choose Services & Ingresses. In the upper right corner, click Create Service.
  3. Configure parameters.

    • Service Name: Specify a Service name, which can be the same as the workload name.
    • Service Type: Select LoadBalancer.
    • Namespace: Select the namespace that the workload belongs to.
    • Service Affinity

      Cluster level: The IP addresses and access ports of all nodes in a cluster can be used to access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained.

    • Selector: Add a label and click Confirm. A Service selects a pod based on the added label. You can also click Reference Workload Label to reference the label of an existing workload. In the dialog box that is displayed, select a workload and click OK.
    • Load Balancer: Select the load balancer type and specify whether to use an existing load balancer or automatically create one.

      Select Dedicated. A dedicated load balancer supports Network (TCP/UDP), Application (HTTP/HTTPS), or Network (TCP/UDP) & Application (HTTP/HTTPS).

      Select either Use existing or Auto create. For more information, see Table 1.
      Table 1 Load balancer configurations

      Use existing
        Only load balancers in the same VPC as the cluster can be selected. If no load balancer is available, click Create Load Balancer to create one on the ELB console.

      Auto create
        • Instance Name: Enter a load balancer name.
        • Enterprise Project: This parameter is only available for enterprise users who have enabled an enterprise project. Enterprise projects facilitate project-level management and grouping of cloud resources and users.
        • AZ: This parameter is only available for dedicated load balancers. You can create load balancers in multiple AZs to improve service availability. If disaster recovery is required, you are advised to select multiple AZs.
        • Frontend Subnet: This parameter is used to allocate IP addresses to load balancers to receive traffic from clients.
        • Backend Subnet: This parameter is used to allocate IP addresses for load balancers to route traffic to pods.
        • Network/Application-oriented Specifications
          • Elastic: applies to fluctuating traffic and is billed based on the total traffic.
          • Fixed: applies to stable traffic and is billed based on specifications.
        • EIP: If you select Auto create, you can select a bandwidth billing option and set the bandwidth.
        • Resource Tag: You can add resource tags to classify resources. You can create predefined tags on the TMS console. Predefined tags are available to all resources that support tags and improve tag creation and resource migration efficiency. This is supported by clusters of v1.27.5-r0, v1.28.3-r0, and later versions.

      You can click Edit in the Set ELB area to set the load balancing algorithm and sticky session.

      • Algorithm: Three algorithms are available: weighted round robin, weighted least connections, and source IP hash.
        • Weighted round robin: Requests are forwarded to different servers based on their weights, which indicate server processing performance. Backend servers with higher weights receive proportionately more requests, whereas equal-weighted servers receive the same number of requests. This algorithm is often used for short connections, such as HTTP services.
        • Weighted least connections: In addition to the weight assigned to each server, the number of connections processed by each backend server is considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on least connections, the weighted least connections algorithm assigns a weight to each server based on their processing capability. This algorithm is often used for persistent connections, such as database connections.
        • Source IP hash: The source IP address of each request is calculated using the hash algorithm to obtain a unique hash key, and all backend servers are numbered. The generated key allocates the client to a particular server. This enables requests from different clients to be distributed in load balancing mode and ensures that requests from the same client are forwarded to the same server. This algorithm applies to TCP connections without cookies.
      • Type: This option is disabled by default. To enable sticky sessions, select Source IP address. In source IP address-based sticky sessions, requests from the same IP address are forwarded to the same backend server.

        When Source IP hash is used for load balancing, sticky sessions are not available.

    • Health Check: Configure health check for the load balancer.
      • Global health check: applies only to ports using the same protocol. You are advised to select Custom health check.
      • Custom health check: applies to ports using different protocols.
      Table 2 Health check parameters

      Protocol
        When the protocol is set to TCP, both TCP and HTTP are supported. When the protocol is set to UDP, only UDP is supported.
        Check Path: This parameter is only available for HTTP health check. It specifies the URL for health check. The check path must start with a slash (/) and contain 1 to 80 characters.

      Port
        By default, the service ports are used for health check. You can also specify another port for health check. If a port is specified, a service port named cce-healthz will be added for the Service.
        Container Port: When a dedicated load balancer is associated with an elastic network interface, the container port is used for health check. The value ranges from 1 to 65535.

      Check Period (s)
        Specifies the maximum interval between health checks. The value ranges from 1 to 50.

      Timeout (s)
        Specifies the maximum timeout duration for each health check. The value ranges from 1 to 50.

      Max. Retries
        Specifies the maximum number of health check retries. The value ranges from 1 to 10.

    • Port
      • Protocol: Select the protocol used by the Service.
      • Service Port: Specify the port used by the Service. The port number ranges from 1 to 65535.
      • Container Port: Specify the port on which the workload listens. For example, Nginx uses port 80 by default.
      • Frontend Protocol: Set the protocol used by the load balancer listener to establish connections with clients. For a dedicated load balancer, to use HTTP/HTTPS, the load balancer type must be Application (HTTP/HTTPS).
      • Health Check: If Health Check is set to Custom health check, you can configure health check for ports that come with different protocols. For details, see Table 2.

      When a LoadBalancer Service is created, a random node port number (NodePort) is automatically generated.

    • Annotation: A LoadBalancer Service has some advanced features, which are implemented by annotations. For details, see Using Annotations to Configure Load Balancing.

  4. Click OK.
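
    (Optional) After the Service is created, you can verify it with kubectl if kubectl access to the cluster has been configured. This is a minimal sketch; replace nginx and default with your actual Service name and namespace.

    # Check that the Service exists and that an external IP address (the load balancer address) has been assigned.
    kubectl get svc nginx -n default

    # View the ELB-related annotations and events recorded for the Service.
    kubectl describe svc nginx -n default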

Using kubectl to Create a Service (Using an Existing Load Balancer)

You can set the Service when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl.

  1. Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Create the nginx-deployment.yaml and nginx-elb-svc.yaml files and edit them.

    The file names are user-defined. nginx-deployment.yaml and nginx-elb-svc.yaml are merely example file names.

    vi nginx-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx 
            name: nginx
          imagePullSecrets:
          - name: default-secret

    vi nginx-elb-svc.yaml

    apiVersion: v1 
    kind: Service 
    metadata: 
      name: nginx
      annotations:
        kubernetes.io/elb.id: <your_elb_id>                         # ELB ID. Replace it with the actual value.
        kubernetes.io/elb.class: performance                   # Load balancer type
        kubernetes.io/elb.lb-algorithm: ROUND_ROBIN                   # Load balancer algorithm
        kubernetes.io/elb.session-affinity-mode: SOURCE_IP          # The sticky session type is source IP address.
        kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}'     # Stickiness duration (min)
        kubernetes.io/elb.health-check-flag: 'on'                   # Enable the ELB health check function.
        kubernetes.io/elb.health-check-option: '{
          "protocol":"TCP",
          "delay":"5",
          "timeout":"10",
          "max_retries":"3"
        }'
    spec:
      selector: 
         app: nginx
      ports: 
      - name: service0 
        port: 80     # Port for accessing the Service, which is also the listener port on the load balancer.
        protocol: TCP 
        targetPort: 80  # Port used by a Service to access the target container. This port is closely related to the applications running in a container.
        nodePort: 31128  # Port number on the node. If this parameter is not specified, a random port number ranging from 30000 to 32767 is generated.
      type: LoadBalancer

    This example uses annotations to implement some advanced features of load balancing, such as sticky sessions and health check. For details, see Table 3.

    For more annotations and examples related to advanced features, see Using Annotations to Configure Load Balancing.

    Table 3 annotations parameters

    kubernetes.io/elb.id
      Mandatory: Yes
      Type: String
      Description: ID of an enhanced load balancer. Mandatory when an existing load balancer is to be associated.
      How to obtain: On the management console, click Service List and choose Networking > Elastic Load Balance. Click the name of the load balancer. On the Summary tab, find and copy the ID.

    kubernetes.io/elb.class
      Mandatory: Yes
      Type: String
      Description: The value can be:
      • performance: dedicated load balancer
      NOTE: If a LoadBalancer Service accesses an existing dedicated load balancer, the dedicated load balancer must support TCP/UDP networking.

    kubernetes.io/elb.lb-algorithm
      Mandatory: No
      Type: String
      Description: Specifies the load balancing algorithm of the backend server group. The default value is ROUND_ROBIN. Options:
      • ROUND_ROBIN: weighted round robin algorithm
      • LEAST_CONNECTIONS: weighted least connections algorithm
      • SOURCE_IP: source IP hash algorithm
      NOTE: If this parameter is set to SOURCE_IP, the weight setting (weight field) of the backend servers bound to the backend server group is invalid, and sticky sessions cannot be enabled.

    kubernetes.io/elb.session-affinity-mode
      Mandatory: No
      Type: String
      Description: In source IP address-based sticky sessions, requests from the same IP address are forwarded to the same backend server.
      • Disabling sticky sessions: Do not configure this parameter.
      • Enabling sticky sessions: Set this parameter to SOURCE_IP, indicating that sticky sessions are based on the source IP address.
      NOTE: When kubernetes.io/elb.lb-algorithm is set to SOURCE_IP (source IP hash), sticky sessions cannot be enabled.

    kubernetes.io/elb.session-affinity-option
      Mandatory: No
      Type: Table 4 object
      Description: Sticky session timeout.

    kubernetes.io/elb.health-check-flag
      Mandatory: No
      Type: String
      Description: Whether to enable the ELB health check.
      • Enabling health check: Leave this parameter blank or set it to on.
      • Disabling health check: Set this parameter to off.
      If health check is enabled, the kubernetes.io/elb.health-check-option field must also be specified.

    kubernetes.io/elb.health-check-option
      Mandatory: No
      Type: Table 5 object
      Description: ELB health check configuration items.

    Table 4 elb.session-affinity-option data structure

    persistence_timeout
      Mandatory: Yes
      Type: String
      Description: Sticky session timeout, in minutes. This parameter is valid only when elb.session-affinity-mode is set to SOURCE_IP.
      Value range: 1 to 60. Default value: 60
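
    For reference, the sticky session annotations described in Table 3 and Table 4 are typically set together, as in the example manifest above. A minimal annotation fragment might look like the following (the 30-minute timeout is illustrative):

    # Enable source IP address-based sticky sessions with a 30-minute timeout.
    # Not available when kubernetes.io/elb.lb-algorithm is set to SOURCE_IP.
    kubernetes.io/elb.session-affinity-mode: SOURCE_IP
    kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}'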

    Table 5 elb.health-check-option data structure

    delay
      Mandatory: No
      Type: String
      Description: Health check interval, in seconds.
      Value range: 1 to 50. Default value: 5

    timeout
      Mandatory: No
      Type: String
      Description: Health check timeout, in seconds.
      Value range: 1 to 50. Default value: 10

    max_retries
      Mandatory: No
      Type: String
      Description: Maximum number of health check retries.
      Value range: 1 to 10. Default value: 3

    protocol
      Mandatory: No
      Type: String
      Description: Health check protocol.
      Value options: TCP or HTTP

    path
      Mandatory: No
      Type: String
      Description: Health check URL. This parameter needs to be configured when the protocol is HTTP.
      Default value: /
      Value range: 1 to 80 characters
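
    For reference, an HTTP health check built from the fields in Table 5 might be configured as follows. This is only a sketch; the check path /healthz is an assumed example and must match a URL that your containers actually serve:

    kubernetes.io/elb.health-check-flag: 'on'                  # Enable the ELB health check function.
    kubernetes.io/elb.health-check-option: '{
      "protocol": "HTTP",
      "path": "/healthz",
      "delay": "5",
      "timeout": "10",
      "max_retries": "3"
    }'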

  3. Create a workload.

    kubectl create -f nginx-deployment.yaml

    If information similar to the following is displayed, the workload has been created.

    deployment/nginx created

    kubectl get pod

    If information similar to the following is displayed, the workload is running.

    NAME                     READY     STATUS             RESTARTS   AGE
    nginx-2601814895-c1xhw   1/1       Running            0          6s

  4. Create a Service.

    kubectl create -f nginx-elb-svc.yaml

    If information similar to the following is displayed, the Service has been created.

    service/nginx created

    kubectl get svc

    If information similar to the following is displayed, the access type has been set, and the workload is accessible.

    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP      10.247.0.1       <none>        443/TCP        3d
    nginx        LoadBalancer   10.247.130.196   10.78.42.242   80:31540/TCP   51s

  5. Enter the URL in the address bar of the browser, for example, 10.78.42.242:80. In the URL, 10.78.42.242 indicates the IP address of the load balancer, and 80 indicates the access port displayed on the CCE console.

    Nginx is accessible.

    Figure 2 Accessing Nginx through the LoadBalancer Service
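
    Alternatively, you can verify access from any host that can reach the load balancer, for example with curl (10.78.42.242:80 is the example address and port above):

    curl http://10.78.42.242:80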

Using kubectl to Create a Service (Automatically Creating a Load Balancer)

You can set the Service when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl.

  1. Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Create the nginx-deployment.yaml and nginx-elb-svc.yaml files and edit them.

    The file names are user-defined. nginx-deployment.yaml and nginx-elb-svc.yaml are merely example file names.

    vi nginx-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx 
            name: nginx
          imagePullSecrets:
          - name: default-secret

    vi nginx-elb-svc.yaml

    Example of a Service using a dedicated load balancer on a public network:
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
      namespace: default
      annotations:
        kubernetes.io/elb.class: performance
        kubernetes.io/elb.autocreate: '{
          "type": "public",
          "bandwidth_name": "cce-bandwidth-1626694478577",
          "bandwidth_chargemode": "bandwidth",
          "bandwidth_size": 5,
          "bandwidth_sharetype": "PER",
          "eip_type": "5_bgp",
          "vip_subnet_cidr_id": "*****",
          "vip_address": "**.**.**.**",
          "elb_virsubnet_ids": ["*****"],
          "available_zone": [
             ""
          ],
          "l4_flavor_name": "L4_flavor.elb.s1.small"
        }'
        kubernetes.io/elb.enterpriseID: '0'       # ID of the enterprise project to which the load balancer belongs
        kubernetes.io/elb.lb-algorithm: ROUND_ROBIN                   # Load balancer algorithm
        kubernetes.io/elb.session-affinity-mode: SOURCE_IP          # The sticky session type is source IP address.
        kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}'     # Stickiness duration (min)
        kubernetes.io/elb.health-check-flag: 'on'                   # Enable the ELB health check function.
        kubernetes.io/elb.health-check-option: '{
          "protocol":"TCP",
          "delay":"5",
          "timeout":"10",
          "max_retries":"3"
        }'
        kubernetes.io/elb.tags: key1=value1,key2=value2           # ELB resource tags
    spec:
      selector:
        app: nginx
      ports:
      - name: cce-service-0
        targetPort: 80
        nodePort: 0
        port: 80
        protocol: TCP
      type: LoadBalancer

    This example uses annotations to implement some advanced features of load balancing, such as sticky sessions and health check. For details, see Table 6.

    For more annotations and examples related to advanced features, see Using Annotations to Configure Load Balancing.

    Table 6 annotations parameters

    kubernetes.io/elb.class
      Mandatory: Yes
      Type: String
      Description: Select a proper load balancer type. The value can be:
      • performance: dedicated load balancer

    kubernetes.io/elb.autocreate
      Mandatory: Yes
      Type: elb.autocreate object
      Description: Whether to automatically create a load balancer for the Service.
      NOTE:
      • Automatic creation of a public network load balancer: You need to purchase an EIP for the load balancer to allow access over the public network. The load balancer can also be accessed over a private network.
      • Automatic creation of a private network load balancer: You do not need to purchase an EIP for the load balancer. The load balancer can only be accessed over a private network.
      Example:
      • To automatically create a public network load balancer, set this parameter to the following value:
        '{"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james","available_zone": ["cn-east-3a"],"l4_flavor_name": "L4_flavor.elb.s1.small"}'
      • To automatically create a private network load balancer, set this parameter to the following value:
        '{"type":"inner","available_zone": ["cn-east-3a"],"l4_flavor_name": "L4_flavor.elb.s1.small"}'

    kubernetes.io/elb.subnet-id
      Mandatory: -
      Type: String
      Description: ID of the subnet where the cluster is located. The value can contain 1 to 100 characters.
      • Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created.
      • Optional for clusters later than v1.11.7-r0.
      For details about how to obtain the value, see What Is the Difference Between the VPC Subnet API and the OpenStack Neutron Subnet API?

    kubernetes.io/elb.enterpriseID
      Mandatory: No
      Type: String
      Description: ID of the enterprise project in which the ELB load balancer will be created. If this parameter is not specified or is set to 0, resources will be bound to the default enterprise project.
      How to obtain: Log in to the management console and choose Enterprise > Project Management on the top menu bar. In the list displayed, click the name of the target enterprise project and copy the ID on the enterprise project details page.

    kubernetes.io/elb.lb-algorithm
      Mandatory: No
      Type: String
      Description: Specifies the load balancing algorithm of the backend server group. The default value is ROUND_ROBIN. Options:
      • ROUND_ROBIN: weighted round robin algorithm
      • LEAST_CONNECTIONS: weighted least connections algorithm
      • SOURCE_IP: source IP hash algorithm
      NOTE: If this parameter is set to SOURCE_IP, the weight setting (weight field) of the backend servers bound to the backend server group is invalid, and sticky sessions cannot be enabled.

    kubernetes.io/elb.session-affinity-mode
      Mandatory: No
      Type: String
      Description: In source IP address-based sticky sessions, requests from the same IP address are forwarded to the same backend server.
      • Disabling sticky sessions: Do not configure this parameter.
      • Enabling sticky sessions: Set this parameter to SOURCE_IP, indicating that sticky sessions are based on the source IP address.
      NOTE: When kubernetes.io/elb.lb-algorithm is set to SOURCE_IP (source IP hash), sticky sessions cannot be enabled.

    kubernetes.io/elb.session-affinity-option
      Mandatory: No
      Type: Table 4 object
      Description: Sticky session timeout.

    kubernetes.io/elb.health-check-flag
      Mandatory: No
      Type: String
      Description: Whether to enable the ELB health check.
      • Enabling health check: Leave this parameter blank or set it to on.
      • Disabling health check: Set this parameter to off.
      If health check is enabled, the kubernetes.io/elb.health-check-option field must also be specified.

    kubernetes.io/elb.health-check-option
      Mandatory: No
      Type: Table 5 object
      Description: ELB health check configuration items.

    kubernetes.io/elb.tags
      Mandatory: No
      Type: String
      Description: Adds resource tags to a load balancer. This parameter can be configured only when a load balancer is automatically created.
      A tag is in the format of "key=value". Use commas (,) to separate multiple tags.

    Table 7 elb.autocreate data structure

    name
      Mandatory: No
      Type: String
      Description: Name of the automatically created load balancer.
      The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed.
      Default: cce-lb+service.UID

    type
      Mandatory: No
      Type: String
      Description: Network type of the load balancer.
      • public: public network load balancer
      • inner: private network load balancer
      Default: inner

    bandwidth_name
      Mandatory: Yes for public network load balancers
      Type: String
      Description: Bandwidth name. The default value is cce-bandwidth-******.
      The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed.

    bandwidth_chargemode
      Mandatory: No
      Type: String
      Description: Bandwidth billing option.
      • bandwidth: billed by bandwidth
      • traffic: billed by traffic
      Default: bandwidth

    bandwidth_size
      Mandatory: Yes for public network load balancers
      Type: Integer
      Description: Bandwidth size. The value ranges from 1 to 2,000 Mbit/s. Configure this parameter based on the bandwidth range allowed in your region.
      The minimum increment for bandwidth adjustment varies depending on the bandwidth range:
      • The minimum increment is 1 Mbit/s if the allowed bandwidth does not exceed 300 Mbit/s.
      • The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1,000 Mbit/s.
      • The minimum increment is 500 Mbit/s if the allowed bandwidth exceeds 1,000 Mbit/s.

    bandwidth_sharetype
      Mandatory: Yes for public network load balancers
      Type: String
      Description: Bandwidth sharing mode.
      • PER: dedicated bandwidth

    eip_type
      Mandatory: Yes for public network load balancers
      Type: String
      Description: EIP type.
      • 5_telcom: China Telecom
      • 5_union: China Unicom
      • 5_bgp: dynamic BGP
      • 5_sbgp: static BGP

    vip_subnet_cidr_id
      Mandatory: No
      Type: String
      Description: Specifies the subnet where the load balancer is located. The subnet must belong to the VPC where the cluster resides.
      If this parameter is not specified, the ELB load balancer and the cluster are in the same subnet.

    vip_address
      Mandatory: No
      Type: String
      Description: Specifies the private IP address of the load balancer. Only IPv4 addresses are supported.
      The IP address must be in the CIDR block of the load balancer. If this parameter is not specified, an IP address will be automatically assigned from the CIDR block of the load balancer.

    available_zone
      Mandatory: Yes
      Type: Array of strings
      Description: AZ where the load balancer is located.
      You can obtain all supported AZs by querying the AZ list.
      This parameter is available only for dedicated load balancers.

    l4_flavor_name
      Mandatory: Yes
      Type: String
      Description: Flavor name of the Layer-4 load balancer.
      You can obtain all supported flavors by querying the flavor list.
      This parameter is available only for dedicated load balancers.

    l7_flavor_name
      Mandatory: No
      Type: String
      Description: Flavor name of the Layer-7 load balancer.
      You can obtain all supported flavors by querying the flavor list.
      This parameter is available only for dedicated load balancers. The specification type must be the same as that of l4_flavor_name, that is, both must be elastic or both must be fixed.

    elb_virsubnet_ids
      Mandatory: No
      Type: Array of strings
      Description: Subnet where the backend servers of the load balancer are located. If this parameter is left blank, the default cluster subnet is used. Load balancers occupy different numbers of subnet IP addresses based on their specifications. Do not use the subnet CIDR blocks of other resources (such as clusters) as the load balancer CIDR block.
      This parameter is available only for dedicated load balancers.
      Example:
      "elb_virsubnet_ids": [
         "14567f27-8ae4-42b8-ae47-9f847a4690dd"
       ]
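
    For comparison, the following is a minimal sketch of a Service that automatically creates a private network (inner) dedicated load balancer, based on the elb.autocreate example in Table 6. The AZ and flavor values are illustrative and must match those available in your region:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      annotations:
        kubernetes.io/elb.class: performance
        kubernetes.io/elb.autocreate: '{
          "type": "inner",
          "available_zone": ["cn-east-3a"],
          "l4_flavor_name": "L4_flavor.elb.s1.small"
        }'
    spec:
      selector:
        app: nginx
      ports:
      - name: cce-service-0
        port: 80
        targetPort: 80
        protocol: TCP
      type: LoadBalancer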

  3. Create a workload.

    kubectl create -f nginx-deployment.yaml

    If information similar to the following is displayed, the workload has been created.

    deployment/nginx created

    kubectl get pod

    If information similar to the following is displayed, the workload is running.

    NAME                     READY     STATUS             RESTARTS   AGE
    nginx-2601814895-c1xhw   1/1       Running            0          6s

  4. Create a Service.

    kubectl create -f nginx-elb-svc.yaml

    If information similar to the following is displayed, the Service has been created.

    service/nginx created

    kubectl get svc

    If information similar to the following is displayed, the access type has been set, and the workload is accessible.

    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP      10.247.0.1       <none>        443/TCP        3d
    nginx        LoadBalancer   10.247.130.196   10.78.42.242   80:31540/TCP   51s

  5. Enter the URL in the address bar of the browser, for example, 10.78.42.242:80. In the URL, 10.78.42.242 indicates the IP address of the load balancer, and 80 indicates the access port displayed on the CCE console.

    Nginx is accessible.

    Figure 3 Accessing Nginx through the LoadBalancer Service