Updated on 2023-09-19 GMT+08:00

Differences Between Dedicated and Shared Load Balancers

Each type of load balancer has its own advantages.

Feature Comparison

Dedicated load balancers provide more powerful forwarding performance, while shared load balancers are less expensive. You can select the appropriate load balancer based on your application needs. The following tables compare the features supported by the two types of load balancers. (√ indicates that an item is supported, and x indicates that an item is not supported.)

Table 1 Performance

| Item | Dedicated Load Balancers | Shared Load Balancers |
| --- | --- | --- |
| Deployment mode | Dedicated load balancers have underlying resources of their own, so their performance is not affected by other load balancers. You can select different specifications based on your requirements. | Shared load balancers are deployed in clusters, and all the load balancers share underlying resources, so the performance of one load balancer is affected by the others. |
| Concurrent connections | A dedicated load balancer in one AZ can establish up to 20 million concurrent connections. If you deploy it in two AZs, this number doubles, so the load balancer can handle up to 40 million concurrent connections. | - |

NOTE:
  • If requests come from the Internet, the load balancer in each AZ you select routes the requests based on source IP addresses. Deploying a load balancer in two AZs doubles the number of requests it can handle.
  • For requests from a private network:
    • If clients are in an AZ you selected when creating the load balancer, requests are distributed by the load balancer in that AZ. If that load balancer is unhealthy, requests are distributed by the load balancer in another AZ you selected.

      If the load balancer is healthy but the connections it needs to handle exceed the amount defined in its specifications, services may be interrupted. To address this issue, you need to upgrade the specifications. You can monitor private network traffic usage by AZ.

    • If clients are in an AZ that you did not select when creating the load balancer, requests are distributed by the load balancer in each selected AZ based on source IP addresses.
  • If requests come over a Direct Connect connection, the load balancer in the same AZ as the Direct Connect connection routes the requests. If that load balancer is unavailable, requests are distributed by the load balancer in another AZ.
  • If clients are in a VPC different from the one the load balancer works in, the load balancer in the AZ where the original VPC subnet resides routes the requests. If that load balancer is unavailable, requests are distributed by the load balancer in another AZ.

Table 2 Supported protocols

| Protocol | Description | Dedicated Load Balancers | Shared Load Balancers |
| --- | --- | --- | --- |
| QUIC | If you use UDP as the frontend protocol, you can select QUIC as the backend protocol and use the connection ID algorithm to route requests with the same connection ID to the same backend server. QUIC features low latency, high reliability, and no head-of-line (HOL) blocking, making it well suited to the mobile Internet. No new connections need to be established when a client switches between a Wi-Fi network and a mobile network. | √ | x |
| TCP/UDP (Layer 4) | After receiving TCP or UDP requests from clients, the load balancer routes them directly to backend servers. Load balancing at Layer 4 features high routing efficiency. | √ | √ |
| HTTP/HTTPS (Layer 7) | After receiving a request, the listener identifies it and forwards data based on fields in the HTTP/HTTPS packet header. Although the routing efficiency is lower than at Layer 4, load balancing at Layer 7 provides advanced features such as encrypted transmission and cookie-based sticky sessions. | √ | √ |
| WebSocket | WebSocket is an HTML5 protocol that provides full-duplex communication between the browser and the server. WebSocket saves server resources and bandwidth and enables real-time communication. | √ | √ |
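The difference between Layer-4 and Layer-7 forwarding described above can be sketched in a few lines: a Layer-4 decision uses only transport-level fields, while a Layer-7 listener parses HTTP header fields such as Host before choosing a backend server group. This is a minimal illustration, not the load balancer's implementation; the function names and backend group names are hypothetical.

```python
# Sketch only: contrast Layer-4 and Layer-7 routing decisions.
# Backend group names are hypothetical examples.

def route_l4(dst_port: int) -> str:
    """Layer 4: the decision uses only packet-level fields (here, the port)."""
    return "tcp-backend-group" if dst_port == 80 else "udp-backend-group"

def route_l7(raw_request: bytes) -> str:
    """Layer 7: the listener parses HTTP headers (here, Host) to pick a group."""
    lines = raw_request.decode("iso-8859-1").split("\r\n")
    host = next((ln.split(":", 1)[1].strip()
                 for ln in lines if ln.lower().startswith("host:")), "")
    return "static-content-group" if host.startswith("static.") else "app-group"

request = b"GET /img/logo.png HTTP/1.1\r\nHost: static.example.com\r\n\r\n"
print(route_l4(80))        # decision from transport-level information only
print(route_l7(request))   # decision from an HTTP header field
```

The extra parsing in `route_l7` is what makes Layer-7 routing slower than Layer-4 but also what enables header-based features such as cookie-based sticky sessions.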

Table 3 Supported backend types

| Backend Type | Description | Dedicated Load Balancers | Shared Load Balancers |
| --- | --- | --- | --- |
| IP as backend servers | Using server IP addresses, you can add servers in a VPC connected by a VPC peering connection, in a VPC connected through a cloud connection, or in an on-premises data center at the other end of a Direct Connect or VPN connection. Incoming traffic can then be flexibly distributed to cloud servers and on-premises servers for hybrid load balancing. | √ | x |
| Supplementary network interface | You can attach supplementary network interfaces to backend servers. Supplementary network interfaces supplement elastic network interfaces: when the number of elastic network interfaces attached to a backend server reaches the limit, supplementary network interfaces can be attached to the VLAN interfaces of those elastic network interfaces. This allows you to attach more network interfaces to a single backend server for flexible, highly available network configuration. | √ | x |
| ECS | You can use load balancers to distribute incoming traffic across ECSs. | √ | √ |
| BMS | You can use load balancers to distribute incoming traffic across BMSs. | √ | √ |

Table 4 Advanced features

| Feature | Description | Dedicated Load Balancers | Shared Load Balancers |
| --- | --- | --- | --- |
| Multiple specifications | You can select appropriate specifications for a load balancer based on your requirements. | √ | x |
| HTTPS support | Load balancers can receive HTTPS requests from clients and route them to an HTTPS backend server group. | √ | x |
| IPv6 addresses | Load balancers can route requests from IPv6 clients. You can change the IPv6 address bound to a load balancer or unbind the IPv6 address from the load balancer. | √ | x |
| Changing the private IPv4 address bound to the load balancer | You can change the private IPv4 address bound to a load balancer. | √ | x |
| Slow start | You can enable slow start for HTTP or HTTPS listeners. The load balancer then linearly increases the proportion of requests sent to backend servers in slow start mode. Slow start gives applications time to warm up so they can respond to requests with optimal performance. | √ | x |
| Mutual authentication | Both a server certificate and a client certificate need to be deployed. Mutual authentication is supported only by HTTPS listeners. | √ | √ |
| Custom timeout durations | You can configure and modify the timeout durations (idle timeout, request timeout, and response timeout) of your listeners to meet varied demands. For example, if an HTTP or HTTPS request from a client is large, you can increase the request timeout so that the request can be successfully routed. | √ (TCP, UDP, HTTP, and HTTPS listeners) | √ (TCP, HTTP, and HTTPS listeners only; timeout durations of UDP listeners cannot be changed) |
| Security policies | When you add HTTPS listeners, you can select appropriate security policies to improve service security. A security policy is a combination of TLS protocols and cipher suites. | √ | √ |
| Passing the listener's port number to backend servers | The listener's port number is stored in the X-Forwarded-Port header and passed to backend servers. | √ | √ |
| Passing the client's port number to backend servers | The client's port number is stored in the X-Forwarded-For-Port header and passed to backend servers. | √ | √ |
| Rewriting X-Forwarded-Host | If this option is disabled, the load balancer passes the client's X-Forwarded-Host field to backend servers unchanged. If it is enabled, the load balancer rewrites X-Forwarded-Host based on the Host field in the request header sent by the client and sends the rewritten field to backend servers. | √ | √ |
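On the backend side, the forwarded metadata described above arrives as ordinary request headers. The sketch below shows how a backend application might read them; the header names (X-Forwarded-Port, X-Forwarded-For-Port, X-Forwarded-Host) come from the table, while the WSGI application itself is a hypothetical example.

```python
# Illustrative sketch: a backend server behind the load balancer reading
# the forwarded headers listed in Table 4. The app is a minimal WSGI
# example, not part of the load balancer service.

def app(environ, start_response):
    # WSGI exposes "X-Forwarded-Port" as HTTP_X_FORWARDED_PORT, and so on.
    listener_port = environ.get("HTTP_X_FORWARDED_PORT", "")      # listener's port
    client_port   = environ.get("HTTP_X_FORWARDED_FOR_PORT", "")  # client's port
    original_host = environ.get("HTTP_X_FORWARDED_HOST", "")      # client's Host
    body = f"listener={listener_port} client={client_port} host={original_host}"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode()]
```

If X-Forwarded-Host rewriting is enabled on the listener, `HTTP_X_FORWARDED_HOST` here reflects the Host header the client originally sent, which is useful for generating absolute URLs behind the load balancer.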

Table 5 Other features

| Feature | Description | Dedicated Load Balancers | Shared Load Balancers |
| --- | --- | --- | --- |
| Customized cross-AZ deployment | You can create a load balancer in multiple AZs. Each AZ selects an optimal path to process requests, and the AZs back each other up, improving service processing efficiency and reliability. Deploying a load balancer in multiple AZs multiplies its performance, such as the number of new connections and the number of concurrent connections. For example, a dedicated load balancer deployed in two AZs can handle up to 40 million concurrent connections. For how requests are routed across AZs, see the note under Table 1. | √ | x |
| Connection ID | Load balancers can use the connection ID algorithm to route requests. A consistent hash of the connection ID in the packet produces a specific value, and the backend servers are numbered; the generated value determines which backend server receives the requests. | √ | x |
| Load balancing algorithms | Load balancers support weighted round robin, weighted least connections, and source IP hash. | √ | √ |
| Load balancing over public and private networks | Each load balancer on a public network has a public IP address bound to it and routes requests from clients to backend servers over the Internet. Load balancers on a private network work within a VPC and route requests from clients to backend servers in the same VPC. | √ | √ |
| Modifying the bandwidth | You can modify the bandwidth used by the EIP bound to the load balancer as required. | √ | √ |
| Binding/Unbinding an IP address | You can bind an IP address to a load balancer or unbind an IP address from it based on service requirements. | √ | √ |
| Sticky session | If you enable sticky sessions, requests from the same client are routed to the same backend server during the session. | √ | √ |
| Access control | You can add IP addresses to a whitelist or blacklist to control access to a listener. A whitelist allows the specified IP addresses to access the listener, while a blacklist denies them access. | √ | √ |
| Health check | Load balancers periodically send requests to backend servers to check whether they can process requests. | √ | √ |
| Certificate management | You can create two types of certificates: server certificates and CA certificates. An HTTPS listener requires a server certificate bound to it; to enable mutual authentication, you also need to bind a CA certificate to the listener. You can also replace a certificate that is already used by a load balancer. | √ | √ |
| Tagging | If you have a large number of cloud resources, you can assign different tags to them to quickly identify the resources and manage them easily. | √ | √ |
| Monitoring | You can use Cloud Eye to monitor load balancers and associated resources and view metrics on the management console. | √ | √ |
| Log auditing | You can use Cloud Trace Service (CTS) to record operations on load balancers and associated resources for query, auditing, and backtracking. | √ | √ |
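The connection ID routing described in Table 5 (hash the connection ID consistently, then map the result to a numbered backend server) can be sketched as a simple hash ring. This is a simplified illustration of the general technique, not the service's actual implementation; the backend names and the number of virtual nodes are hypothetical.

```python
import hashlib
from bisect import bisect_right

# Simplified sketch of consistent-hash routing by connection ID.
# Backend names and vnode count are illustrative assumptions.

def _hash(key: bytes) -> int:
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big")

class ConsistentHashRing:
    def __init__(self, backends, vnodes=100):
        # Place several virtual nodes per backend on the ring so load
        # spreads evenly and stays mostly stable when backends change.
        self.ring = sorted(
            (_hash(f"{b}#{i}".encode()), b)
            for b in backends for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    def route(self, connection_id: bytes) -> str:
        # The same connection ID always hashes to the same point on the
        # ring, so it always maps to the same backend server.
        idx = bisect_right(self.keys, _hash(connection_id)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["backend-1", "backend-2", "backend-3"])
cid = b"example-quic-connection-id"
assert ring.route(cid) == ring.route(cid)  # same ID -> same backend
```

This stickiness by connection ID is what lets a QUIC client keep reaching the same backend server even when its source IP address changes, such as when switching between Wi-Fi and a mobile network.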