Creating a LoadBalancer Ingress on the Console
In Kubernetes, an ingress is a resource object that controls how Services within a cluster can be accessed from outside the cluster. You can use ingresses to configure different forwarding rules to access pods in a cluster. The following uses an Nginx workload as an example to describe how to create a LoadBalancer ingress on the console.
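Under the hood, a LoadBalancer ingress is a standard Kubernetes Ingress object. As a rough orientation before the console steps below, a minimal sketch for the Nginx example might look as follows (the resource and Service names are placeholders; CCE-specific settings are added through the console or through the annotations covered later on this page):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo          # placeholder ingress name
spec:
  rules:
    - http:
        paths:
          - path: /healthz    # forwarding policy URL
            pathType: Prefix
            backend:
              service:
                name: nginx   # placeholder: the Service exposing the Nginx workload
                port:
                  number: 80  # destination Service port
```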
Prerequisites
- An available workload has been deployed in the cluster for external access. If no workload is available, deploy a workload by referring to Creating a Deployment, Creating a StatefulSet, or Creating a DaemonSet.
- A Service for external access has been configured for the workload. Services Supported by LoadBalancer Ingresses lists the Service types supported by LoadBalancer ingresses.
Notes and Constraints
- It is recommended that other resources not use the load balancer automatically created by an ingress. Otherwise, when the ingress is deleted, the load balancer will still be occupied by those resources, leaving residual resources behind.
- After an ingress is created, upgrade and maintain the configuration of the selected load balancers on the CCE console. Do not modify the configuration on the ELB console. Otherwise, the ingress service may be abnormal.
- The URL registered in an ingress forwarding policy must be the same as the URL used to access the backend Service. Otherwise, a 404 error will be returned.
- In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. Use different load balancers for the ingress and Service.
- Do not connect an ingress and a Service that uses HTTP to the same listener of the same load balancer. Otherwise, a port conflict occurs.
- A dedicated load balancer must be of the application type (HTTP/HTTPS) and support private networks (with a private IP address).
- If multiple ingresses access the same ELB port in a cluster, the listener configuration items (such as the certificate associated with the listener and the HTTP/2 attribute of the listener) are subject to the configuration of the first ingress.
Adding a LoadBalancer Ingress
This section uses an Nginx workload as an example to describe how to add a LoadBalancer ingress.
- Log in to the CCE console and click the cluster name to access the cluster console.
- Choose Services & Ingresses in the navigation pane, click the Ingresses tab, and click Create Ingress in the upper right corner.
- Configure ingress parameters.
- Name: Customize the name of an ingress, for example, ingress-demo.
- Interconnect with Nginx: This option is displayed only after the NGINX Ingress Controller add-on is installed. Enabling this option creates an Nginx ingress. Disable it if you want to create a LoadBalancer ingress. For details, see Creating an Nginx Ingress on the Console.
- Load Balancer: Select a load balancer type and creation mode.
A load balancer can be dedicated or shared. A dedicated load balancer must be of the application type (HTTP/HTTPS) and support private networks.
You can select Use existing or Auto create to obtain a load balancer. For details about the configuration of different creation modes, see Table 1.
Table 1 Load balancer configurations
Use existing
Only the load balancers in the same VPC as the cluster can be selected. If no load balancer is available, click Create Load Balancer to create one on the ELB console.
Auto create
- Instance Name: Enter a load balancer name.
- Enterprise Project: This parameter is available only for enterprise users who have enabled an enterprise project. Enterprise projects facilitate project-level management and grouping of cloud resources and users.
- AZ: available only to dedicated load balancers. You can deploy a load balancer across multiple AZs to improve service availability.
- Frontend Subnet: available only to dedicated load balancers. It is used to allocate IP addresses for load balancers to provide services externally.
- Backend Subnet: available only to dedicated load balancers. It is used to allocate IP addresses for load balancers to access the backend service.
- Network Specifications, Application-oriented Specifications, or Specifications (available only to dedicated load balancers)
- Fixed: applies to stable traffic, billed based on specifications.
- EIP: If you select Auto create, you can configure the billing mode and size of the public network bandwidth.
- Resource Tag: You can add resource tags to classify resources. You can create predefined tags on the TMS console. The predefined tags are available to all resources that support tags. You can use these tags to improve the tag creation and resource migration efficiency.
- Listener: An ingress configures a listener for the load balancer, which listens for requests to the load balancer and distributes traffic. After the configuration is complete, a listener is created on the load balancer. The default listener name is k8s_<Protocol type>_<Port number>, for example, k8s_HTTP_80.
- External Protocol: HTTP and HTTPS are available.
- External Port: port number that is open to the ELB service address. The port number is configurable.
- Access Control
- Allow all IP addresses: No access control is configured.
- Trustlist: Only the selected IP address group can access the load balancer.
- Blocklist: The selected IP address group cannot access the load balancer.
- Certificate Source: TLS secret and ELB server certificates are supported.
- TLS secret: For details about how to create a secret certificate, see Creating a Secret.
- ELB server certificate: Use a certificate created in the ELB service.
- Server Certificate: When an HTTPS listener is created for a load balancer, bind a certificate to the load balancer to support encrypted authentication for HTTPS data transmission.
If there is already an HTTPS ingress for the chosen port on the load balancer, the certificate of the new HTTPS ingress must be the same as the certificate of the existing ingress. This means that a listener has only one certificate. If two certificates, each with a different ingress, are added to the same listener of the same load balancer, only the certificate added earliest takes effect on the load balancer.
- SNI: Server Name Indication (SNI) is an extension to TLS. SNI allows multiple TLS-compliant domain names to be served for external access on the same IP address and port number, with each domain name able to use its own security certificate. After SNI is enabled, the client submits the requested domain name when initiating a TLS handshake. Upon receiving the TLS request, the load balancer searches for a certificate based on the domain name in the request. If a certificate matching the domain name is found, the load balancer returns that certificate for authorization. Otherwise, the default certificate (server certificate) is returned.
- The SNI option is available only when HTTPS is used.
- This function is supported only in clusters of v1.15.11 or later.
- Only one domain name can be specified for each SNI certificate. Wildcard-domain certificates are supported.
- For ingresses connected to the same ELB port, do not configure SNIs with the same domain name but different certificates. Otherwise, the SNIs will be overwritten.
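When the certificate source is a TLS secret and the ingress is defined in YAML, SNI certificates typically correspond to additional entries in the ingress's spec.tls list, one domain name per certificate. A hedged sketch (the host and secret names are placeholders; verify the exact SNI configuration against the CCE annotation reference for your cluster version):

```yaml
spec:
  tls:
    - hosts:
        - example.com          # placeholder: domain covered by the default server certificate
      secretName: default-server-cert
    - hosts:
        - shop.example.com     # placeholder: SNI certificate; one domain name per certificate
      secretName: shop-cert
```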
- Security Policy: combinations of different TLS versions and supported cipher suites available to HTTPS listeners.
For details about security policies, see Security Policy.
- Security Policy is available only when HTTPS is selected.
- This function is supported only in clusters of v1.17.9 or later.
- Backend Protocol:
When the listener uses HTTP, only HTTP can be selected.
When the listener uses HTTPS, this parameter can be set to HTTP, gRPC, or HTTPS. Only dedicated load balancers support gRPC, and gRPC is available only in certain regions; for details, see the CCE console. After HTTP/2 is enabled, CCE automatically adds the kubernetes.io/elb.http2-enable:true annotation.
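In manifest form, the HTTP/2 annotation that CCE adds automatically appears under the ingress metadata:

```yaml
metadata:
  annotations:
    kubernetes.io/elb.http2-enable: 'true'   # added automatically by CCE when HTTP/2 is enabled
```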
- Advanced Options
  - Transfer Listener Port Number: If this function is enabled, the listening port of the load balancer is passed to backend servers in an HTTP header of the packet. Available only for dedicated load balancers.
  - Transfer Port Number in the Request: If this function is enabled, the source port of the client is passed to backend servers in an HTTP header of the packet. Available only for dedicated load balancers.
  - Rewrite X-Forwarded-Host: If this function is enabled, X-Forwarded-Host is rewritten using the Host field in the client request header and passed to backend servers. Available only for dedicated load balancers.
  - Data Compression: If this function is enabled, specific files are compressed before transmission; otherwise, files are transmitted uncompressed. Available only for dedicated load balancers.
  - Idle Timeout: timeout for an idle client connection. If no request reaches the load balancer within this duration, the load balancer disconnects the client and establishes a new connection when the next request arrives.
  - Request Timeout: timeout for waiting for a request from a client. There are two cases: if the client fails to send a request header to the load balancer within this duration, the request is interrupted; if the interval between two consecutive request bodies reaching the load balancer exceeds this duration, the connection is closed.
  - Response Timeout: timeout for waiting for a response from a backend server. After a request is forwarded to the backend server, if the server does not respond within this duration, the load balancer stops waiting and returns HTTP 504 Gateway Timeout.
  - HTTP/2: whether clients communicate with the load balancer over HTTP/2. Forwarding requests over HTTP/2 improves access performance between your application and the load balancer; however, the load balancer still uses HTTP/1.x to forward requests to backend servers. Available only for HTTPS listeners.
- Redirect to HTTPS: When HTTP is selected for External Protocol, traffic can be redirected to HTTPS. After this function is enabled, you can click Modify to configure the HTTPS port. For details, see the HTTPS-compliant listener configuration.
- Gray release: After an ingress is created, you can create a grayscale release policy in the Operation column of the ingress. For details, see Configuring Grayscale Release for a LoadBalancer Ingress.
- Forwarding Policy: When the access address of a request matches a forwarding policy (a forwarding policy consists of a domain name and URL, for example, 10.117.117.117:80/helloworld), the request is forwarded to the corresponding target Service for processing. You can add multiple forwarding policies.
- Domain Name: Enter an actual domain name to be accessed. If it is left blank, the ingress can be accessed through the IP address. Ensure that the domain name has been registered and licensed. Once a forwarding policy is configured with a domain name specified, you must use the domain name for access.
- Path Matching Rule:
- Prefix match: Any URL beginning with the configured prefix can be accessed. For example, if the URL is set to /healthz, both /healthz/v1 and /healthz/v2 can be accessed.
- Exact match: The URL can be accessed only when it is fully matched. For example, if the URL is set to /healthz, only /healthz can be accessed.
- RegEX match: The URL is matched based on the regular expression. For example, if the regular expression is /[A-Za-z0-9_.-]+/test, all URLs that comply with this rule can be accessed, for example, /abcA9/test and /v1-Ab/test. Two regular expression standards are supported: POSIX and Perl.
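In the Kubernetes Ingress API, prefix and exact matching map to the pathType field of each rule, while regular-expression matching is typically expressed as ImplementationSpecific. A sketch (the Service name is a placeholder):

```yaml
spec:
  rules:
    - http:
        paths:
          - path: /healthz
            pathType: Prefix    # matches /healthz, /healthz/v1, /healthz/v2, ...
            backend:
              service:
                name: nginx     # placeholder Service name
                port:
                  number: 80
          - path: /status
            pathType: Exact     # matches only /status
            backend:
              service:
                name: nginx
                port:
                  number: 80
```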
- Path: access path, for example, /healthz
The access path added here must exist in the backend application. Otherwise, the forwarding fails.
For example, the default web root of the Nginx application is /usr/share/nginx/html. When adding /test to the ingress forwarding policy, ensure that /usr/share/nginx/html/test exists in your Nginx application. Otherwise, a 404 error will be returned.
- Destination Service: Select an existing Service or create a Service. Any Services that do not match the search criteria will be filtered out automatically.
- Destination Service Port: Select the access port of the destination Service.
- Set ELB:
- Algorithm: Three algorithms are available: weighted round robin, weighted least connections, and source IP hash.
- Weighted round robin: Requests are forwarded to different servers based on their weights, which indicate server processing performance. Backend servers with higher weights receive proportionately more requests, whereas equal-weighted servers receive the same number of requests. This algorithm is often used for short connections, such as HTTP services.
- Weighted least connections: In addition to the weight assigned to each server, the number of connections processed by each backend server is considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on least connections, the weighted least connections algorithm assigns a weight to each server based on their processing capability. This algorithm is often used for persistent connections, such as database connections.
- Source IP hash: The source IP address of each request is calculated using the hash algorithm to obtain a unique hash key, and all backend servers are numbered. The generated key allocates the client to a particular server. This enables requests from different clients to be distributed in load balancing mode and ensures that requests from the same client are forwarded to the same server. This algorithm applies to TCP connections without cookies.
- Sticky Session: This function is disabled by default. Options are as follows:
- Load balancer cookie: Enter the Stickiness Duration, which ranges from 1 to 1440 minutes.
- Application cookie: This parameter is available only for shared load balancers. Enter a Cookie Name of 1 to 64 characters.
- When the distribution policy uses the source IP hash, sticky session cannot be set.
- Dedicated load balancers in the clusters of a version earlier than v1.21 do not support sticky sessions. If sticky sessions are required, use shared load balancers.
- Health Check: Set the health check configuration of the load balancer. If this function is enabled, the following parameters are supported:
  - Protocol: When the protocol of the target Service port is TCP, HTTP and gRPC are also supported for health checks. Only dedicated load balancers support gRPC, and gRPC is available only in certain regions. For details, see the CCE console.
    - Check Path (available only for HTTP or gRPC health checks): specifies the health check URL. The check path must start with a slash (/) and contain 1 to 80 characters.
  - Port: By default, the service port (NodePort or container port of the Service) is used for the health check. You can also specify another port, in which case a service port named cce-healthz will be added for the Service.
    - Node Port: If a shared load balancer is used or no ENI instance is associated, the node port is used as the health check port. If this parameter is not specified, a random port is used. The value ranges from 30000 to 32767.
    - Container Port: When a dedicated load balancer is associated with an ENI instance, the container port is used for the health check. The value ranges from 1 to 65535.
  - Check Period (s): maximum interval between health checks. The value ranges from 1 to 50.
  - Timeout (s): maximum timeout duration for each health check. The value ranges from 1 to 50.
  - Max. Retries: maximum number of health check retries. The value ranges from 1 to 10.
- Operation: Click Delete to delete the configuration.
- Actions: Redirect to URL and Rewrite URL are available only for dedicated load balancers.
- Redirect to URL: When an access request meets the forwarding policy, the request will be redirected to a specified URL, and a specific status code will be returned.
- Rewrite URL: When an access request meets the forwarding policy, the URL is rewritten based on the matching rule. You can configure a regular expression for the path matching rule, and capture groups from the regular expression can be used in the rewritten URL. For example, if the regular expression in the forwarding rule is /first/(.*)/(.*)/end and the rewrite URL is set to /${1}/${2}, then for a client request to /first/aaa/bbb/end, the forwarding rule matches, ${1} is replaced with aaa, ${2} is replaced with bbb, and the request path received by the backend server is /aaa/bbb.
- Annotation: Ingresses provide some advanced CCE functions, which are implemented by annotations. When you use kubectl to create an ingress, these annotations are used. For details, see Automatically Creating a Load Balancer While Creating an Ingress or Associating an Existing Load Balancer to an Ingress While Creating the Ingress.
- Click OK. After the ingress is created, it is displayed in the ingress list.
On the ELB console, you can check the load balancer automatically created through CCE. The default name is cce-lb-<ingress.UID>. Click the load balancer name to go to the details page. On the Listeners tab page, check the listener and forwarding policy of the target ingress.
After an ingress is created, upgrade and maintain the selected load balancer on the CCE console. Do not modify the configuration on the ELB console. Otherwise, the ingress service may be abnormal.
Figure 1 LoadBalancer ingress configuration
- Access the /healthz interface of the workload (for example, the defaultbackend workload).
- Obtain the access address of the /healthz interface of the workload. The access address consists of the load balancer IP address, external port, and mapping URL, for example, 10.**.**.**:80/healthz.
- Enter the URL of the /healthz interface, for example, http://10.**.**.**:80/healthz, in the address box of the browser to access the workload, as shown in Figure 2.
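Putting the procedure together, the console settings above roughly correspond to a manifest like the following when an existing load balancer is used. The annotation names and the ingress class are assumptions based on the annotation-based interconnection referenced under Annotation; verify them against Associating an Existing Load Balancer to an Ingress While Creating the Ingress before use.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    kubernetes.io/elb.id: <existing-load-balancer-id>   # assumed annotation for an existing ELB
    kubernetes.io/elb.port: '80'                        # assumed annotation for the external port
spec:
  ingressClassName: cce      # assumed class name for LoadBalancer ingresses
  rules:
    - http:
        paths:
          - path: /healthz   # forwarding policy URL; must exist in the backend application
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx  # placeholder: the Service created for the Nginx workload
                port:
                  number: 80 # destination Service port
```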
Related Operations
The Kubernetes ingress structure does not contain the property field. Therefore, the ingress created by the API called by client-go does not contain the property field. CCE provides a solution to ensure compatibility with the Kubernetes client-go. For details about the solution, see How Can I Achieve Compatibility Between Ingress's property and Kubernetes client-go?