Creating an Nginx Ingress on the Console
Prerequisites
- A workload is available in the cluster (because an Ingress enables network access for workloads). If no workload is available, deploy a workload by referring to Creating a Workload.
- A ClusterIP Service has been configured for the workload, as shown in the example after this list. For details about how to configure the Service, see ClusterIP.
- To add an Nginx Ingress, ensure that the NGINX Ingress Controller add-on has been installed in the cluster. For details, see Installing the Add-on.
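For reference, the following is a minimal sketch of such a ClusterIP Service. It assumes the workload's Pods carry the label app: nginx and listen on container port 80; these names are illustrative and must match your actual workload (a matching example Deployment is sketched under Procedure).

apiVersion: v1
kind: Service
metadata:
  name: nginx                 # illustrative Service name, selected later as the destination Service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: nginx                # must match the Pod labels of your workload
  ports:
    - name: http
      port: 80                # Service port selected later as the destination Service port
      targetPort: 80          # container port of the workload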
Constraints
- Avoid modifying the load balancer configuration on the ELB console. If you modify it there, the Service will become abnormal. If you have already modified the configuration, uninstall NGINX Ingress Controller and then reinstall it.
- The URL specified in an Ingress forwarding policy must be the same as the URL used to access the backend Service. Otherwise, a 404 error will be returned.
- The selected or created load balancer must be in the same VPC as the current cluster, and its network type (public or private) must match the required access type.
- The load balancer must have capacity for at least two more listeners, and ports 80 and 443 must not be occupied by existing listeners.
Procedure
An Nginx workload is used as an example to describe how to create an Nginx Ingress.
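If no such workload exists yet, a minimal Nginx Deployment similar to the following sketch can be used for this example. The name, labels, and image are illustrative and match the Service example in Prerequisites.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx              # matched by the ClusterIP Service selector
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest # illustrative image
          ports:
            - containerPort: 80   # port targeted by the Service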
- Log in to the CCE console and click the cluster name to access the cluster console.
- In the navigation pane on the left, choose Services & Ingresses. On the Ingresses tab, click Create Ingress in the upper right corner.
- Configure parameters.
- Name: Enter a name for the Ingress, for example, nginx-ingress-demo.
- Namespace: Select the namespace to which the Ingress is to be added.
- nginx-ingress: This option is displayed only after the NGINX Ingress Controller add-on is installed in the cluster.
- External Protocol: The options are HTTP and HTTPS. If NGINX Ingress Controller is installed, the default port is 80 for HTTP and 443 for HTTPS. To use HTTPS, configure a server certificate.
- Certificate Source: the source of the certificate used to encrypt and authenticate HTTPS data transmission.
- If you select a TLS key, you must create a key certificate of the IngressTLS or kubernetes.io/tls type beforehand. For details, see Creating a Secret.
- If you select the default certificate, NGINX Ingress Controller will use its default certificate for encryption and authentication.
- SNI: Server Name Indication (SNI) is a TLS extension that allows multiple domain names to be served over the same IP address and port, with each domain name able to use its own certificate. If SNI is enabled, the client submits the requested domain name when initiating a TLS handshake. After receiving the request, the load balancer searches for a certificate based on that domain name. If a matching certificate is found, the load balancer returns it for authentication; otherwise, it returns the default certificate (server certificate).
- Forwarding Policy: When the access address of a request matches the forwarding policy (a forwarding policy consists of a domain name and URL), the request is forwarded to the corresponding target Service for processing. Click Add Forwarding Policies to add multiple forwarding policies.
- Domain Name: Enter the domain name used for access. Ensure that the entered domain name has been registered and licensed. After the Ingress is created, bind the domain name to the IP address of the automatically created load balancer (the IP address in the Ingress access address). If a domain name is configured in a rule, that domain name must always be used for access.
- Path Matching Rule
- Default: Prefix match is used by default.
- Prefix match: If the URL is set to /healthz, any URL that starts with this prefix can be accessed, for example, /healthz/v1 and /healthz/v2.
- Exact match: The URL can be accessed only when it is fully matched. For example, if the URL is set to /healthz, only /healthz can be accessed.
- Path: access path, for example, /healthz.
NOTE:
- Nginx Ingress matches access paths by slash (/)-separated path prefix, and the match is case-sensitive. A request path matches if its slash-separated elements begin with the configured prefix. If the configured prefix matches only part of an element in the request path, the request does not match. For example, if the URL is set to /healthz, /healthz/v1 is matched, but /healthzv1 is not.
- The access path added here must exist in the backend applications. If it does not exist, requests will fail to be forwarded.
For example, the default document root of the Nginx application is /usr/share/nginx/html. If you add /test to the Ingress forwarding policy, ensure that your Nginx application can serve /usr/share/nginx/html/test. Otherwise, 404 will be returned.
- Destination Service: Select an existing Service or create a Service. Services that do not meet search criteria are automatically filtered out.
- Destination Service Port: Select the access port of the destination Service.
- Operation: Click Delete to delete the configuration.
- Annotation: Enter annotations in the key: value format. You can use annotations to configure the capabilities supported by Nginx Ingress (see the example manifest after this procedure).
- Click OK.
After the Ingress is created, it is displayed in the Ingress list.
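For reference, the following sketch shows roughly what the configuration created above corresponds to in YAML, assuming an HTTPS Ingress named nginx-ingress-demo that forwards requests for the hypothetical domain foo.example.com and path /healthz to the ClusterIP Service nginx on port 80. The names, domain, annotation, and certificate placeholders are illustrative; depending on the cluster version, the NGINX Ingress class may instead be selected with the kubernetes.io/ingress.class: nginx annotation.

# TLS secret referenced by the Ingress (the IngressTLS type mentioned above can also be used).
apiVersion: v1
kind: Secret
metadata:
  name: ingress-tls-secret        # illustrative Secret name
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # replace with your certificate
  tls.key: <base64-encoded private key>   # replace with your private key
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-demo
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # example annotation supported by NGINX Ingress Controller
spec:
  ingressClassName: nginx          # handled by the NGINX Ingress Controller add-on
  tls:
    - hosts:
        - foo.example.com          # illustrative domain name
      secretName: ingress-tls-secret
  rules:
    - host: foo.example.com
      http:
        paths:
          - path: /healthz         # must exist in the backend application
            pathType: Prefix       # corresponds to Prefix match; use Exact for Exact match
            backend:
              service:
                name: nginx        # destination ClusterIP Service
                port:
                  number: 80       # destination Service port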
Parent Topic: Nginx Ingresses