Creating a Deployment (Nginx)
You can use images to quickly create a single-pod workload that can be accessed from public networks. This section describes how to use CCE to quickly deploy an Nginx application and manage its lifecycle.
Prerequisites
You have created a CCE cluster that contains a node with 4 vCPUs and 8 GiB memory. The node is bound with an EIP.
A cluster is a logical group of cloud servers that run workloads. Each cloud server is a node in the cluster.
For details on how to create a cluster, see Creating a Kubernetes Cluster.
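If you also plan to follow the kubectl-based steps later in this section, you can optionally confirm that the node is ready before you start. This is a minimal check and assumes kubectl has already been configured for the cluster (see Connecting to a Cluster Using kubectl):
kubectl get nodes
The node should be displayed in the Ready state.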
Nginx Overview
Nginx is a lightweight web server. On CCE, you can quickly set up an Nginx web server.
This section uses the Nginx application as an example to describe how to create a workload. The creation takes about 5 minutes.
After Nginx is created successfully, you can access the Nginx web page.
Creating Nginx on the Console
The following is the procedure for creating a containerized workload from a container image.
- Log in to the CCE console.
- Choose the target cluster.
- In the navigation pane, choose Workloads. Then, click Create Workload.
- Configure the following parameters and retain the default values for the other parameters:
Basic Info
- Workload Type: Select Deployment.
- Workload Name: Set it to nginx.
- Namespace: Select default.
- Pods: Set the quantity of pods to 1.
Container Settings
In the Basic Info area of Container Information, click Select Image. In the dialog box displayed, select Open Source Images, search for nginx, and select the nginx image.
Figure 2 Selecting the nginx image
Service Settings
Click the plus sign (+) to create a Service for accessing the workload from an external network. In this example, create a LoadBalancer Service. Configure the following parameters:
- Service Name: name of the Service exposed to external networks. In this example, the Service name is nginx.
- Access Type: Select LoadBalancer.
- Service Affinity: Retain the default value.
- Load Balancer: If a load balancer is available, select an existing load balancer. If not, select Auto create to create one.
- Port:
- Protocol: Select TCP.
- Service Port: Set this parameter to 8080, which is mapped to the container port.
- Container Port: port on which the application listens. For containers created using the nginx image, set this parameter to 80. For other applications, set this parameter to the port of the application.
Figure 3 Creating a Service
- Click Create Workload.
Wait until the workload is created.
The created Deployment will be displayed on the Deployments tab.
Figure 4 Workload created successfully
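Optionally, if kubectl has been configured for the cluster (see Connecting to a Cluster Using kubectl), you can verify from the command line that the console created the Deployment and the Service. This is a sketch only, assuming the names used above (nginx in the default namespace):
kubectl get deployment nginx -n default
kubectl get service nginx -n default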
Accessing Nginx
- Obtain the external access address of Nginx.
Click the nginx workload to go to its details page. On the Access Mode tab, view the IP address of Nginx. The public IP address is the external access address.
Figure 5 Obtaining the external access address
- Enter the external access address in the address box of a browser. If the access is successful, the Nginx welcome page is displayed, as shown in the following figure. You can also check the access from a terminal, as shown in the sketch after this procedure.
Figure 6 Accessing Nginx
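To test the access from a terminal instead of a browser, you can use curl. The following is a sketch only; replace <EIP> with the public IP address obtained above (8080 is the Service port configured in this example):
curl http://<EIP>:8080
If the access is successful, the response contains the Nginx welcome page.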
Creating Nginx Using kubectl
This section describes how to use kubectl to create a Deployment and expose the Deployment to the Internet through a LoadBalancer Service.
- Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
- Create a description file named nginx-deployment.yaml. nginx-deployment.yaml is an example file name. You can rename it as required.
vi nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
      imagePullSecrets:
      - name: default-secret
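Optionally, you can check the file for syntax errors before creating the Deployment. This client-side dry run does not create any resources and is not part of the required procedure:
kubectl create -f nginx-deployment.yaml --dry-run=client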
- Create a Deployment.
kubectl create -f nginx-deployment.yaml
If the following information is displayed, the Deployment is being created.
deployment "nginx" created
Check the Deployment.
kubectl get deployment
If the following information is displayed, the Deployment is running.
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           4m5s
Parameter description
- NAME: specifies the name of a workload.
- READY: indicates the number of ready pods/expected pods for the workload.
- UP-TO-DATE: indicates the number of replicas that have been updated.
- AVAILABLE: indicates the number of available pods.
- AGE: indicates the running period of the Deployment.
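You can also list the pods created by the Deployment. The label in the following command is the matchLabels value (app: nginx) defined in nginx-deployment.yaml:
kubectl get pods -l app=nginx
The pod should be in the Running state.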
- Create a description file named nginx-elb-svc.yaml. Set selector to the matchLabels value in the nginx-deployment.yaml file (app: nginx in this example) to associate the Service with the backend pods.
vi nginx-elb-svc.yaml
For details about the parameters in the following example, see Using kubectl to Create a Service (Automatically Creating a Load Balancer).
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/elb.class: union
    kubernetes.io/elb.autocreate: '{ "type": "public", "bandwidth_name": "cce-bandwidth", "bandwidth_chargemode": "bandwidth", "bandwidth_size": 5, "bandwidth_sharetype": "PER", "eip_type": "5_bgp" }'
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
- Create a Service.
kubectl create -f nginx-elb-svc.yaml
If information similar to the following is displayed, the Service has been created.
service/nginx created
Check the Service.
kubectl get svc
If information similar to the following is displayed, the access type has been configured, and the workload is accessible.
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.247.0.1       <none>         443/TCP        3d
nginx        LoadBalancer   10.247.130.196   10.78.42.242   80:31540/TCP   51s
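To confirm that the Service has selected the Nginx pod as its backend, you can optionally check the Service endpoints. This is an extra check, not part of the original procedure:
kubectl get endpoints nginx
The output should list the IP address and port (80) of the Nginx pod.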
- Enter the URL (for example, 10.78.42.242:80) in the address box of a browser. 10.78.42.242 indicates the IP address of the load balancer, and 80 indicates the access port displayed on the CCE console.
Nginx is accessible.
Figure 7 Accessing Nginx through the LoadBalancer Service