Applications Created Through Component Templates
If the application in CCE 1.0 was created using a component template, follow the steps in this section to migrate the application to CCE 2.0.
Migration Method
Create workloads in the CCE 2.0 console. Delete applications from CCE 1.0 only after they run successfully in CCE 2.0.
Procedure
- In the navigation pane, choose Workloads > Deployments. Click Create Deployment.
- Set basic workload parameters as described in Table 1. The parameters marked with an asterisk (*) are mandatory.
Table 1 Basic parameters

| Name | Configuration |
| --- | --- |
| * Workload Name | Name of the workload, which must be unique. |
| * Cluster Name | Cluster to which the workload belongs. |
| * Namespace | Namespace to which the new workload belongs. The default value is **default**. |
| * Instances | Number of instances in the workload. Each workload has at least one instance; specify more as required. All pods of a workload run the same containers, so configuring multiple pods improves reliability: if one pod is faulty, the workload can still run properly. |
| Time Zone Synchronization | If this parameter is enabled, the container and the node use the same time zone. NOTICE: After time zone synchronization is enabled, disks of the hostPath type are automatically added and listed in the Data Storage > Local Volume area. Do not modify or delete these disks. |
| Description | Description of the workload. |
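In Kubernetes terms, the basic parameters in Table 1 map onto the metadata and top-level spec of a Deployment manifest. The following sketch is illustrative only (the workload name, labels, and image are assumed values, not ones the console generates):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-workload        # Workload Name (illustrative)
  namespace: default       # Namespace
spec:
  replicas: 2              # Instances
  selector:
    matchLabels:
      app: my-workload
  template:
    metadata:
      labels:
        app: my-workload
    spec:
      containers:
      - name: container-0  # container added in the next step
        image: nginx:latest
```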
- Click Next to add a container.
- Click Add Container and select the image to be deployed.
- Set image parameters according to Table 2.
Table 2 Setting image parameters

| Parameter in CCE 2.0 | Parameter in CCE 1.0 | Configuration |
| --- | --- | --- |
| Image Name | Container Images | Name of the image. You can click Change Image to select another image. |
| Image Version | - | Version of the image to be deployed. |
| Container Name | - | Container name, which is modifiable. |
| Privileged Container | - | Programs in a privileged container have certain privileges. If Privileged Container is enabled, the container is granted superuser permissions; for example, it can manipulate network devices on the host machine and modify kernel parameters. |
| Container Specifications | Memory and CPU | CPU Quota:<br>- Request: minimum number of CPU cores required by a container. The default value is 0.25 cores.<br>- Limit: maximum number of CPU cores available to a container. Always specify a limit; otherwise, a container can consume resources without bound and the workload may behave unexpectedly.<br>Memory Quota:<br>- Request: minimum amount of memory required by a container. The default value is 512 MiB.<br>- Limit: maximum amount of memory available to a container. If memory usage exceeds this limit, the container is terminated.<br>For details, see Setting Container Specifications.<br>GPU (configurable only when the cluster contains GPU nodes):<br>- GPU: percentage of GPU resources reserved for a container. Select Use and set the percentage. For example, if this parameter is set to 10%, the container can use 10% of GPU resources. If you do not select Use, or set this parameter to 0, no GPU resources can be used.<br>- GPU/Graphics Card: the workload's pods are scheduled to a node with the specified GPU. If Any GPU type is selected, the container uses a random GPU on the node; if you select a specific GPU, the container uses that GPU. |
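The CPU and memory quotas above correspond to the container's `resources` field in the pod template. A minimal sketch, using the default request values cited above (the limit values are illustrative assumptions):

```yaml
# Container resource quotas (sketch; requests match the defaults above,
# limits are example values you should size for your workload)
resources:
  requests:
    cpu: 250m        # Request: 0.25 cores
    memory: 512Mi    # Request: 512 MiB
  limits:
    cpu: "1"         # Limit: always set one to bound CPU use
    memory: 1Gi      # Limit: container is terminated if exceeded
```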
- Set environment variables, data storage, and log collection parameters as described in Table 3.
Table 3 Configuring advanced settings

| Parameter in CCE 2.0 | Parameter in CCE 1.0 | Configuration |
| --- | --- | --- |
| Lifecycle | This parameter does not exist in CCE 1.0. For migrated applications, you do not need to set it. | Set the commands for starting and running containers.<br>- Startup Command: executed to start a container. For details, see Setting Container Startup Commands.<br>- Post-Start: executed after a container starts successfully. For details, see Setting Container Lifecycle Parameters.<br>- Pre-Stop: executed to delete logs or temporary files before a container is stopped. For details, see Setting Container Lifecycle Parameters. |
| Health Check | This parameter does not exist in CCE 1.0. For migrated applications, you do not need to set it. | Configure health checks to determine whether containers and services are running properly. Two types of probes are provided. For details, see Setting Health Check for a Container.<br>- Liveness Probe: restarts an unhealthy container.<br>- Readiness Probe: marks a container as unready when it is detected as unhealthy, so that service traffic is not directed to it. |
| Adding environment variables | Adding environment variables | In the Environment Variables area, click Add Environment Variable. Environment variables can be added in any of the following ways:<br>- Added manually: set Variable Name and Variable Value.<br>- Added from Secret: set Variable Name and select the desired secret name and data. You need to create a secret in advance. For details, see Creating a Secret.<br>- Added from ConfigMap: set Variable Name and select the desired ConfigMap name and data. You need to create a ConfigMap in advance. For details, see Creating a ConfigMap. |
| Data Storage | Volume | For applications using the old component template, perform the following operations:<br>1. Choose Data Storage > Local Volume and click Add Local Volume.<br>2. Select HostPath.<br>3. Set the following parameters:<br>- Host Path: path on the host to which the local volume is mounted, corresponding to /tmp in volumes.<br>- Container Path: click Add Container Path and enter the container path to which the data volume is mounted, corresponding to /test in volumes.<br>- Permission: Read/Write.<br>4. When the configuration is complete, click OK. |
| Security Settings | This parameter does not exist in CCE 1.0. For migrated applications, you do not need to set it. | Set container permissions to protect the system and other containers from being affected. Enter a user ID to run the container as that user. |
| Container Logs | This parameter does not exist in CCE 1.0. For migrated applications, you do not need to set it. | Set a log collection policy and log directory to collect application logs and prevent logs from becoming oversized. For details, see Collecting Container Logs from Specified Paths. |
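The advanced settings in Table 3 correspond to fields of the pod template. The following sketch is illustrative, not console output: the commands, probe ports, secret name, and user ID are assumptions, while the /tmp host path and /test container path come from the Data Storage row above.

```yaml
spec:
  containers:
  - name: container-0
    image: nginx:latest
    env:
    - name: LOG_LEVEL              # added manually (illustrative)
      value: "info"
    - name: DB_PASSWORD            # added from a pre-created secret (illustrative)
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
    lifecycle:
      postStart:                   # Post-Start: runs after the container starts
        exec:
          command: ["/bin/sh", "-c", "echo started"]
      preStop:                     # Pre-Stop: clean up before the container stops
        exec:
          command: ["/bin/sh", "-c", "rm -rf /tmp/*.log"]
    livenessProbe:                 # restarts an unhealthy container
      httpGet:
        path: /
        port: 80
    readinessProbe:                # removes an unready container from Service endpoints
      httpGet:
        path: /
        port: 80
    securityContext:
      runAsUser: 1000              # Security Settings: user ID (assumed value)
    volumeMounts:
    - name: local-volume
      mountPath: /test             # container path from Table 3
  volumes:
  - name: local-volume
    hostPath:
      path: /tmp                   # host path from Table 3
```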
- Click Next. Click Add Service and set the workload access type.
If your workload needs to be reachable from other workloads or from public networks, add a Service to define the workload access type.
The workload access type determines the network attributes of the workload. Workloads with different access types can provide different network capabilities. For details, see Network Management.
- Access Type: Select LoadBalancer (ELB).
- Service Name: Specify a service name. You can use the workload name as the service name.
- Service Affinity:
- Cluster level: External traffic is routed to all nodes in the cluster while masking clients' source IP addresses.
- Node level: External traffic is routed to the node where the load balancer used by the service is located, without masking clients' source IP addresses.
- Elastic Load Balancer: A load balancer automatically distributes Internet access traffic across the nodes running the workload. The selected or created load balancer must be in the same VPC as the current cluster and must match the load balancer type (private or public network).
- Public network: Select an existing public network load balancer, or have the system automatically create one. If the system creates the load balancer, you can click Change Configuration to modify its name, EIP type, billing mode, and bandwidth.
- Private network: Select an existing private network load balancer, or have the system automatically create one.
- Port Settings
- Protocol: protocol used by the Service.
- Container Port: a port that is defined in the container image and on which the workload listens. For example, the Nginx application listens on port 80.
- Access Port: a port mapped to the container port at the load balancer's IP address. The workload can be accessed at <load balancer's IP address>:<access port>. The port number range is 1–65535.
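The access settings above correspond to a Service of type LoadBalancer. A minimal sketch, assuming the workload is selected by an `app: my-workload` label and an access port of 8080 (both illustrative); the ELB-specific options set through the console are not shown here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-workload                # Service Name (illustrative)
spec:
  type: LoadBalancer               # Access Type: LoadBalancer (ELB)
  externalTrafficPolicy: Cluster   # Cluster level; use Local for node level
  selector:
    app: my-workload
  ports:
  - protocol: TCP                  # Protocol
    port: 8080                     # Access Port (1-65535)
    targetPort: 80                 # Container Port (e.g., Nginx listens on 80)
```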
- Click OK, and then click Next. Skip advanced settings.
- Click Create after the configuration is complete. Click Back to Deployment List.
If the deployment is in the Running state, the deployment is successfully created.
Workload status is not updated in real time. Click the refresh button in the upper right corner or press F5 to refresh the page.
- To access the workload in a browser, copy the workload's External Access Address and paste it into the address box in the browser.
External access addresses are available only if the Deployment access type is set to NodePort and an EIP is assigned to any node in the cluster, or if the Deployment access type is set to LoadBalancer (ELB).