Creating a Container Workload

Updated on 2024-01-26 GMT+08:00

This section describes how to deploy a workload on CCE. If you are using CCE for the first time, first create a cluster and add a node to it.

NOTE:

All containerized workloads are deployed in a similar way. The differences lie in:

  • Whether environment variables need to be set.
  • Whether cloud storage is used.

Required Cloud Services

  • Cloud Container Engine (CCE): a highly reliable and high-performance service that allows enterprises to manage containerized applications. With support for Kubernetes-native applications and tools, CCE makes it simple to set up an environment for running containers in the cloud.
  • Elastic Cloud Server (ECS): a scalable and on-demand cloud server. It helps you to efficiently set up reliable, secure, and flexible application environments, ensuring stable service running and improving O&M efficiency.
  • Virtual Private Cloud (VPC): an isolated and private virtual network environment that users apply for in the cloud. You can configure the IP address ranges, subnets, and security groups, as well as assign elastic IP addresses and allocate bandwidth in a VPC.

Basic Concepts

  • A cluster is a collection of computing resources, including a group of node resources. A container runs on a node. Before creating a containerized application, you must have an available cluster.
  • A node is a virtual or physical machine that provides computing resources. You must have sufficient node resources to ensure successful operations such as creating applications.
  • A workload indicates a group of container pods running on CCE. CCE supports third-party application hosting and provides the full lifecycle (from deployment to O&M) management for applications. This section describes how to use a container image to create a workload.
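In Kubernetes terms, a workload like the one created in this section is typically a Deployment that manages a group of pods. As a rough sketch (the names and image below are placeholders for illustration, not values from this walkthrough), a minimal Deployment manifest looks like this:

```yaml
# Minimal Deployment sketch: one workload managing a single pod.
# All names and the image below are placeholders for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload
spec:
  replicas: 1                 # number of pods
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      containers:
      - name: container-0
        image: nginx:latest   # placeholder container image
```

The CCE console generates an equivalent manifest from the settings you enter, so you do not need to write one by hand for this walkthrough.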

Procedure

  1. Prepare the environment as described in Table 1.

    Table 1 Preparing the environment

    1. Creating a VPC

      Create a VPC before you create a cluster. A VPC provides an isolated, configurable, and manageable virtual network environment for CCE clusters. If you have a VPC already, skip to the next task.

      1. Log in to the management console.
      2. In the service list, choose Networking > Virtual Private Cloud.
      3. On the Dashboard page, click Create VPC.
      4. Follow the instructions to create a VPC. Retain the default settings for parameters unless otherwise specified.

    2. Creating a key pair

      Create a key pair before you create a containerized application. Key pairs are used for identity authentication during remote login to a node. If you have a key pair already, skip this task.

      1. Log in to the management console.
      2. In the service list, choose Data Encryption Workshop under Security & Compliance.
      3. In the navigation pane, choose Key Pair Service. On the Private Key Pairs tab, click Create Key Pair.
      4. Enter a key pair name, select I agree to have the private key managed on the cloud and I have read and agree to the Key Pair Service Disclaimer, and click OK.
      5. In the dialog box displayed, click OK.

        View and save the key pair. For security purposes, a key pair can be downloaded only once. Keep it secure to ensure successful login.

  2. Create a cluster and a node.

    1. Log in to the CCE console. Choose Clusters. On the displayed page, select the type of the cluster to be created and click Create.

      Configure cluster parameters and select the VPC created in 1.

    2. Buy a node and select the key pair created in 1 as the login mode.

  3. Deploy a workload on CCE.

    1. Log in to the CCE console, click the created cluster, choose Workloads in the navigation pane, and click Create Workload in the upper right corner.
    2. Configure the following parameters, and retain the default settings for other parameters:
      • Workload Name: Set it to apptest.
      • Pods: Set it to 1.
    3. In the Container Settings area, select the image uploaded in Building and Uploading an Image.
    4. In the Container Settings area, choose Environment Variables and add environment variables for interconnecting with the MySQL database. The environment variables are set in the startup script.
      NOTE:

      In this example, interconnection with the MySQL database is implemented through configuring the environment variables. Determine whether to use environment variables based on your service requirements.

      Table 2 Configuring environment variables

        • MYSQL_DB: name of the database.
        • MYSQL_URL: IP address and port number of the database.
        • MYSQL_USER: database username.
        • MYSQL_PASSWORD: database user password.

    5. In the Container Settings area, choose Data Storage and configure cloud storage for persistent data storage.
      NOTE:

      In this example, the MongoDB database is used and persistent data storage is also needed, so you need to configure cloud storage. Determine whether to use cloud storage based on your service requirements.

      The mounted path must be the same as the MongoDB storage path in the Docker startup script. For details, see the startup script. In this example, the path is /usr/local/mongodb/data.

    6. In the Service Settings area, add a Service, configure the workload access parameters, and click OK.
      NOTE:

      In this example, the application will be accessible from public networks by using an elastic IP address.

      • Service Name: name of the application that can be accessed externally. In this example, this parameter is set to apptest.
      • Service Type: In this example, select NodePort.
      • Service Affinity
        • Cluster-level: The IP addresses and access ports of all nodes in a cluster can be used to access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained.
        • Node-level: Only the IP address and access port of the node where the workload is located can be used to access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained.
      • Port
        • Protocol: Set it to TCP.
        • Service Port: port for accessing the Service.
        • Container Port: port that the application listens on in the container. In this example, this parameter is set to 8080.
        • Node Port: Set it to Auto. The system automatically opens a real port on all nodes in the current cluster and then maps the port number to the container port.
    7. Click Create Workload.

      After the workload is created, you can view the running workload in the workload list.
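Taken together, the console settings in this procedure correspond roughly to the following Kubernetes manifests. This is a sketch for orientation only: the image path, the environment variable values, and the PVC name are assumptions, and the console may add further CCE-specific fields.

```yaml
# Sketch of the manifests the console settings above roughly correspond to.
# The image path, env values, and PVC name are placeholders/assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apptest
spec:
  replicas: 1                        # Pods: 1
  selector:
    matchLabels:
      app: apptest
  template:
    metadata:
      labels:
        app: apptest
    spec:
      containers:
      - name: container-0
        image: <your-image>          # image uploaded in Building and Uploading an Image
        ports:
        - containerPort: 8080        # Container Port
        env:                         # Table 2 variables; values are examples only
        - name: MYSQL_DB
          value: mydb
        - name: MYSQL_URL
          value: 192.168.0.100:3306
        - name: MYSQL_USER
          value: appuser
        - name: MYSQL_PASSWORD
          value: "********"
        volumeMounts:
        - name: data
          mountPath: /usr/local/mongodb/data   # must match the path in the startup script
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: apptest-data    # placeholder PVC backed by cloud storage
---
apiVersion: v1
kind: Service
metadata:
  name: apptest                      # Service Name
spec:
  type: NodePort                     # Service Type
  externalTrafficPolicy: Cluster     # cluster-level affinity; Local for node-level
  selector:
    app: apptest
  ports:
  - protocol: TCP                    # Protocol
    port: 8080                       # Service Port
    targetPort: 8080                 # Container Port
    # nodePort omitted: the system assigns one automatically (Node Port: Auto)
```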

Verifying a Workload

After a workload is created, you can access the workload to check whether the deployment is successful.

In the preceding configuration, the NodePort mode is selected, so the workload is accessed using {Node IP address}:{Node port}. If the access is successful, the workload has been deployed successfully.

You can obtain the access mode from the Access Mode tab on the workload details page.
