Deploying an Application in a CCE Cluster Using a Helm Chart

Updated on 2024-09-29 GMT+08:00

Helm is a package manager that streamlines the deployment, upgrade, and management of Kubernetes applications. Helm uses charts, which are a packaging format that defines Kubernetes resources, to package all components deployed by Kubernetes. This includes application code, dependencies, configuration files, and deployment instructions. By doing so, Helm enables the distribution and deployment of complex Kubernetes applications in a more efficient, consistent manner. Moreover, Helm facilitates application upgrade and rollback, simplifying application lifecycle management.
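To get a feel for the chart packaging format described above, you can scaffold a chart locally. This is a quick sketch assuming Helm is already installed; the chart name mychart is an arbitrary example.

```shell
# Scaffold an example chart skeleton ("mychart" is an arbitrary name).
helm create mychart

# A chart typically contains:
#   mychart/Chart.yaml    - chart metadata (name, version, appVersion)
#   mychart/values.yaml   - default configuration values
#   mychart/templates/    - Kubernetes manifest templates rendered with the values
#   mychart/charts/       - dependency charts, if any
ls mychart
```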

This section describes how to deploy a WordPress workload using Helm.

Procedure

Step

Description

Preparations

Register a Huawei account and top up the account.

Step 1: Enable CCE for the First Time and Perform Authorization

Obtain the required permissions for your account when you use the CCE service in the current region for the first time.

Step 2: Create a Cluster

Create a CCE cluster to provide Kubernetes services.

Step 3: Create a Node Pool and Nodes in the Cluster

Create a node in the cluster to run your containerized applications.

Step 4: Access the Cluster Using Kubectl

Before using Helm charts, access the cluster on a VM using kubectl.

Step 5: Install Helm

Install Helm on the VM with kubectl installed.

Step 6: Deploy the Template

Create a WordPress workload in the cluster using the Helm installation command and create a Service for the workload for Internet access.

Step 7: Access WordPress

Access the WordPress website from the Internet to start your blog.

Follow-up Operations: Releasing Resources

To avoid additional charges, delete the cluster resources promptly if you no longer require them after practice.

Preparations

Step 1: Enable CCE for the First Time and Perform Authorization

CCE works closely with multiple cloud services to support computing, storage, networking, and monitoring functions. When you log in to the CCE console for the first time, CCE automatically requests permissions to access those cloud services in the region where you run your applications. If you have been authorized in the current region, skip this step.

  1. Log in to the CCE console using your HUAWEI ID.
  2. Click the region name in the upper left corner of the page and select a region.
  3. When you log in to the CCE console in a region for the first time, wait for the Authorization Statement dialog box to appear, carefully read the statement, and click OK.

    After you agree to delegate the permissions, CCE creates an agency named cce_admin_trust in IAM to perform operations on other cloud resources and grants it the Tenant Administrator permissions. Tenant Administrator has the permissions on all cloud services except IAM. The permissions are used to call the cloud services on which CCE depends. The delegation takes effect only in the current region. You can go to the IAM console, choose Agencies, and click cce_admin_trust to view the delegation records of each region. For details, see Account Delegation.

    NOTE:

    CCE may fail to run as expected if the Tenant Administrator permissions are not assigned. Therefore, do not delete or modify the cce_admin_trust agency when using CCE.

Step 2: Create a Cluster

  1. Log in to the CCE console.

    • If you have no clusters, click Buy Cluster on the wizard page.
    • If you have CCE clusters, choose Clusters in the navigation pane and click Buy Cluster in the upper right corner.

  2. Configure basic cluster parameters.

    Only mandatory parameters are described in this example. You can keep the default values for most other parameters. For details about the parameter configurations, see Buying a CCE Standard/Turbo Cluster.

    Parameter

    Example

    Description

    Type

    CCE Standard Cluster

    CCE allows you to create various types of clusters for diverse needs. It provides highly reliable, secure, business-class container services.

    You can select CCE Standard Cluster or CCE Turbo Cluster as required.

    • CCE standard clusters provide highly reliable, secure, business-class containers.
    • CCE Turbo clusters use high-performance cloud native networks and provide cloud native hybrid scheduling. Such clusters have improved resource utilization and can be used in more scenarios.

    For details about cluster types, see Comparison Between Cluster Types.

    Billing Mode

    Pay-per-use

    Select a billing mode for the cluster.

    • Yearly/Monthly: a prepaid billing mode. Resources will be billed based on the service duration. This cost-effective mode is ideal when the duration of resource usage is predictable.

      If you choose this billing mode, you will need to set the desired duration and decide whether to enable automatic subscription renewal. Monthly subscriptions renew automatically every month, while yearly subscriptions renew automatically every year.

    • Pay-per-use: a postpaid billing mode. It is suitable for scenarios where resources will be billed based on usage frequency and duration. You can provision or delete resources at any time.

    For details, see Billing Modes.

    Cluster Name

    cce-test

    Name of the cluster to be created

    Enterprise Project

    default

    Enterprise projects facilitate project-level management and grouping of cloud resources and users. For more details, see Enterprise Management.

    This parameter is displayed only for enterprise users who have enabled Enterprise Project Management.

    Cluster Version

    The recommended version, for example, v1.29

    CCE offers multiple Kubernetes versions. Select the latest commercial release for improved stability, reliability, and access to new features.

    Cluster Scale

    Nodes: 50

    Configure the parameter as required. This parameter controls the maximum number of worker nodes that the cluster can manage. After the cluster is created, it can only be scaled out.

    Master Nodes

    3 Masters

    Select the number of master nodes. The master nodes are automatically hosted by CCE and deployed with Kubernetes cluster management components such as kube-apiserver, kube-controller-manager, and kube-scheduler.

    • 3 Masters: Three master nodes will be created for high cluster availability.
    • Single: Only one master node will be created in your cluster.

    This parameter cannot be changed after the cluster is created.

  3. Configure network parameters.

    Parameter

    Example

    Description

    VPC

    vpc-cce

    Select a VPC for the cluster.

    If no VPC is available, click Create VPC to create one. After the VPC is created, click the refresh icon. For details about how to create a VPC, see Creating a VPC and Subnet.

    Node Subnet

    subnet-cce

    Select a subnet. Nodes in the cluster are assigned with the IP addresses in the subnet.

    Network Model

    VPC network

    Select VPC network or Tunnel network. By default, the VPC network model is selected.

    For details about the differences between different container network models, see Container Network.

    Container CIDR Block

    10.0.0.0/16

    Configure the CIDR block used by containers. It controls how many pods can run in the cluster.

    Service CIDR Block

    10.247.0.0/16

    Configure the ClusterIP CIDR block for the cluster. It controls how many Services can be created in the cluster and cannot be changed after configuration.

  4. Click Next: Select Add-on. On the page displayed, select the add-ons to be installed during cluster creation.

    This example only includes the mandatory add-ons that are automatically installed.

  5. Click Next: Add-on Configuration. There is no need to set up the add-ons that are installed by default.
  6. Click Next: Confirm configuration, confirm the resources on the page displayed, and click Submit.

    Wait until the cluster is created. It takes about 5 to 10 minutes to create a cluster.

    The created cluster will be displayed on the Clusters page, and there are zero nodes in it.

    Figure 1 Cluster created

Step 3: Create a Node Pool and Nodes in the Cluster

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Nodes. On the Node Pools tab, click Create Node Pool in the upper right corner.
  3. Configure the node pool parameters.

    Only mandatory parameters are described in this example. You can keep the default values for most other parameters. For details about the configuration parameters, see Creating a Node Pool.

    Parameter

    Example

    Description

    Node Type

    Elastic Cloud Server (VM)

    Select a node type based on service requirements. Then, the available node flavors will be automatically displayed in the Specifications area for you to select.

    Specifications

    4 vCPUs | 8 GiB

    Select a node flavor that best fits your service needs.

    For optimal performance of the cluster components, you are advised to set up the node with a minimum of 4 vCPUs and 8 GiB of memory.

    Container Engine

    containerd

    Select a container engine based on service requirements. For details about the differences between container engines, see Container Engines.

    OS

    Huawei Cloud EulerOS 2.0

    Select an OS for the node.

    Login Mode

    A custom password

    • Password: Enter a password for logging in to the node and confirm the password. The default username is root.

      Keep the password secure. If you forget the password, the system is unable to retrieve it.

    • Key Pair: Select a key pair for logging in to the node and select the check box to acknowledge that you have obtained the key file and that you cannot log in to the node without it.

      A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create Key Pair to create one. For details, see Creating a Key Pair on the Management Console.

  4. Configure parameters in Storage Settings and Network Settings. In this example, you can keep the default values for the parameters. You only need to select I have confirmed that the security group rules have been correctly configured for nodes to communicate with each other, and then click Next: Confirm.

  5. Check the node specifications, read the instructions on the page, and click Submit.
  6. Locate the row containing the target node pool and click Scaling. There are zero nodes in the created node pool by default.

  7. Set the number of nodes to be added to 2, which means two more nodes will be created in the node pool.

  8. Wait until the nodes are created. It takes about 5 to 10 minutes to complete the node creation.

Step 4: Access the Cluster Using Kubectl

NOTICE:

Before proceeding, create an ECS bound with an EIP in the same VPC as the cluster.

  1. Install kubectl on the ECS.

    You can check whether kubectl has been installed by running kubectl version. If kubectl has been installed, you can skip this step.

    The Linux environment is used as an example to describe how to install and configure kubectl. For more installation methods, see kubectl.

    1. Download kubectl.
      cd /home
      curl -LO https://dl.k8s.io/release/{v1.29.0}/bin/linux/amd64/kubectl

      {v1.29.0} specifies the version. You can replace it as required.

    2. Install kubectl.
      chmod +x kubectl
      mv -f kubectl /usr/local/bin

  2. Configure a credential for kubectl to access the Kubernetes cluster.

    1. Log in to the CCE console and click the cluster name to access the cluster console. Choose Overview in the navigation pane.
    2. On the cluster overview page, locate the Connection Info area. Click Configure next to kubectl and view the kubectl connection information.
    3. In the window that slides out from the right, locate the Download the kubeconfig file area, select Intranet access for Current data, and download the corresponding configuration file.
    4. Log in to the VM where the kubectl client has been installed and copy the configuration file (for example, kubeconfig.yaml) downloaded in the previous step to the /home directory.
    5. Save the kubectl authentication file to the configuration file in the $HOME/.kube directory.
      cd /home
      mkdir -p $HOME/.kube
      mv -f kubeconfig.yaml $HOME/.kube/config
    6. Run a kubectl command to check whether the cluster can be accessed.

      For example, to view the cluster information, run the following command:

      kubectl cluster-info

      Information similar to the following is displayed:

      Kubernetes master is running at https://*.*.*.*:5443
      CoreDNS is running at https://*.*.*.*:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
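With the kubeconfig in place, you can also confirm that the worker nodes created in Step 3 are registered and ready. This is an optional sanity check; node names and ages will differ in your cluster.

```shell
# List the worker nodes; each should report STATUS "Ready".
kubectl get nodes

# Optionally block until every node is Ready (gives up after 5 minutes).
kubectl wait --for=condition=Ready node --all --timeout=300s
```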

Step 5: Install Helm

This section uses Helm v3.7.0 as an example. If other versions are needed, see Helm.

  1. Download the Helm client to a VM in a cluster.

    wget https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz

  2. Decompress the Helm package.

    tar -xzvf helm-v3.7.0-linux-amd64.tar.gz

  3. Move the helm binary to the system path, for example, /usr/local/bin/helm.

    mv linux-amd64/helm /usr/local/bin/helm

  4. Check the Helm version.

    helm version
    version.BuildInfo{Version:"v3.7.0",GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b",GitTreeState:"clean",GoVersion:"go1.16.8"}

Step 6: Deploy the Template

This section uses the WordPress template as an example.

  1. Add the official WordPress repository.

    helm repo add bitnami https://charts.bitnami.com/bitnami

  2. Run the following commands to create a WordPress workload:

    helm install myblog bitnami/wordpress \
        --set mariadb.primary.persistence.enabled=true \
        --set mariadb.primary.persistence.storageClass=csi-disk \
        --set mariadb.primary.persistence.size=10Gi \
        --set persistence.enabled=false

    myblog is the custom release name. The remaining parameters serve the following functions:

    • The MariaDB database that backs WordPress stores its data on a persistent volume. The csi-disk StorageClass automatically provisions an EVS disk of 10 GiB for this purpose.
    • WordPress requires no data persistence, so you can set persistence.enabled to false for the PV.

    The command output is as follows:

    coalesce.go:223: warning: destination for mariadb.networkPolicy.egressRules.customRules is a table. Ignoring non-table value ([])
    NAME: myblog
    LAST DEPLOYED: Mon Mar 27 11:47:58 2023
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    CHART NAME: wordpress
    CHART VERSION: 15.2.57
    APP VERSION: 6.1.1
    
    ** Be patient while the chart is being deployed.**
    
    Your WordPress site can be accessed through the following DNS name from within your cluster:
    
        myblog-wordpress.default.svc.cluster.local (port 80)
    
    To access your WordPress site from outside the cluster, follow the steps below:
    
    1. Get the WordPress URL by running these commands:
    
      NOTE: It may take a few minutes for the LoadBalancer IP to be available.
            Watch the status with: 'kubectl get svc --namespace default -w myblog-wordpress'
    
       export SERVICE_IP=$(kubectl get svc --namespace default myblog-wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
       echo "WordPress URL: http://$SERVICE_IP/"
       echo "WordPress Admin URL: http://$SERVICE_IP/admin"
    
    2. Open a browser and access WordPress using the obtained URL.
    
    3. Log in with the following credentials below to see your blog:
    
      echo Username: user
      echo Password: $(kubectl get secret --namespace default myblog-wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d)
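After the installation completes, you can check the release and workload status with standard Helm and kubectl commands. This is a sketch; the release name myblog matches the install command above, and the app.kubernetes.io/instance label is the standard label Bitnami charts apply to their resources.

```shell
# Confirm the release reached the "deployed" status.
helm status myblog
helm list --namespace default

# List the WordPress and MariaDB pods; wait until they are Running.
kubectl get pods --namespace default -l app.kubernetes.io/instance=myblog
```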

Step 7: Access WordPress

  1. Modify the WordPress Service configuration.

    To use a LoadBalancer Service in CCE, you need to configure it with additional annotations. Unfortunately, bitnami/wordpress does not come with this configuration, so you will have to modify it manually.

    kubectl edit svc myblog-wordpress

    Add kubernetes.io/elb.autocreate and kubernetes.io/elb.class to metadata.annotations and save the changes. These two annotations are used to create a shared load balancer, which allows access to the WordPress workload via the EIP of the load balancer.

    apiVersion: v1
    kind: Service
    metadata:
      name: myblog-wordpress
      namespace: default
      annotations:
        kubernetes.io/elb.autocreate: '{ "type": "public", "bandwidth_name": "myblog-wordpress", "bandwidth_chargemode": "bandwidth", "bandwidth_size": 5, "bandwidth_sharetype": "PER", "eip_type": "5_bgp" }'
        kubernetes.io/elb.class: union
    spec:
      ports:
        - name: http
    ...
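If you prefer a non-interactive alternative to kubectl edit, the same two annotations can be applied with kubectl annotate. This is a sketch: the JSON value mirrors the manifest above, and bandwidth_name and bandwidth_size are examples you can adjust.

```shell
# The value of kubernetes.io/elb.autocreate must be a valid JSON string.
ELB_AUTOCREATE='{ "type": "public", "bandwidth_name": "myblog-wordpress", "bandwidth_chargemode": "bandwidth", "bandwidth_size": 5, "bandwidth_sharetype": "PER", "eip_type": "5_bgp" }'

# Apply both annotations to the Service in one command.
kubectl annotate svc myblog-wordpress \
  kubernetes.io/elb.class=union \
  kubernetes.io/elb.autocreate="${ELB_AUTOCREATE}"
```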

  2. Check the Service.

    kubectl get svc

    If information similar to the following is displayed, the workload's access mode has been configured. You can use the LoadBalancer Service to access the WordPress workload from the Internet. **.**.**.** specifies the EIP of the load balancer, and 80 indicates the access port.

    NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    kubernetes         ClusterIP      10.247.0.1       <none>        443/TCP          3d
    myblog-mariadb     ClusterIP      10.247.202.20    <none>        3306/TCP         8m
    myblog-wordpress   LoadBalancer   10.247.130.196   **.**.**.**   80:31540/TCP   8m

  3. Access WordPress.

    • To access the WordPress web page: In the address box of a browser, enter <EIP of the load balancer>:80.

    • To access the WordPress management console:
      1. Run the following command to obtain the password of user:
        kubectl get secret --namespace default myblog-wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d
      2. In the address box of a browser, enter <EIP of the load balancer>:80/login to access the WordPress backend. The user name is user, and the password is the character string obtained in the previous step.

Follow-up Operations: Releasing Resources

To avoid additional charges, make sure to release resources promptly if you no longer require the cluster. For details, see Deleting a Cluster.
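If you only want to remove the WordPress release while keeping the cluster, you can uninstall it with Helm first. Note that, by design, Helm does not delete PersistentVolumeClaims created for the MariaDB StatefulSet, so delete them explicitly to release the underlying EVS disk; this sketch assumes the release name myblog from the steps above.

```shell
# Remove the WordPress release and the resources it created.
helm uninstall myblog

# Delete the PVC(s) left behind by the chart to release the EVS disk.
kubectl delete pvc -l app.kubernetes.io/instance=myblog --namespace default
```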
