Deploying the Jenkins Master in the Cluster

Updated on 2025-01-03 GMT+08:00

Deploy the Jenkins master as a Deployment in the CCE Autopilot cluster to manage jobs.

NOTE:

The Jenkins version used in this example is 2.440.2. The strings on the Jenkins page may vary depending on the version. The screenshots are for reference only.

Preparations

Procedure

  1. Log in to the ECS. For details, see Logging In to a Linux ECS Using CloudShell.
  2. Create a PV and PVC of the SFS Turbo type for the Jenkins master to store persistent data.

    1. Create a YAML file named pv-jenkins-master.yaml for creating a PV. You can change the file name as needed.
      NOTE:

      A Linux file name is case sensitive and can contain letters, digits, underscores (_), and hyphens (-), but cannot contain slashes (/) or null characters (\0). To improve compatibility, do not use special characters, such as spaces, question marks (?), and asterisks (*).

      vim pv-jenkins-master.yaml

      The file content is as follows. In this example, only mandatory parameters are involved. For more parameters, see Using an Existing SFS Turbo File System Through a Static PV.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: everest-csi-provisioner  # Storage driver. The value is fixed to everest-csi-provisioner.
        name: pv-jenkins-master    # PV name. You can change the name.
      spec:
        accessModes:
        - ReadWriteMany     # Access mode. The value must be ReadWriteMany for SFS Turbo.
        capacity:
          storage: 500Gi     # Requested PV capacity.
        csi:
          driver: sfsturbo.csi.everest.io    # Storage driver that the mounting depends on. The value is fixed to sfsturbo.csi.everest.io.
          fsType: nfs        # Storage type. The value is fixed to nfs.
          volumeHandle: ea8a59b6-485c-xxx    # SFS Turbo volume ID
          volumeAttributes:
            everest.io/share-export-location: ea8a59b6-485c-xxx.sfsturbo.internal:/     # Shared path of the SFS Turbo volume
        persistentVolumeReclaimPolicy: Retain    # Reclaim policy.
        storageClassName: csi-sfsturbo           # StorageClass name of the SFS Turbo volume.

      Press Esc to exit editing mode and enter :wq to save the file.

      Table 1 Descriptions of key parameters

      • name (example value: pv-jenkins-master)
        Indicates the PV name. You can use any name.
        The name can contain 1 to 64 characters and cannot start or end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

      • accessModes (example value: ReadWriteMany)
        Indicates the access mode. For SFS Turbo, the value is fixed to ReadWriteMany.

      • storage (example value: 500Gi)
        Indicates the requested PV capacity, in Gi.

      • volumeHandle (example value: ea8a59b6-485c-xxx)
        Specifies the ID of the SFS Turbo volume.
        How to obtain: On the CCE console, click the icon in the upper left corner and choose Storage > Scalable File Service. In the navigation pane, choose SFS Turbo > File Systems. In the list, click the name of the target SFS Turbo file system. On the details page, copy the content following ID.

      • everest.io/share-export-location (example value: ea8a59b6-485c-xxx.sfsturbo.internal:/)
        Specifies the shared path of the SFS Turbo volume. Multiple pods can access the path over the network to share the same storage resource.
        How to obtain: On the CCE console, click the icon in the upper left corner and choose Storage > Scalable File Service. In the navigation pane, choose SFS Turbo > File Systems. In the list, click the name of the target SFS Turbo file system. On the details page, copy the content following Shared Path.

      • persistentVolumeReclaimPolicy (example value: Retain)
        Indicates the PV reclaim policy. Only the Retain policy is supported.
        Retain: When a PVC is deleted, the PV and the underlying storage resources are not deleted. You must delete them manually. After the PVC is deleted, the PV is in the Released state and cannot be bound to a PVC again.

      • storageClassName (example value: csi-sfsturbo)
        Specifies the StorageClass name of the SFS Turbo volume.
        In this example, the built-in StorageClass is used and its name is fixed to csi-sfsturbo.
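
      Optionally, before running the command in the next step, you can check the manifest for syntax errors on the client side. This check is generic kubectl usage rather than part of the required procedure; --dry-run=client only validates the file locally and does not create any resource.

      kubectl create -f pv-jenkins-master.yaml --dry-run=client

      If the manifest is valid, information similar to the following is displayed:

      persistentvolume/pv-jenkins-master created (dry run)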

    2. Run the following command to create a PV:
      kubectl create -f pv-jenkins-master.yaml

      If the following information is displayed, the PV named pv-jenkins-master has been created:

      persistentvolume/pv-jenkins-master created
    3. Create a YAML file named pvc-jenkins-master.yaml for creating a PVC. You can change the file name as needed.
      vim pvc-jenkins-master.yaml

      The file content is as follows. In this example, only mandatory parameters are involved. For more parameters, see Using an Existing SFS Turbo File System Through a Static PV.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-jenkins-master    # PVC name. You can change the name.
        namespace: default    # Namespace. This is also the namespace of the workload.
        annotations:
          volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner  # Storage driver. The value is fixed to everest-csi-provisioner.
      spec:
        accessModes:
        - ReadWriteMany                 # Access mode. The value must be ReadWriteMany for SFS Turbo.
        resources:
          requests:
            storage: 500Gi                  # Requested capacity of the PVC, which must be the same as the PV capacity.
        storageClassName: csi-sfsturbo       # StorageClass name of the SFS Turbo file system, which must be the same as that of the PV.
        volumeName: pv-jenkins-master       # Name of the associated PV.

      Press Esc to exit editing mode and enter :wq to save the file.

      Table 2 Descriptions of key parameters

      • name (example value: pvc-jenkins-master)
        Indicates the PVC name. You can use any name.
        The name can contain 1 to 64 characters and cannot start or end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

      • namespace (example value: default)
        Indicates the namespace, which must be the same as the namespace of the workload.

      • accessModes (example value: ReadWriteMany)
        Indicates the access mode. For SFS Turbo, the value is fixed to ReadWriteMany.

      • storage (example value: 500Gi)
        Indicates the requested PVC capacity, in Gi. The value must be the same as the PV capacity requested in 2.a.

      • storageClassName (example value: csi-sfsturbo)
        Indicates the StorageClass name. The value must be the same as the StorageClass of the PV in 2.a.

      • volumeName (example value: pv-jenkins-master)
        Specifies the name of the associated PV. The value must be the same as the PV name in 2.a.

    4. Run the following command to create a PVC:
      kubectl create -f pvc-jenkins-master.yaml

      If the following information is displayed, the PVC named pvc-jenkins-master has been created:

      persistentvolumeclaim/pvc-jenkins-master created
    5. Verify that the PV has been bound to the PVC. After the PV and PVC are created, they are automatically bound, and the PVC can be mounted to the pod only after the binding is successful.
      Run the following command to check the PV status:
      kubectl get pv

      If the value of STATUS is Bound, the PV is bound.

      NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
      pv-jenkins-master   500Gi      RWX            Retain           Bound    default/pvc-jenkins-master   csi-sfsturbo            88s
      Run the following command to check the PVC status:
      kubectl get pvc

      If the value of STATUS is Bound, the PVC is bound.

      NAME                 STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      pvc-jenkins-master   Bound    pv-jenkins-master   500Gi      RWX            csi-sfsturbo   61s

      When both the PV and PVC are in the Bound state, the PV has been bound to the PVC.
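
      If either resource stays in the Pending state instead, you can check the binding events. This troubleshooting step uses generic kubectl commands and is not part of the required procedure:

      kubectl describe pvc pvc-jenkins-master -n default

      The Events section at the end of the output usually indicates why the binding failed, for example, a storageClassName or capacity that does not match the PV.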

  3. Use the jenkins/jenkins:lts image to create a Deployment named jenkins-master and mount the PVC created in 2.d.

    NOTE:

    In this example, the jenkins/jenkins:lts image (Docker image of the Jenkins LTS version) is used. The LTS version is a long-term release provided by Jenkins. It is relatively stable and will receive security updates and bug fixes for a longer time. It is suitable for production systems that require a stable environment. For more information, see LTS Release Line.

    In this example, the Jenkins master is deployed as a Deployment. The Jenkins master is mainly used to manage and schedule jobs and does not depend on persistent data. Deploying the Jenkins master as a Deployment can improve system flexibility and scalability.

    You can select different images and workload types as required.
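
    If you prefer to pin Jenkins to a fixed version instead of tracking the latest LTS release, you can reference a specific tag in the image field of the Deployment created in the next step. The tag below is only an illustration; verify the available tags of the jenkins/jenkins repository on Docker Hub before using one.

      image: jenkins/jenkins:2.440.2-lts-jdk17   # Hypothetical example tag. Verify that it exists on Docker Hub.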

    1. Create a YAML file named jenkins-master.yaml for creating the jenkins-master workload. You can change the file name as needed.
      vim jenkins-master.yaml

      The file content is as follows. In this example, only mandatory parameters are involved. For details about more parameters, see Creating a Deployment.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: jenkins-master    # Name of the Deployment.
        namespace: default     # Namespace, which must be the same as the namespace of the PVC.
      spec:
        replicas: 1       # Number of pods running the Deployment.
        selector:
          matchLabels:    # Label selector used by the Deployment to identify the pods it manages.
            app: jenkins-master
        template:
          metadata:
            labels:       # Pod label, which must match the matchLabels of the Deployment so that its pods can be managed in a unified manner.
              app: jenkins-master
          spec:
            containers:
              - name: container-1
                image: jenkins/jenkins:lts   # The jenkins/jenkins:lts image is used.
                resources:      # Used to configure the resource limit and request of the container
                  limits:        # Maximum number of resources that can be used by the container
                    cpu: '4'
                    memory: 4Gi
                  requests:      # Resources required for starting the container
                    cpu: '4'
                    memory: 4Gi
                volumeMounts:    # Volume mounted to the container
                  - name: pvc-jenkins-master
                    mountPath: /var/jenkins_home   # Mount path. Generally, the value is /var/jenkins_home.
            volumes:    # Storage volume used by the pod, which corresponds to the created PVC.
              - name: pvc-jenkins-master    # Volume name. You can change the name.
                persistentVolumeClaim:
                  claimName: pvc-jenkins-master    # The PVC to be used
            imagePullSecrets:    
              - name: default-secret

      Press Esc to exit editing mode and enter :wq to save the file.

    2. Run the following command to create a Deployment named jenkins-master:
      kubectl create -f jenkins-master.yaml

      Information similar to the following will be displayed:

      deployment/jenkins-master created
    3. To ensure that the Deployment is created, check whether the pod created for the workload is in the Running state.
      kubectl get pod

      If STATUS of the pod whose name is jenkins-master-xxx is Running, the Deployment has been created.

      NAME                              READY   STATUS    RESTARTS   AGE
      jenkins-master-6f65c7b8f7-255gn   1/1     Running   0          72s
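
      Optionally, you can also check the container logs to confirm that Jenkins has finished starting. This is standard kubectl usage, and the exact log content may vary with the Jenkins version:

      kubectl logs deploy/jenkins-master

      When initialization is complete, the log typically contains a line similar to "Jenkins is fully up and running". On the first startup, the log also prints the initial administrator password, which is the same value stored in /var/jenkins_home/secrets/initialAdminPassword.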

  4. Create Services for accessing the Jenkins master.

    The Jenkins container image exposes two ports: 8080 and 50000, which need to be configured separately. Port 8080 is used for web login, and port 50000 is used for the connection between the Jenkins master and the Jenkins agent. In this example, two Services are created. For details, see Table 3.
    NOTE:

    In this example, the Jenkins agent created in the subsequent steps is in the same cluster as the Jenkins master. Therefore, the Jenkins agent uses the ClusterIP Service to connect to the Jenkins master.

    Port 8080 must also be opened for the Jenkins agent so that it can communicate with the Jenkins web endpoint. In this example, both ports 8080 and 50000 are therefore opened on the ClusterIP Service.

    If the Jenkins agent needs to connect to the Jenkins master across clusters or over the public network, select an appropriate Service type.

    Table 3 Services to be created

    • LoadBalancer
      Function: allows access to the Jenkins web page from the public network.
      Basic parameters:
      • Service name: jenkins-web (You can change the name if needed.)
      • Container port: 8080
      • Access port: 8080

    • ClusterIP
      Function: used by the Jenkins agent to connect to the Jenkins master.
      Basic parameters:
      • Service name: jenkins-agent (You can change the name if needed.)
      • Container port 1: 8080
      • Access port 1: 8080
      • Container port 2: 50000
      • Access port 2: 50000
    1. Create a YAML file named jenkins-web.yaml for creating a LoadBalancer Service. You can change the file name as needed.
      This example describes how to create a Service using an automatically created load balancer. If you want to use an existing load balancer, see Using kubectl to Create a Service (Using an Existing Load Balancer).
      vim jenkins-web.yaml

      The file content is as follows. In this example, only mandatory parameters are involved. For more parameters, see Using kubectl to Automatically Create a Load Balancer.

      apiVersion: v1
      kind: Service
      metadata:
        name: jenkins-web    # Service name. You can change the name as needed.
        namespace: default   # Namespace of the Service.
        labels:
          app: jenkins-web   # Label of the Service.
        annotations:     # Parameters for automatically creating a load balancer
          kubernetes.io/elb.class: performance    # Load balancer type. Only dedicated load balancers are supported.
          kubernetes.io/elb.autocreate: '{
            "type": "public",
            "bandwidth_name": "cce-bandwidth-xxx",
            "bandwidth_chargemode": "traffic",
            "bandwidth_size": 5,
            "bandwidth_sharetype": "PER",
            "eip_type": "5_bgp",
            "available_zone": ["cn-east-3a"
            ],
            "l4_flavor_name": "L4_flavor.elb.s1.small"
          }'
      spec:
        selector:    # Used to select the matched pod.
          app: jenkins-master
        ports:       # Service port information.
        - name: cce-service-0
          targetPort: 8080   # Port used by the Service to access the target pod. This port is closely related to the application running in the pod.
          port: 8080 # Port for accessing the Service. It is also the listening port of the load balancer.
          protocol: TCP
        type: LoadBalancer   # Service type. In this example, this is a LoadBalancer Service.

      Press Esc to exit editing mode and enter :wq to save the file.

      Table 4 Key parameters in the kubernetes.io/elb.autocreate field

      • type (example value: public)
        Indicates the network type of the load balancer.
        • public: indicates a public network load balancer, which has an EIP bound and can be accessed from both public and private networks.
        • inner: indicates a private network load balancer, which does not need an EIP and can be accessed only over a private network.
        The Service is used to provide external web access, so set this parameter to public.

      • bandwidth_name (example value: cce-bandwidth-xxx)
        Specifies the bandwidth name. The default value is cce-bandwidth-xxx, where xxx can be changed as needed.
        The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed.

      • bandwidth_chargemode (example value: traffic)
        Indicates the bandwidth billing option.
        • bandwidth: You are billed by a fixed bandwidth.
        • traffic: You are billed based on the traffic you actually use.

      • bandwidth_size (example value: 5)
        Indicates the bandwidth size. The value ranges from 1 Mbit/s to 2,000 Mbit/s. Configure this parameter based on the bandwidth allowed in your region.
        The minimum increment for modifying the bandwidth varies depending on the allowed bandwidth. You can only select an integer multiple of the minimum increment.
        • The minimum increment is 1 Mbit/s if the allowed bandwidth does not exceed 300 Mbit/s.
        • The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1,000 Mbit/s.
        • The minimum increment is 500 Mbit/s if the allowed bandwidth exceeds 1,000 Mbit/s.

      • bandwidth_sharetype (example value: PER)
        Specifies the bandwidth type. The only supported value is PER, which indicates a dedicated bandwidth.

      • eip_type (example value: 5_bgp)
        Specifies the EIP type.
        • 5_bgp: Dynamic BGP
        • 5_sbgp: Static BGP

      • available_zone (example value: cn-east-3a)
        Specifies the AZs where the load balancer is located. This parameter is available only for dedicated load balancers.
        You can obtain all supported AZs by getting the AZ list.

      • l4_flavor_name (example value: L4_flavor.elb.s1.small)
        Specifies the flavor name of the Layer 4 load balancer. This parameter is available only for dedicated load balancers.
        You can obtain all supported flavors by getting the flavor list.

    2. Run the following command to create a LoadBalancer Service to provide external web access:
      kubectl create -f jenkins-web.yaml

      Information similar to the following will be displayed:

      service/jenkins-web created
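
      Creating the load balancer and binding the EIP can take a few minutes. Optionally, you can watch the Service until the EXTERNAL-IP column is populated (standard kubectl usage; press Ctrl+C to stop watching):

      kubectl get svc jenkins-web -w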
    3. Create a YAML file named jenkins-agent.yaml for creating a ClusterIP Service. You can change the file name as needed.
      vim jenkins-agent.yaml

      The file content is as follows. In this example, only mandatory parameters are involved. For details about more parameters, see ClusterIP.

      apiVersion: v1
      kind: Service
      metadata:
        name: jenkins-agent      # Service name. You can change the name as needed.
        namespace: default      # Namespace of the Service.
        labels:
          app: jenkins-agent
      spec:
        ports:    # Service port information.
        - name: service0         # Port 1: ensures that the Jenkins agent uses the same port (8080) as the external web access address.
          port: 8080              # Port for accessing a Service.
          protocol: TCP           # Protocol used for accessing a Service. The value can be TCP or UDP.
          targetPort: 8080       # Port used by the Service to access the target container. This port is closely related to the application running in a container.
        - name: service1          # Port 2: used for the connection between the Jenkins master and the Jenkins agent.
          port: 50000             
          protocol: TCP           
          targetPort: 50000       
        selector:                 # Label selector. A Service selects a pod based on the label and forwards the requests for accessing the Service to the pod.
          app: jenkins-master
        type: ClusterIP           # Type of a Service. ClusterIP indicates that a Service is only reachable from within the cluster.

      Press Esc to exit editing mode and enter :wq to save the file.

    4. Run the following command to create a ClusterIP Service for the Jenkins agent to connect to the Jenkins master:
      kubectl create -f jenkins-agent.yaml

      Information similar to the following will be displayed:

      service/jenkins-agent created
    5. Check whether the Services are successfully created.
      kubectl get svc

      Information similar to the following is displayed. You can then log in to Jenkins at {EIP of the public network load balancer}:8080.

      NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                   PORT(S)              AGE
      jenkins-agent   ClusterIP      10.247.22.139   <none>                        8080/TCP,50000/TCP   34s
      jenkins-web     LoadBalancer   10.247.76.78    xx.xx.xx.xx,192.168.0.239     8080:31694/TCP       15m
      kubernetes      ClusterIP      10.247.0.1      <none>                        443/TCP              3h3m
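
      Optionally, you can also confirm that the jenkins-agent Service has selected the jenkins-master pod by checking its endpoints. This is standard kubectl usage and not a required step:

      kubectl get endpoints jenkins-agent

      The ENDPOINTS column should list the IP address of the jenkins-master pod with ports 8080 and 50000. If the column is empty, check that the selector of the Service matches the pod label app: jenkins-master.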

  5. Log in to and initialize Jenkins.

    1. In the address bar of the browser, enter {EIP of the public network load balancer}:8080 to open the Jenkins configuration page. (If the page does not open, see the connectivity check after Figure 1.)
    2. Obtain the initial administrator password from the Jenkins pod upon the first login.
      1. Return to the ECS and run the following command to query the pod name:
        kubectl get pod|grep jenkins-master

        The following information is displayed. jenkins-master-6f65c7b8f7-255gn indicates the pod name.

        jenkins-master-6f65c7b8f7-255gn   1/1     Running   0          144m
      2. Run the following command to enter the pod (jenkins-master-6f65c7b8f7-255gn):
        kubectl exec -it jenkins-master-6f65c7b8f7-255gn -- /bin/sh
      3. Run the following command to obtain the initial administrator password:
        cat /var/jenkins_home/secrets/initialAdminPassword
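
        Alternatively, you can read the password in a single command without opening a shell in the pod. This is an equivalent variant of the two commands above, using the pod name queried in the previous step:

        kubectl exec jenkins-master-6f65c7b8f7-255gn -- cat /var/jenkins_home/secrets/initialAdminPassword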
    3. Install the recommended plugins and create an administrator account as prompted upon the first login. After the initial configuration is complete, the Jenkins web page is displayed.
      Figure 1 Jenkins web page
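
    If the configuration page does not open, you can first check from the ECS whether the public address responds. This check is optional and assumes that curl is available on the ECS. Replace <EIP> with the public IP address shown in the EXTERNAL-IP column of the jenkins-web Service in 4.e:

      curl -I http://<EIP>:8080/login

    If the load balancer and the Service are working, an HTTP response header from Jenkins is returned. Otherwise, check the Service configuration and the status of the load balancer.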
