Using Snt9B for Distributed Training in a Lite Cluster Resource Pool

Updated on 2024-12-31 GMT+08:00

Description

This case walks you through distributed training on Snt9B. By default, Lite Cluster resource pools come with the Volcano scheduler, which delivers training jobs to the cluster as Volcano jobs. The training test cases use the BERT NLP model.

Figure 1 Delivering training jobs

Procedure

  1. Pull the image. The test image is bert_pretrain_mindspore:v1, which contains the test data and code.

    docker pull swr.cn-southwest-2.myhuaweicloud.com/os-public-repo/bert_pretrain_mindspore:v1
    docker tag swr.cn-southwest-2.myhuaweicloud.com/os-public-repo/bert_pretrain_mindspore:v1 bert_pretrain_mindspore:v1

  2. Create the config.yaml file on the host.

    Configure Pods using this file. For debugging, start a Pod with the sleep command. Alternatively, replace the command with the boot command for your job (for example, python train.py). The job will run once the container starts.

    The file content is as follows:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap1980-yourvcjobname     # The prefix configmap1980- is required, followed by the vcjob name.
      namespace: default                    # Optional. Must be in the same namespace as the vcjob.
      labels:
        ring-controller.cce: ascend-1980   # Retain the default settings.
    data:            # Do not modify. The Volcano plug-in updates this content automatically after initialization.
      jobstart_hccl.json: |
        {
            "status":"initializing"
        }
    ---
    apiVersion: batch.volcano.sh/v1alpha1   # The value cannot be changed. The volcano API must be used.
    kind: Job                               # Only the job type is supported at present.
    metadata:
      name: yourvcjobname                  # Job name, which must be the same as that in configmap.
      namespace: default                   # The value must be the same as that of the ConfigMap.
      labels:
        ring-controller.cce: ascend-1980        # Retain the default settings.
        fault-scheduling: "force"
    spec:
      minAvailable: 1                       # The value of minAvailable is 1 in a single-node scenario and N in an N-node distributed scenario.
      schedulerName: volcano                # Retain the default settings. Use the Volcano scheduler to schedule jobs.
      policies:
        - event: PodEvicted
          action: RestartJob
      plugins:
        configmap1980:
        - --rank-table-version=v2           # Retain the default settings. The ranktable file of the v2 version is generated.
        env: []
        svc:
        - --publish-not-ready-addresses=true
      maxRetry: 3
      queue: default
      tasks:
      - name: "yourvcjobname-1"
        replicas: 1                              # The value of replicas is 1 in a single-node scenario and N in an N-node scenario. The number of NPUs in the requests field is 8 in an N-node scenario.
        template:
          metadata:
            labels:
              app: mindspore
              ring-controller.cce: ascend-1980       # Retain the default value. The value must be the same as the label in the ConfigMap and cannot be changed.
          spec:
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: volcano.sh/job-name
                          operator: In
                          values:
                            - yourvcjobname
                    topologyKey: kubernetes.io/hostname
            containers:
            - image: bert_pretrain_mindspore:v1        # Training framework image path, which can be modified.
              imagePullPolicy: IfNotPresent
              name: mindspore
              env:
              - name: name                               # The value must be the same as that of Jobname.
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: ip                                       # IP address of the physical node, which is used to identify the node where the pod is running
                valueFrom:
                  fieldRef:
                    fieldPath: status.hostIP
              - name: framework
                value: "MindSpore"
              command:
              - "sleep"
              - "1000000000000000000"
              resources:
                requests:
                  huawei.com/ascend-1980: "1"      # Number of required NPUs. The maximum value is 16. You can add lines below to configure resources such as memory and CPU. The key remains unchanged.
                limits:
                  huawei.com/ascend-1980: "1"        # Limits the number of cards. The key remains unchanged. The value must be consistent with that in requests.
              volumeMounts:
              - name: ascend-driver               # Mount driver. Retain the settings.
                mountPath: /usr/local/Ascend/driver
              - name: ascend-add-ons           # Mount the driver add-ons. Retain the settings.
                mountPath: /usr/local/Ascend/add-ons
              - name: localtime
                mountPath: /etc/localtime
              - name: hccn                             #  HCCN configuration of the driver. Retain the settings.
                mountPath: /etc/hccn.conf
              - name: npu-smi                             # Mount the npu-smi tool. Retain the settings.
                mountPath: /usr/local/sbin/npu-smi
            nodeSelector:
              accelerator/huawei-npu: ascend-1980
            volumes:
            - name: ascend-driver
              hostPath:
                path: /usr/local/Ascend/driver
            - name: ascend-add-ons
              hostPath:
                path: /usr/local/Ascend/add-ons
            - name: localtime
              hostPath:
                path: /etc/localtime                      # Configure the Docker time.
            - name: hccn
              hostPath:
                path: /etc/hccn.conf
            - name: npu-smi
              hostPath:
                path: /usr/local/sbin/npu-smi
            restartPolicy: OnFailure
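
    For a real job, the debugging sleep command in the manifest above can be replaced with your own boot command. A minimal sketch (train.py and its location are placeholders for your own script, not part of the test image):

    ```yaml
    # Sketch: replace the debugging sleep command with a job boot command.
    # train.py is a placeholder for your own training script.
    command:
    - "/bin/bash"
    - "-c"
    - "python train.py"
    ```

    With this command in place, the training job starts as soon as the container starts, instead of waiting for you to exec in.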

  3. Create a pod based on the config.yaml file.

    kubectl apply -f config.yaml

  4. Run the following command to check the pod status. If 1/1 Running is displayed, the pod started successfully.

    kubectl get pod -A

  5. Go to the container. Replace {pod_name} with your pod name (shown by the get pod command) and {namespace} with your namespace (default).

    kubectl exec -it {pod_name} -n {namespace} -- bash

  6. Run the following command to view the NPU information:

    npu-smi info

    Kubernetes allocates resources to pods according to the number of NPUs specified in the config.yaml file. As illustrated in the figure below, only one NPU is displayed in the container, reflecting the single NPU configuration. This confirms that the configuration is effective.

    Figure 2 Viewing NPU information

  7. Change the number of NPUs in the pod. In this example, distributed training is used. The number of required NPUs is changed to 8.

    Delete the created pod.
    kubectl delete -f config.yaml
    Change the values of requests and limits for huawei.com/ascend-1980 in the config.yaml file to 8.
    vi config.yaml
    Figure 3 Modifying the number of NPUs

    Re-create a pod.

    kubectl apply -f config.yaml
    Go to the container and view the NPU information. Replace {pod_name} with your pod name and {namespace} with your namespace (default).
    kubectl exec -it {pod_name} -n {namespace} -- bash
    npu-smi info

    As shown in the following figure, 8 NPUs are used and the pod is successfully configured.

    Figure 4 Viewing NPU information
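
    The manual vi edit in the step above can also be scripted. A minimal sketch (the scratch file below only demonstrates the substitution; in practice, run the same sed command on config.yaml itself):

    ```shell
    # Demonstrate the edit on a scratch copy of the resource line from config.yaml.
    printf '%s\n' 'huawei.com/ascend-1980: "1"' > /tmp/npu-demo.yaml
    # Change the NPU count from 1 to 8 (apply the same command to config.yaml).
    sed -i 's|huawei.com/ascend-1980: "1"|huawei.com/ascend-1980: "8"|g' /tmp/npu-demo.yaml
    cat /tmp/npu-demo.yaml
    ```
    
    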

  8. Run the following command to view the inter-NPU communication configuration file:

    cat /user/config/jobstart_hccl.json

    During multi-NPU training, the rank table file (rank_table_file) is essential for inter-NPU communication. It is generated automatically after the pod starts, at the path shown above. Generating the /user/config/jobstart_hccl.json configuration file takes some time. The service process can read the inter-NPU communication information only after the status field in /user/config/jobstart_hccl.json changes to completed. The process is shown in the figure below.

    Figure 5 Inter-NPU communication configuration file
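
    Because the ranktable takes time to generate, a job script can poll the status field before launching training. A minimal sketch, assuming the file path and status value described above:

    ```shell
    # Poll jobstart_hccl.json until Volcano marks the ranktable as completed.
    wait_for_ranktable() {
      local f="$1"
      until grep -q '"status":"completed"' "$f"; do
        sleep 5   # check again in 5 seconds
      done
    }
    # Usage: wait_for_ranktable /user/config/jobstart_hccl.json
    ```
    
    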

  9. Start a training job.

    cd /home/ma-user/modelarts/user-job-dir/code/bert/
    export MS_ENABLE_GE=1
    export MS_GE_TRAIN=1
    python scripts/ascend_distributed_launcher/get_distribute_pretrain_cmd.py --run_script_dir ./scripts/run_distributed_pretrain_ascend.sh --hyper_parameter_config_dir ./scripts/ascend_distributed_launcher/hyper_parameter_config.ini --data_dir /home/ma-user/modelarts/user-job-dir/data/cn-news-128-1f-mind/ --hccl_config /user/config/jobstart_hccl.json --cmd_file ./distributed_cmd.sh
    bash scripts/run_distributed_pretrain_ascend.sh /home/ma-user/modelarts/user-job-dir/data/cn-news-128-1f-mind/ /user/config/jobstart_hccl.json
    Figure 6 Starting a training job

    It takes some time to load a training job. After several minutes, run the following command to view the NPU information. As shown in the following figure, all eight NPUs are occupied, indicating that the training job is running.

    npu-smi info
    Figure 7 Viewing NPU information

    To stop a training task, run the commands below:

    pkill -9 python
    ps -ef
    Figure 8 Stopping the training process
    NOTE:

    Set limits and requests to proper values to restrict the number of CPUs and the memory size. A single Snt9B node is equipped with eight Snt9B cards, 192 vCPUs, and 1536 GB of memory. Plan the CPU and memory allocation properly to avoid task failures caused by insufficient CPU or memory limits.
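
    For example, the resources block in config.yaml could be extended as follows. The CPU and memory figures are illustrative assumptions only, not recommended values; size them to your workload within the node's 192 vCPUs and 1536 GB:

    ```yaml
    # Sketch: reserve CPU and memory alongside the NPUs. Values are examples only.
    resources:
      requests:
        huawei.com/ascend-1980: "8"
        cpu: "100"
        memory: 600Gi
      limits:
        huawei.com/ascend-1980: "8"
        cpu: "100"
        memory: 600Gi
    ```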
