Creating a Custom Training Image (PyTorch + CPU/GPU)

Updated on 2024-12-26 GMT+08:00

This section describes how to create a custom image and use it for training on the ModelArts platform. The AI engine used for training is PyTorch, and the resources are CPUs or GPUs.

NOTE:

This section applies only to training jobs of the new version.

Scenarios

In this example, you write a Dockerfile to create a custom image on a Linux x86_64 server running Ubuntu 18.04.

Objective: Build a container image containing the following software and use it for CPU- or GPU-based training on ModelArts.

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • pytorch-1.8.1

Procedure

Before creating a training job from a custom image, make sure you are familiar with Docker and have container development experience. The detailed procedure is as follows:

  1. Prerequisites
  2. Step 1 Creating an OBS Bucket and Folder
  3. Step 2 Preparing the Training Script and Uploading It to OBS
  4. Step 3 Preparing a Host
  5. Step 4 Creating a Custom Image
  6. Step 5 Uploading an Image to SWR
  7. Step 6 Creating a Training Job on ModelArts

Prerequisites

You have registered a Huawei ID and enabled Huawei Cloud services, and the account is not in arrears or frozen.

Step 1 Creating an OBS Bucket and Folder

Create a bucket and folders in OBS for storing the sample dataset and training code. Table 1 lists the folders to be created. Replace the bucket name and folder names in the example with actual names.

For details about how to create an OBS bucket and folder, see Creating a Bucket and Creating a Folder.

Ensure that the OBS directory you use and ModelArts are in the same region.

Table 1 Folders to create

Name                                        Description
obs://test-modelarts/pytorch/demo-code/     Stores the training script.
obs://test-modelarts/pytorch/log/           Stores training log files.
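
If you prefer the command line, the bucket and folders can also be created with obsutil, the OBS command-line tool. The following is a minimal sketch, assuming obsutil is installed and configured with your access keys and that the example bucket name test-modelarts is available:

# Create the bucket in the same region as ModelArts.
./obsutil mb obs://test-modelarts

# Create the folders for the training code and the logs.
./obsutil mkdir obs://test-modelarts/pytorch/demo-code
./obsutil mkdir obs://test-modelarts/pytorch/log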

Step 2 Preparing the Training Script and Uploading It to OBS

Prepare the training script pytorch-verification.py and upload it to the obs://test-modelarts/pytorch/demo-code/ folder of the OBS bucket.

The pytorch-verification.py file contains the following information:

import torch
import torch.nn as nn

# Create a random tensor on the CPU and print it.
x = torch.randn(5, 3)
print(x)

# Move a tensor to the GPU if CUDA is available; otherwise stay on the CPU.
available_dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
y = torch.randn(5, 3).to(available_dev)
print(y)
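
Assuming the same obsutil setup as in Step 1, the upload can also be done from the command line. This is a sketch, run from the directory containing the script:

./obsutil cp ./pytorch-verification.py obs://test-modelarts/pytorch/demo-code/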

Step 3 Preparing a Host

Obtain a Linux x86_64 server running Ubuntu 18.04. Either an ECS or your local PC will do.

For details about how to purchase an ECS, see Purchasing and Logging In to a Linux ECS. Set CPU Architecture to x86 and Image to Public image. Ubuntu 18.04 images are recommended.
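
As a quick sanity check (not part of the original procedure), you can confirm that the host matches the required architecture and OS release:

uname -m          # expected output: x86_64
lsb_release -a    # the Description line should show Ubuntu 18.04.x LTS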

Step 4 Creating a Custom Image

Create a container image with the following configurations and use the image to create a training job on ModelArts:

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • pytorch-1.8.1

This section describes how to write a Dockerfile to create a custom image.

  1. Install Docker.

    The following uses the Linux x86_64 OS as an example to describe how to obtain the Docker installation package. For more details about how to install Docker, see the official Docker documentation. Run the following commands to install Docker:

    curl -fsSL get.docker.com -o get-docker.sh
    sh get-docker.sh

    If the docker images command already runs successfully, Docker is installed. In this case, skip this step.

  2. Run the following command to check the Docker Engine version:
    docker version | grep -A 1 Engine
    The following information is displayed:
    ...
    Engine:
      Version:          18.09.0
    NOTE:

    Use the Docker engine of the preceding version or later to create a custom image.

  3. Create a folder named context.
    mkdir -p context
  4. Obtain the pip.conf file. In this example, the pip source provided by Huawei Mirrors is used, which is as follows:
    [global]
    index-url = https://repo.huaweicloud.com/repository/pypi/simple
    trusted-host = repo.huaweicloud.com
    timeout = 120
    NOTE:

    To obtain pip.conf, go to Huawei Mirrors at https://mirrors.huaweicloud.com/home and search for pypi.

  5. Download the following torch*.whl files from https://download.pytorch.org/whl/torch_stable.html (a command-line download sketch is provided after this procedure):
    • torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    • torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    • torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl
    NOTE:

    The URL-encoded form of the + symbol is %2B. When searching for a file on the above website, replace the + symbol in the file name with %2B.

    For example, torch-1.8.1%2Bcu111-cp37-cp37m-linux_x86_64.whl.

  6. Download the Miniconda3-py37_4.12.0-Linux-x86_64.sh installation file (Python 3.7.13) from https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh.
  7. Store the pip.conf file, the torch*.whl files, and the Miniconda3 installation file in the context folder. The folder structure is as follows:
    context
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    ├── pip.conf
    ├── torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    ├── torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    └── torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl
  8. Write the container image Dockerfile.
    Create an empty file named Dockerfile in the context folder and copy the following content to the file:
    # The host must be connected to the public network for creating a container image.
    
    # Base container image at https://github.com/NVIDIA/nvidia-docker/wiki/CUDA
    # 
    # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
    # require Docker Engine >= 17.05
    #
    # builder stage
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04 AS builder
    
    # The default user of the base container image is root.
    # USER root
    
    # Use the PyPI configuration provided by Huawei Mirrors.
    RUN mkdir -p /root/.pip/
    COPY pip.conf /root/.pip/pip.conf
    
    # Copy the installation files to the /tmp directory in the base container image.
    COPY Miniconda3-py37_4.12.0-Linux-x86_64.sh /tmp
    COPY torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl /tmp
    COPY torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl /tmp
    COPY torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl /tmp
    
    # https://conda.io/projects/conda/en/latest/user-guide/install/linux.html#installing-on-linux
    # Install Miniconda3 to the /home/ma-user/miniconda3 directory of the base container image.
    RUN bash /tmp/Miniconda3-py37_4.12.0-Linux-x86_64.sh -b -p /home/ma-user/miniconda3
    
    # Install the torch*.whl files using pip from the default Miniconda3 Python environment (/home/ma-user/miniconda3/bin/pip).
    RUN cd /tmp && \
        /home/ma-user/miniconda3/bin/pip install --no-cache-dir \
        /tmp/torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl \
        /tmp/torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl \
        /tmp/torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    
    # Create the final container image.
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04
    
    # Install vim and cURL using the Ubuntu repositories mirrored at Huawei Mirrors.
    RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
        sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        apt-get update && \
        apt-get install -y vim curl && \
        apt-get clean && \
        mv /etc/apt/sources.list.bak /etc/apt/sources.list
    
    # Add user ma-user (UID = 1000, GID = 100).
    # The base container image already contains a user group whose GID is 100; user ma-user can use it directly.
    RUN useradd -m -d /home/ma-user -s /bin/bash -g 100 -u 1000 ma-user
    
    # Copy the /home/ma-user/miniconda3 directory from the builder stage to the directory with the same name in the current container image.
    COPY --chown=ma-user:100 --from=builder /home/ma-user/miniconda3 /home/ma-user/miniconda3
    
    # Configure the preset environment variables of the container image.
    # Set PYTHONUNBUFFERED to 1 to avoid log loss.
    ENV PATH=$PATH:/home/ma-user/miniconda3/bin \
        PYTHONUNBUFFERED=1
    
    # Set the default user and working directory of the container image.
    USER ma-user
    WORKDIR /home/ma-user

    For details about how to write a Dockerfile, see the official Docker documentation.

  9. Verify that the Dockerfile has been created. The following shows the context folder:
    context
    ├── Dockerfile
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    ├── pip.conf
    ├── torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    ├── torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    └── torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl
  10. Create the container image. Run the following command in the directory where the Dockerfile is stored to build the container image pytorch:1.8.1-cuda11.1:
    docker build . -t pytorch:1.8.1-cuda11.1
    
    If the following message is displayed at the end of the build, the image has been created:
    Successfully tagged pytorch:1.8.1-cuda11.1
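
As referenced in 5 above, the installation files can also be downloaded from the command line into the context folder. The Miniconda URL is the one given in 6; the wheel paths under download.pytorch.org are assumptions inferred from the links on torch_stable.html, so verify them before use:

cd context
# Miniconda3 installer (Python 3.7.13).
wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh
# PyTorch wheels; note the %2B encoding of the + symbol in the URLs (assumed paths).
wget https://download.pytorch.org/whl/cu111/torch-1.8.1%2Bcu111-cp37-cp37m-linux_x86_64.whl
wget https://download.pytorch.org/whl/cu111/torchvision-0.9.1%2Bcu111-cp37-cp37m-linux_x86_64.whl
wget https://download.pytorch.org/whl/torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl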

Step 5 Uploading an Image to SWR

  1. Log in to the SWR console and select a region. The region must be the same as the ModelArts region; otherwise, the image cannot be selected.
  2. Click Create Organization in the upper right corner and enter an organization name to create an organization. Replace the organization name deep-learning in subsequent commands with your actual organization name.
  3. Click Generate Login Command in the upper right corner to obtain the login command. In this example, the temporary login command is copied.
  4. Log in to the local environment as user root and enter the copied temporary login command.
  5. Upload the image to SWR.
    1. Run the following command to tag the uploaded image:
      # Replace the region and domain information with the actual values, and replace the organization name deep-learning with your custom value.
      sudo docker tag pytorch:1.8.1-cuda11.1 swr.{region-id}.{domain}/deep-learning/pytorch:1.8.1-cuda11.1
    2. Run the following command to upload the image:
      # Replace the region and domain information with the actual values, and replace the organization name deep-learning with your custom value.
      sudo docker push swr.{region-id}.{domain}/deep-learning/pytorch:1.8.1-cuda11.1
  6. After the image is uploaded, choose My Images in the navigation pane on the left of the SWR console to view the uploaded custom image.
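
For reference, with the example region ap-southeast-1 and the default Huawei Cloud domain, the tag and push commands would look like the sketch below. The region ID and domain here are assumptions for illustration; use the values shown in your own login command:

sudo docker tag pytorch:1.8.1-cuda11.1 swr.ap-southeast-1.myhuaweicloud.com/deep-learning/pytorch:1.8.1-cuda11.1
sudo docker push swr.ap-southeast-1.myhuaweicloud.com/deep-learning/pytorch:1.8.1-cuda11.1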

Step 6 Creating a Training Job on ModelArts

  1. Log in to the ModelArts management console and check whether access authorization has been configured for your account. For details, see Configuring Agency Authorization for ModelArts with One Click. If you have been authorized using access keys, clear the authorization and configure agency authorization.
  2. In the navigation pane on the left, choose Model Training > Training Jobs. The training job list is displayed by default.
  3. On the Create Training Job page, set the following parameters:
    • Created By: Custom algorithms
    • Boot Mode: Custom images
    • Image Path: the image created in Step 5 Uploading an Image to SWR.
    • Code Directory: directory where the boot script file is stored in OBS, for example, obs://test-modelarts/pytorch/demo-code/. The training code is automatically downloaded to the ${MA_JOB_DIR}/demo-code directory of the training container. demo-code (customizable) is the last-level directory of the OBS path.
    • Boot Command: /home/ma-user/miniconda3/bin/python ${MA_JOB_DIR}/demo-code/pytorch-verification.py. demo-code (customizable) is the last-level directory of the OBS path.
    • Resource Pool: Public resource pools
    • Resource Type: Select CPU or GPU.
    • Persistent Log Saving: enabled
    • Job Log Path: Set this parameter to the OBS path for storing training logs, for example, obs://test-modelarts/pytorch/log/.
  4. Check the parameters of the training job and click Submit.
  5. Wait until the training job is completed.

    After a training job is created, operations such as downloading the container image, downloading the code directory, and running the boot command are automatically performed in the backend. Generally, the training duration ranges from tens of minutes to several hours, depending on the training procedure and the selected resources. After the training job is executed, a log similar to the following is output.

    Figure 1 Run logs of training jobs with GPU specifications
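
    The exact tensor values are random, but the script prints two 5 x 3 tensors, and on GPU specifications the second tensor carries its CUDA device. Successful run logs should therefore resemble the following sketch (values will differ):

    tensor([[-0.5516,  0.6188,  1.0834],
            [-0.0863, -0.2846,  0.8933],
            [ 0.8457,  1.5444, -0.1219],
            [ 0.4764,  0.1583,  0.9556],
            [-0.8364,  0.3067, -0.0148]])
    tensor([[-0.2506,  0.2911, -0.4312],
            [ 0.1748, -1.4411,  0.5331],
            [-0.1149,  1.2135, -0.6136],
            [ 0.6335, -1.2718, -0.2558],
            [-1.9483, -1.7514,  0.3523]], device='cuda:0')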
