Updated on 2024-03-05 GMT+08:00

Example: Creating a Custom Image for Training (PyTorch + CPU/GPU)

This section describes how to create an image and use the image for training on the ModelArts platform. The AI engine used for training is PyTorch, and the resources are CPUs or GPUs.

This section applies only to training jobs of the new version.

Scenarios

In this example, you create a custom image by writing a Dockerfile on a Linux x86_64 host running the Ubuntu 18.04 operating system.

Objective: Build a container image containing the following software and use the image for CPU/GPU training on ModelArts.

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • pytorch-1.8.1

Procedure

Before using a custom image to create a training job, you must be familiar with Docker and have container development experience. The detailed procedure is as follows:

  1. Prerequisites
  2. Step 1 Creating an OBS Bucket and Folder
  3. Step 2 Preparing the Training Script and Uploading It to OBS
  4. Step 3 Preparing a Host
  5. Step 4 Creating a Custom Image
  6. Step 5 Uploading an Image to SWR
  7. Step 6 Creating a Training Job on ModelArts

Prerequisites

You have registered a Huawei ID and enabled Huawei Cloud services, and the account is not in arrears or frozen.

Step 1 Creating an OBS Bucket and Folder

Create a bucket and folders in OBS for storing the sample dataset and training code. Table 1 lists the folders to be created. Replace the bucket name and folder names in the example with actual names.

For details about how to create an OBS bucket and folder, see Creating a Bucket and Creating a Folder.

Ensure that the OBS directory you use and ModelArts are in the same region.

Table 1 Folders to create

    Name                                        Description
    obs://test-modelarts/pytorch/demo-code/     Stores the training script.
    obs://test-modelarts/pytorch/log/           Stores training log files.

Step 2 Preparing the Training Script and Uploading It to OBS

Prepare the training script pytorch-verification.py and upload it to the obs://test-modelarts/pytorch/demo-code/ folder of the OBS bucket.

The pytorch-verification.py file contains the following code:

import torch
import torch.nn as nn

# Create a random tensor on the CPU and print it.
x = torch.randn(5, 3)
print(x)

# Use a GPU if one is available; otherwise fall back to the CPU.
available_dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
y = torch.randn(5, 3).to(available_dev)
print(y)

Step 3 Preparing a Host

Obtain a Linux x86_64 server running Ubuntu 18.04. Either an ECS or your local PC will do.

For details about how to purchase an ECS, see Purchasing and Logging In to a Linux ECS. Select a public image. An Ubuntu 18.04 image is recommended.
Figure 1 Creating an ECS using a public image (x86)

Step 4 Creating a Custom Image

Create a container image with the following configurations and use the image to create a training job on ModelArts:

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • pytorch-1.8.1

This section describes how to write a Dockerfile to create a custom image.

  1. Install Docker.

    The following uses the Linux x86_64 OS as an example to describe how to obtain the Docker installation package. For more details about how to install Docker, see the official Docker documentation.

    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh

    If the docker images command runs successfully, Docker is already installed. In this case, skip this step.

  2. Run the following command to check the Docker Engine version:
    docker version | grep -A 1 Engine
    The following information is displayed:
    ...
    Engine:
      Version:          18.09.0

    Use Docker Engine of this version or later to create the custom image.

  3. Create a folder named context.
    mkdir -p context
  4. Obtain the pip.conf file. In this example, the pip source provided by Huawei Mirrors is used, which is as follows:
    [global]
    index-url = https://repo.huaweicloud.com/repository/pypi/simple
    trusted-host = repo.huaweicloud.com
    timeout = 120

    To obtain the pip.conf file, visit Huawei Mirrors (https://mirrors.huaweicloud.com/home) and search for pypi.
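A quick, optional way to confirm that pip will pick up this configuration is to point pip at the file through the standard PIP_CONFIG_FILE environment variable. This is a minimal sketch; it assumes python3 with pip is installed on the host and writes the file to /tmp only for the check:

```shell
# Write the pip.conf shown above to a temporary location (self-contained check),
# then ask pip to print the configuration it loads from that file.
cat > /tmp/pip.conf <<'EOF'
[global]
index-url = https://repo.huaweicloud.com/repository/pypi/simple
trusted-host = repo.huaweicloud.com
timeout = 120
EOF
PIP_CONFIG_FILE=/tmp/pip.conf python3 -m pip config list
# The output should include a line such as:
# global.index-url='https://repo.huaweicloud.com/repository/pypi/simple'
```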

  5. Download the following .whl files from https://download.pytorch.org/whl/torch_stable.html:
    • torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    • torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    • torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl

    The URL-encoded form of the + symbol is %2B. When searching for a file on the website above, replace the + symbol in the file name with %2B, for example, torch-1.8.1%2Bcu111-cp37-cp37m-linux_x86_64.whl.
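The encoded form can be checked with Python's standard urllib, which shows the exact string to use when searching:

```shell
# Percent-encode the wheel file name; only the + needs escaping (+ -> %2B).
python3 -c 'from urllib.parse import quote; print(quote("torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl"))'
# -> torch-1.8.1%2Bcu111-cp37-cp37m-linux_x86_64.whl
```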

  6. Download the Miniconda3-py37_4.12.0-Linux-x86_64.sh installation file (Python 3.7.13) from https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh.
  7. Store the pip source file, torch*.whl files, and Miniconda3 installation file in the context folder. The folder structure is as follows:
    context
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    ├── pip.conf
    ├── torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    ├── torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    └── torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl
  8. Write the container image Dockerfile.
    Create an empty file named Dockerfile in the context folder and copy the following content to the file:
    # The host must be connected to the public network for creating a container image.
    
    # Base container image at https://github.com/NVIDIA/nvidia-docker/wiki/CUDA
    # 
    # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
    # require Docker Engine >= 17.05
    #
    # builder stage
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04 AS builder
    
    # The default user of the base container image is root.
    # USER root
    
    # Use the PyPI configuration provided by Huawei Mirrors.
    RUN mkdir -p /root/.pip/
    COPY pip.conf /root/.pip/pip.conf
    
    # Copy the installation files to the /tmp directory in the base container image.
    COPY Miniconda3-py37_4.12.0-Linux-x86_64.sh /tmp
    COPY torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl /tmp
    COPY torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl /tmp
    COPY torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl /tmp
    
    # https://conda.io/projects/conda/en/latest/user-guide/install/linux.html#installing-on-linux
    # Install Miniconda3 to the /home/ma-user/miniconda3 directory of the base container image.
    RUN bash /tmp/Miniconda3-py37_4.12.0-Linux-x86_64.sh -b -p /home/ma-user/miniconda3
    
    # Install torch*.whl using the default Miniconda3 Python environment in /home/ma-user/miniconda3/bin/pip.
    RUN cd /tmp && \
        /home/ma-user/miniconda3/bin/pip install --no-cache-dir \
        /tmp/torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl \
        /tmp/torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl \
        /tmp/torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    
    # Create the final container image.
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04
    
    # Install vim and cURL using the apt sources from Huawei Mirrors.
    RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
        sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        apt-get update && \
        apt-get install -y vim curl && \
        apt-get clean && \
        mv /etc/apt/sources.list.bak /etc/apt/sources.list
    
    # Add user ma-user (UID = 1000, GID = 100).
    # The base container image already contains a user group with GID 100, so ma-user can use it directly.
    RUN useradd -m -d /home/ma-user -s /bin/bash -g 100 -u 1000 ma-user
    
    # Copy the /home/ma-user/miniconda3 directory from the builder stage to the directory with the same name in the current container image.
    COPY --chown=ma-user:100 --from=builder /home/ma-user/miniconda3 /home/ma-user/miniconda3
    
    # Configure the preset environment variables of the container image.
    # Set PYTHONUNBUFFERED to 1 to avoid log loss.
    ENV PATH=$PATH:/home/ma-user/miniconda3/bin \
        PYTHONUNBUFFERED=1
    
    # Set the default user and working directory of the container image.
    USER ma-user
    WORKDIR /home/ma-user

    For details about how to write a Dockerfile, see official Docker documents.

  9. Verify that the Dockerfile has been created. The following shows the context folder:
    context
    ├── Dockerfile
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    ├── pip.conf
    ├── torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    ├── torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    └── torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl
  10. Create the container image. Run the following command in the directory where the Dockerfile is stored to build the container image pytorch:1.8.1-cuda11.1:
    docker build . -t pytorch:1.8.1-cuda11.1
    
    If the following log information is displayed during image creation, the image has been created:
    Successfully tagged pytorch:1.8.1-cuda11.1
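Before uploading the image, you can optionally smoke-test it on the build host. The command below is a sketch that assumes the image was built as pytorch:1.8.1-cuda11.1; it runs the Miniconda3 Python inside the container and imports torch:

```shell
# Run the image's Python and confirm that PyTorch imports and reports its version.
docker run --rm pytorch:1.8.1-cuda11.1 \
    /home/ma-user/miniconda3/bin/python -c "import torch; print(torch.__version__)"
```

On a CPU-only host this should still print the installed version; no GPU is required just to import torch.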

Step 5 Uploading an Image to SWR

  1. Log in to the SWR console and select the target region.
    Figure 2 SWR console
  2. Click Create Organization in the upper right corner and enter an organization name. Replace the organization name deep-learning in subsequent commands with your actual organization name.
    Figure 3 Creating an organization
  3. Click Generate Login Command in the upper right corner to obtain a login command.
    Figure 4 Login Command
  4. Log in to the local environment as the root user and enter the login command.
  5. Upload the image to SWR.
    1. Run the following command to tag the uploaded image:
      # Replace the region and domain information with the actual values, and replace the organization name deep-learning with your custom value.
      sudo docker tag pytorch:1.8.1-cuda11.1 swr.{region-id}.{domain}/deep-learning/pytorch:1.8.1-cuda11.1
    2. Run the following command to upload the image:
      # Replace the region and domain information with the actual values, and replace the organization name deep-learning with your custom value.
      sudo docker push swr.{region-id}.{domain}/deep-learning/pytorch:1.8.1-cuda11.1
  6. After the image is uploaded, choose My Images in the navigation pane on the left of the SWR console to view the uploaded custom image.
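The SWR image path used in the tag and push commands follows a fixed pattern. The sketch below shows how it is composed; the region ID and domain are illustrative placeholders (take the real values from the login command generated by the console):

```shell
REGION_ID="cn-north-4"        # placeholder region ID; replace with yours
DOMAIN="myhuaweicloud.com"    # placeholder domain; replace with yours
ORG="deep-learning"           # your organization name
IMAGE="pytorch:1.8.1-cuda11.1"
echo "swr.${REGION_ID}.${DOMAIN}/${ORG}/${IMAGE}"
# -> swr.cn-north-4.myhuaweicloud.com/deep-learning/pytorch:1.8.1-cuda11.1
```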

Step 6 Creating a Training Job on ModelArts

  1. Log in to the ModelArts management console and check whether access authorization has been configured for your account. For details, see Configuring Agency Authorization. If you have been authorized using access keys, clear the authorization and configure agency authorization.
  2. In the navigation pane, choose Training Management > Training Jobs. The training job list is displayed by default.
  3. On the Create Training Job page, set required parameters and click Submit.
    • Created By: Custom algorithms
    • Boot Mode: Custom images
    • Image Path: the image uploaded in Step 5 Uploading an Image to SWR.
    • Code Directory: directory where the boot script file is stored in OBS, for example, obs://test-modelarts/pytorch/demo-code/. The training code is automatically downloaded to the ${MA_JOB_DIR}/demo-code directory of the training container. demo-code (customizable) is the last-level directory of the OBS path.
    • Boot Command: /home/ma-user/miniconda3/bin/python ${MA_JOB_DIR}/demo-code/pytorch-verification.py. demo-code (customizable) is the last-level directory of the OBS path.
    • Resource Pool: Public resource pools
    • Resource Type: Select CPU or GPU.
    • Persistent Log Saving: enabled
    • Job Log Path: Set this parameter to the OBS path for storing training logs, for example, obs://test-modelarts/pytorch/log/.
  4. Check the parameters of the training job and click Submit.
  5. Wait until the training job is completed.

    After a training job is created, operations such as downloading the container image, downloading the code directory, and running the boot command are automatically performed in the backend. Generally, training takes from tens of minutes to several hours, depending on the training procedure and the selected resources. After the training job is executed, a log similar to the following is output.

    Figure 5 Run logs of training jobs with GPU specifications
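The relationship between the Code Directory, MA_JOB_DIR, and the boot command path described above can be sketched as follows. The MA_JOB_DIR value shown is a placeholder for illustration only; inside a real training container, the platform sets this environment variable:

```shell
# The last-level directory of the OBS code path becomes a subdirectory of MA_JOB_DIR.
OBS_CODE_DIR="obs://test-modelarts/pytorch/demo-code/"
LAST_DIR=$(basename "${OBS_CODE_DIR}")      # -> demo-code
MA_JOB_DIR="/tmp/ma-job-dir"                # placeholder; the platform sets the real value
echo "${MA_JOB_DIR}/${LAST_DIR}/pytorch-verification.py"
# -> /tmp/ma-job-dir/demo-code/pytorch-verification.py
```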