Updated on 2024-10-29 GMT+08:00

Creating a Custom Training Image Using a Preset Image

Principles

If you use a preset image to create a training job but need to modify or add software dependencies, you can create a custom image based on that preset image. In this case, on the training job creation page, select a preset image and choose Customize from the framework version drop-down list.

This method follows the same process as creating a training job directly from a preset image. For example:

  • The system automatically injects environment variables, as shown below (a quick local check of the image environment is sketched after this list):
    • PATH=${MA_HOME}/anaconda/bin:${PATH}
    • LD_LIBRARY_PATH=${MA_HOME}/anaconda/lib:${LD_LIBRARY_PATH}
    • PYTHONPATH=${MA_JOB_DIR}:${PYTHONPATH}
  • The selected boot file is automatically started with Python commands, so ensure that the Python environment is correct. The PATH environment variable is automatically injected. Run either of the following commands to check the Python version used by the training job:
    • export MA_HOME=/home/ma-user; docker run --rm {image} ${MA_HOME}/anaconda/bin/python -V
    • docker run --rm {image} python -V
  • The system automatically adds hyperparameters associated with the preset image.
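The variables listed above are injected when ModelArts starts the training job. As a quick local check of what your image itself already defines (for example, the PATH set in your Dockerfile), you can print the image environment; a sketch, assuming the image provides the standard env command and using the same {image} placeholder as above:

    docker run --rm {image} env | grep PATH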

Creating a Training Image Using a Preset Image

ModelArts provides deep learning base images such as TensorFlow, PyTorch, and MindSpore images. The software required to run training jobs is preinstalled in these images. If the software in the base images cannot meet your service requirements, create new images based on the base images and use the new images to create training jobs.

Perform the following operations to create an image using a training base image:

  1. Install Docker. If the docker images command runs successfully, Docker is already installed. In this case, skip this step.

    The following uses Linux x86_64 as an example. Run the following commands to download the Docker installation script and install Docker:

    curl -fsSL get.docker.com -o get-docker.sh
    sh get-docker.sh
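    After the installation completes, you can confirm that Docker is working with the check mentioned above:

    docker images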
  2. Create a folder named context.
    mkdir -p context
  3. Obtain the pip.conf file and save it in the context folder. This example uses the pip source provided by the Huawei Cloud mirror (repo.huaweicloud.com):
    [global]
    index-url = https://repo.huaweicloud.com/repository/pypi/simple
    trusted-host = repo.huaweicloud.com
    timeout = 120
  4. Create a Dockerfile based on a training base image provided by ModelArts and save it in the context folder. For details about how to obtain a training base image, see Preset Dedicated Images for Training.
    FROM {Path to the training base image provided by ModelArts}
    
    # Configure pip.
    RUN mkdir -p /home/ma-user/.pip/
    COPY --chown=ma-user:ma-group pip.conf /home/ma-user/.pip/pip.conf
    
    # Configure the preset environment variables of the container image.
    # Add the Python interpreter path to the PATH environment variable.
    # Set PYTHONUNBUFFERED to 1 to prevent log loss.
    ENV PATH=${ANACONDA_DIR}/envs/${ENV_NAME}/bin:$PATH \
        PYTHONUNBUFFERED=1
    
    RUN /home/ma-user/anaconda/bin/pip install --no-cache-dir numpy
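    At this point, the context folder is expected to contain only the two files referenced above, for example:

    context
    ├── Dockerfile
    └── pip.conf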
  5. Create an image. Run the following command in the directory where the Dockerfile is stored to build the container image training:v1:
    docker build . -t training:v1
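    After the build completes, you can confirm that the image was created, for example:

    docker images | grep training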
  6. Upload the new image to SWR.
    1. Log in to the SWR console and select the target region.
    2. Click Create Organization in the upper right corner and enter an organization name. In this case, deep-learning is used as an example. Replace it in subsequent commands with the actual organization name.
    3. Click Generate Login Command in the upper right corner to obtain a login command. Log in to ECS as user root and enter the login command.
      Figure 1 Login command executed on ECS
    4. Run the docker tag command to add a tag to the image to be uploaded. deep-learning is used as an example organization; replace it and <Region> with your actual values in subsequent commands.
      sudo docker tag training:v1 swr.<Region>.myhuaweicloud.com/deep-learning/training:v1
    5. Run the docker push command to upload the image.
      sudo docker push swr.<Region>.myhuaweicloud.com/deep-learning/training:v1
    6. After the image is uploaded, choose My Images in the navigation pane on the left of the SWR console to view the uploaded custom image.

      SWR URL of the custom image: swr.<Region>.myhuaweicloud.com/deep-learning/training:v1
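      Optionally, you can verify that the image is accessible by pulling it back from SWR with the URL above (using the example organization name):

      sudo docker pull swr.<Region>.myhuaweicloud.com/deep-learning/training:v1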

  7. Create a training job on ModelArts.
    1. Log in to the ModelArts console.
    2. In the navigation pane on the left, choose Model Training > Training Jobs.
    3. Click Create Training Job. On the displayed page, configure the parameters by referring to Table 1. For details about the parameters, see Creating a Production Training Job.
      Table 1 Creating a training job

      • Algorithm Type: Mandatory. Select Custom algorithm.
      • Boot Mode: Mandatory. Select Preset image and choose the required framework and engine version. In this case, choose Customize for the engine version.
      • Image: Select the image uploaded to SWR as the container image.
      • Code Directory: Mandatory. Select the OBS directory where the training code file is stored.
        • Upload the code to the OBS bucket beforehand. The total size of files in the directory cannot exceed 5 GB, the number of files cannot exceed 1,000, and the folder depth cannot exceed 32.
        • The training code is automatically downloaded to the ${MA_JOB_DIR}/demo-code directory of the training container when the training job starts, where demo-code is the last-level OBS directory of the code. For example, if Code Directory is set to /test/code, the training code is downloaded to the ${MA_JOB_DIR}/code directory of the training container.
      • Boot File: Mandatory. Select the Python boot script of the training job from the code directory. ModelArts supports only boot files written in Python, so the boot file must end with .py. (A sketch of how the boot file is launched follows this table.)
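      For reference, here is a minimal sketch of how the boot file is launched inside the training container, using the /test/code example above and a hypothetical boot file name train.py:

        # Hypothetical example: Code Directory is /test/code and the boot file is train.py.
        # ModelArts starts the boot file with the Python interpreter on the injected PATH,
        # roughly equivalent to:
        python ${MA_JOB_DIR}/code/train.py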