Example: Creating a Training Job Using a Custom Image
This section describes how to use a custom image to train a model with the training module of the old version, which is available only to its existing users. For details about how to use custom images in training of the new version, see Using a Custom Image to Train Models (New-Version Training).
The files required in this example are stored in GitHub. This example uses the MNIST dataset downloaded from the MNIST official website.
- mnist_softmax.py: standalone training script
Creating and Uploading a Custom Image
In this example, a Dockerfile is used to build the custom image.
A Linux x86_64 host is used here. You can purchase an ECS of the same specifications or use an existing local host to create the custom image.
- Install Docker. For details, see https://docs.docker.com/engine/install/binaries/#install-static-binaries.
The following uses the Linux x86_64 OS as an example. Run the following commands to download and run the Docker installation script:
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
If the docker images command is successfully executed, Docker has been installed. In this case, skip this step.
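For example, you can check whether Docker is already installed and working before running the installation script (the version string in the output will vary with your environment):
docker --version
docker images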
- Obtain a basic image.
A custom image used for a training job must be built based on a basic image. For details about the format of basic image names, see Overview of a Basic Image Package. Run the following command to obtain a basic image for custom images:
docker pull swr.<region>.myhuaweicloud.com/<image org>/<image name>
In addition, you can run the docker images command to view the local image list.
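For example, to pull the basic image that the Dockerfile in this example is built on (the cn-north-4 region is assumed; replace the region, organization, and image name with the ones you actually use):
docker pull swr.cn-north-4.myhuaweicloud.com/modelarts-job-dev-image/custom-base-cuda10.0-cp36-ubuntu18.04-x86:1.1
docker images   # the basic image should now appear in the local image list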
- Compile a Dockerfile for building a custom image.
This example uses a TensorFlow 1.13.2 image. The file name is tf-1.13.2.dockerfile. Run the vi tf-1.13.2.dockerfile command to create and edit the Dockerfile.
For details about how to compile Dockerfile, see Dockerfile Reference.
FROM swr.cn-north-4.myhuaweicloud.com/modelarts-job-dev-image/custom-base-cuda10.0-cp36-ubuntu18.04-x86:1.1

# Configure the HUAWEI CLOUD source and install TensorFlow.
RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
    sed -i "s@http://.*archive.ubuntu.com@http://repo.myhuaweicloud.com@g" /etc/apt/sources.list && \
    sed -i "s@http://.*security.ubuntu.com@http://repo.myhuaweicloud.com@g" /etc/apt/sources.list && \
    pip install --trusted-host https://repo.huaweicloud.com -i https://repo.huaweicloud.com/repository/pypi/simple tensorflow==1.13.2

# Configure environment variables.
ENV PATH=/root/miniconda3/bin/:$PATH
- Create a custom image.
In the following example, the image is in the cn-north-4 region and belongs to the deep-learning-diy organization. Run the following command in the directory where the tf-1.13.2.dockerfile file resides:
docker build -f tf-1.13.2.dockerfile . -t swr.cn-north-4.myhuaweicloud.com/deep-learning-diy/tf-1.13.2:latest
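Optionally, verify the image locally before pushing it. The following checks are only a suggested sketch; they assume that the python interpreter on the image's default PATH is the one TensorFlow was installed into:
docker images | grep tf-1.13.2
docker run --rm swr.cn-north-4.myhuaweicloud.com/deep-learning-diy/tf-1.13.2:latest python -c "import tensorflow as tf; print(tf.__version__)"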
- Push the image to SWR. For details about how to upload an image, see Software Repository for Container User Guide.
The prerequisite is that you have created an organization and obtained the SWR login command. In the following example, the image is in the cn-north-4 region and belongs to the deep-learning-diy organization. Run the following command to push the image to SWR:
docker push swr.cn-north-4.myhuaweicloud.com/deep-learning-diy/tf-1.13.2:latest
swr.cn-north-4.myhuaweicloud.com/deep-learning-diy/tf-1.13.2:latest is the SWR URL of the custom image.
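For reference, the full push sequence typically looks like the following. The docker login command shown here is only a placeholder; obtain the actual temporary login command, including your access credentials, from the SWR console:
docker login -u <login user> -p <login key> swr.cn-north-4.myhuaweicloud.com
docker push swr.cn-north-4.myhuaweicloud.com/deep-learning-diy/tf-1.13.2:latest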
Standalone Training
- Upload the training code mnist_softmax.py and the training data to OBS. Store the code and data in the code root directory so that they can be directly downloaded to the container (example upload commands are given after this list).
The root directory obs://deep-learning/new/mnist/ is used as an example.
The training code file is stored in obs://deep-learning/new/mnist/.
The data is stored in obs://deep-learning/new/mnist/mnist_data.
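If you use the obsutil command-line tool (assuming it is installed and configured with your access keys), the upload can look like the following sketch:
obsutil cp ./mnist_softmax.py obs://deep-learning/new/mnist/
obsutil cp ./mnist_data obs://deep-learning/new/mnist/mnist_data -r -f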
- Create a training job using a custom image. Set Data Storage Location and Training Output Path based on site requirements. Set Image Path, Code Directory, and Boot Command as follows:
- Image Path: Enter the SWR URL of the uploaded image.
- Code Directory: Enter the OBS path for storing the training code, that is, the code root directory used in the previous step.
Before a training job is started, ModelArts automatically recursively downloads all content in the code directory to the local path of the container. The local path of the container is /home/work/user-job-dir/${Last level of the code root directory}/. For example, if Code Directory is set to obs://deep-learning/new/mnist, the local path is /home/work/user-job-dir/mnist/, and the code boot file is /home/work/user-job-dir/mnist/mnist_softmax.py.
- Boot Command: bash /home/work/run_train.sh python /home/work/user-job-dir/mnist/mnist_softmax.py --data_url /home/work/user-job-dir/mnist/mnist_data
/home/work/user-job-dir/mnist/mnist_softmax.py is the code boot file, and --data_url /home/work/user-job-dir/mnist/mnist_data is the data storage path.
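Optionally, you can roughly reproduce this layout locally to check that the script starts before submitting the job. The following is only a sketch: it assumes the code and data have been copied to ./mnist on the Docker host, and it skips the ModelArts wrapper script /home/work/run_train.sh:
docker run --rm -v $(pwd)/mnist:/home/work/user-job-dir/mnist \
  swr.cn-north-4.myhuaweicloud.com/deep-learning-diy/tf-1.13.2:latest \
  python /home/work/user-job-dir/mnist/mnist_softmax.py --data_url /home/work/user-job-dir/mnist/mnist_data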
- After the training job is created, ModelArts downloads the code directory, checks the custom image, and runs the training job in the background. Training jobs generally run for several minutes to tens of minutes, depending on the amount of data and the resources you select. After the program runs successfully, a log similar to the following is output:
Figure 1 Run log information